An emotional support conversation (ESC) system aims to provide comforting and helpful responses that reduce the emotional intensity of users in distress. Developing ESC systems is challenging, however, because it requires both contextual understanding and nuanced evaluation. Existing approaches either compromise the dialog context by splitting whole dialogs into small blocks of a few utterances, or inject excessive external knowledge for emotional reasoning, which degrades qualities such as fluency. Striking the right balance between context size and external knowledge is therefore an important task for emotional response generation. To this end, we experiment with TinyLlama-1B [1], controlling the dialog context size and the type of external knowledge. We further pioneer rubric-based evaluation of ESC tasks with Prometheus [2], an evaluator on par with GPT-4 [3] that can assess long-form text against user-defined scoring rubrics and is more cost-effective than human evaluation. We find that including the whole dialog context is more effective, while adding more external knowledge decreases model performance on several evaluation metrics. In conclusion, this paper presents an analysis of emotional support conversation under varying context sizes and external knowledge, and provides a pipeline for generating responses as well as evaluating them against customized rubrics.
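To make the rubric-based evaluation step concrete, the following is a minimal sketch of scoring a generated ESC response with a Prometheus-style evaluator. The model ID, prompt template, and rubric wording here are illustrative assumptions for exposition, not the exact configuration used in this paper.

```python
# Minimal sketch: rubric-based scoring of an ESC response with a
# Prometheus-style evaluator. Model ID, prompt template, and rubric
# text are assumptions, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "prometheus-eval/prometheus-7b-v2.0"  # assumed evaluator checkpoint

# A hypothetical user-defined scoring rubric for emotional support quality.
RUBRIC = """[Does the response provide appropriate emotional support?]
Score 1: The response ignores the user's distress.
Score 3: The response acknowledges the distress but offers little comfort.
Score 5: The response validates the user's feelings and offers concrete,
comforting suggestions."""

# Simplified absolute-grading prompt: feedback first, then "[RESULT] <score>".
PROMPT = f"""###Task Description:
Given a dialog context, a response to evaluate, and a score rubric,
write brief feedback and then output a score from 1 to 5 as "[RESULT] <score>".

###Dialog Context:
{{context}}

###Response to Evaluate:
{{response}}

###Score Rubric:
{RUBRIC}

###Feedback:"""

def score_response(context: str, response: str) -> str:
    """Return the evaluator's feedback and score for one response."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(PROMPT.format(context=context, response=response),
                       return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens; the evaluator is expected
    # to end its output with "[RESULT] <score>".
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```

In this setup, swapping in a different rubric string is all that is needed to evaluate another dimension (e.g., fluency or empathy), which is what makes rubric-based evaluation attractive for ESC.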