Efficient and Responsible Transformer-Based Conversational Agents for Emotionally Supportive Dialogue
Abstract
Conversational agents designed for emotionally supportive interactions face challenges in balancing affective responsiveness, computational efficiency, and communication safety. Prior approaches frequently depend on large-scale models, handcrafted affective objectives, or reinforcement learning from human feedback, which can limit scalability and interpretability. This work presents a lightweight, domain-adapted dialogue generation system based on the T5-small architecture, fine-tuned on MentalChat16K, a curated corpus of real and synthetic emotional-support conversations. The proposed model operates without reinforcement learning or emotion-specific training objectives, yet demonstrates strong alignment with affective cues and high response fluency. Empirical evaluation shows improvements over zero-shot and fine-tuned GPT-2 baselines, achieving a BLEU score of 32.14, a ROUGE-L score of 44.72, and a BERTScore F1 of 85.11. Expert human assessment yields high ratings for coherence, emotional appropriateness, and contextual relevance, with substantial inter-rater agreement. Qualitative error analysis indicates conservative, context-aware responses with no hallucinations or unsafe content. The system is deployed via a browser-based Gradio interface supporting both CPU and GPU inference, with usage disclaimers and non-clinical positioning to ensure responsible deployment. This study demonstrates that compact transformer-based models, when adapted to domain-specific corpora and evaluated comprehensively, can serve as efficient, affectively competent conversational systems suitable for safe, large-scale deployment in emotionally supportive dialogue scenarios.
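To make the described pipeline concrete, the following is a minimal sketch of fine-tuning T5-small on an instruction-style emotional-support corpus and serving it through a browser-based Gradio interface. It assumes the Hugging Face Transformers, Datasets, and Gradio libraries; the dataset identifier, field names, prompt prefix, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of the pipeline described above, assuming the Hugging Face
# Transformers, Datasets, and Gradio libraries. The dataset identifier, field
# names ("instruction", "output"), prompt prefix, and hyperparameters are
# illustrative assumptions, not the authors' exact configuration.
import torch
import gradio as gr
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

MODEL_NAME = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# Hypothetical Hub identifier; the corpus may be hosted or formatted differently.
dataset = load_dataset("ShenLab/MentalChat16K")

def preprocess(batch):
    # Frame each user message as a T5 text-to-text task with a fixed prefix.
    model_inputs = tokenizer(
        ["respond empathetically: " + q for q in batch["instruction"]],
        max_length=512,
        truncation=True,
    )
    labels = tokenizer(text_target=batch["output"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_data = dataset["train"].map(
    preprocess, batched=True, remove_columns=dataset["train"].column_names
)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-small-mentalchat",
        per_device_train_batch_size=8,
        num_train_epochs=3,
        learning_rate=3e-4,
    ),
    train_dataset=train_data,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()

# Serve the fine-tuned model in the browser on CPU or GPU, with a
# non-clinical usage disclaimer, as described in the abstract.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

def respond(message: str) -> str:
    input_ids = tokenizer(
        "respond empathetically: " + message, return_tensors="pt"
    ).input_ids.to(device)
    output_ids = model.generate(input_ids, max_new_tokens=128, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

gr.Interface(
    fn=respond,
    inputs=gr.Textbox(label="Share what's on your mind"),
    outputs=gr.Textbox(label="Supportive response"),
    description="Research prototype; not a substitute for professional care.",
).launch()
```

The reported BLEU, ROUGE-L, and BERTScore metrics could then be computed on a held-out split with standard tooling such as the Hugging Face evaluate library; the exact evaluation protocol is given in the paper, not in this sketch.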