Optimizing Positive Content Generation in Prompt-based Abstractive Summarization with Large Language Models
Abstract
The growing demand for automated summarization systems that produce content that is both factually accurate and emotionally aligned has driven interest in models that control tone without sacrificing content coherence. This work introduces a novel approach to fine-tuning text generation models, using prompt engineering to steer output toward positive sentiment while preserving the essential elements of the source text. By combining sentiment-aware prompts with an iterative fine-tuning process, the model balances the requirements of factual fidelity and emotional tone, producing summaries that are not only accurate but also aligned with user-specified emotional guidelines. Quantitative evaluations using ROUGE and BLEU scores, alongside sentiment analysis and diversity metrics, confirm that sentiment control enhances the linguistic richness and positivity of the generated summaries. The experiments also reveal the trade-offs involved in maintaining content fidelity while steering tone, highlighting the importance of prompt design in managing these competing priorities.
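To make the described setup concrete, the sketch below shows one way a sentiment-aware prompt might be paired with an instruction-following LLM and then scored for content overlap (ROUGE) and positivity (an off-the-shelf sentiment classifier). The prompt wording, the `generate_summary` placeholder, and the choice of libraries are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative sketch (not the paper's exact pipeline): build a sentiment-aware
# prompt, obtain a summary from any instruction-following LLM, then score it
# for content overlap (ROUGE) and positivity (sentiment classifier).
from rouge_score import rouge_scorer          # pip install rouge-score
from transformers import pipeline             # pip install transformers

# Hypothetical sentiment-aware prompt template.
SENTIMENT_PROMPT = (
    "Summarize the following article in 2-3 sentences. "
    "Keep every key fact, but phrase the summary with a positive, "
    "encouraging tone.\n\nArticle:\n{article}\n\nSummary:"
)

def build_prompt(article: str) -> str:
    """Insert the source text into the sentiment-aware prompt template."""
    return SENTIMENT_PROMPT.format(article=article)

def evaluate_summary(reference: str, generated: str) -> dict:
    """Score content fidelity (ROUGE) and emotional tone (sentiment)."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, generated)

    # Default sentiment-analysis pipeline (SST-2 fine-tuned DistilBERT).
    sentiment = pipeline("sentiment-analysis")(generated)[0]
    return {
        "rouge1_f": rouge["rouge1"].fmeasure,
        "rougeL_f": rouge["rougeL"].fmeasure,
        "sentiment_label": sentiment["label"],
        "sentiment_score": sentiment["score"],
    }

# Example usage, with a hypothetical generate_summary(prompt) standing in for
# whichever LLM is being prompted or fine-tuned:
#   prompt = build_prompt(article_text)
#   summary = generate_summary(prompt)
#   print(evaluate_summary(reference_summary, summary))
```

In this framing, the trade-off noted in the abstract shows up directly in the metrics: a more strongly worded positivity instruction tends to raise the sentiment score while lowering ROUGE overlap with the reference, which is what makes prompt design the key lever.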