Optimizing discharge summary generation: fine-tuning LLMs by DoRA and iterative self-evaluation for enhanced medical text generation
Abstract
Background: The generation of discharge summaries in healthcare is crucial but labor-intensive.

Purpose: This study introduces a novel framework that enhances the automation of this process using advanced natural language processing (NLP) techniques.

Methods: We combine the Weight-Decomposed Low-Rank Adaptation (DoRA) fine-tuning method with a self-evaluation mechanism to improve large language models (LLMs) tailored for medical text generation. DoRA efficiently adapts pre-trained LLMs to the specialized medical domain, outperforming methods such as LoRA and QLoRA across multiple metrics. We further incorporate a self-evaluation mechanism inspired by principles of cognitive psychology, enabling iterative refinement of model outputs to enhance accuracy and completeness.

Results: We rigorously compare this approach against popular few-shot prompting and Chain-of-Thought (CoT) methods. Extensive experiments yield significant improvements in both quantitative (ROUGE, BLEU, BERTScore) and qualitative (accuracy, completeness, relevance, consistency, and utility) assessments relative to baseline techniques.

Conclusions: Our results demonstrate substantial enhancements in the quality and consistency of generated discharge summaries while markedly reducing the time required for their creation. This research underscores the potential of AI-driven tools in healthcare documentation, and the findings indicate promising prospects for automating medical documentation to high standards of accuracy and relevance.
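The iterative self-evaluation described in the Methods can be sketched as a generate-critique-revise loop. The snippet below is a minimal illustration, not the authors' implementation: `generate` and `critique` are hypothetical placeholders for LLM calls, and the toy stand-ins merely mimic a model improving after feedback.

```python
def refine_summary(generate, critique, note, max_rounds=3, threshold=0.9):
    """Iteratively regenerate a discharge summary until the model's
    self-assessed quality score passes a threshold or rounds run out."""
    draft = generate(note, feedback=None)
    for _ in range(max_rounds):
        score, feedback = critique(note, draft)    # model grades its own draft
        if score >= threshold:
            break
        draft = generate(note, feedback=feedback)  # revise using the critique
    return draft

# Toy stand-ins (hypothetical) that mimic improvement with feedback:
def toy_generate(note, feedback=None):
    return note + " [revised]" if feedback else note + " [draft]"

def toy_critique(note, draft):
    return (1.0, "") if "[revised]" in draft else (0.5, "add medications list")

print(refine_summary(toy_generate, toy_critique, "Patient note"))
# → Patient note [revised]
```

In practice the critique step would prompt the LLM to score its own draft on dimensions such as accuracy and completeness, feeding the textual feedback back into the next generation round.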