A Comparative Investigation of Zero-shot Prompting and Fine-tuning for Clinical Note Summarization
Abstract
Background: This study aims to use large language models (LLMs) to develop and evaluate techniques for automated discharge summary generation, with the long-term goal of reducing clinician documentation burden, improving workflow efficiency, and enhancing the accuracy and completeness of patient records.

Methods: We used structured and unstructured inputs from MIMIC-IV (including chief complaints, ICD-9/10 codes, radiology reports, and non-target discharge summary sections) to generate two target sections: the Brief Hospital Course (BHC) and the Discharge Instructions (DI). Reference sections were obtained from the Discharge Me! shared task. We evaluated instruction-tuned LLMs (Phi-4, LLaMA-3.1-8B, Mistral-7B, Gemma-2-9B, and Gemma-3-12B) under supervised fine-tuning and zero-shot prompting, each in full-input and reduced-input settings. Performance was measured using eight metrics (BLEU-4, ROUGE-1/2/L, BERTScore, METEOR, AlignScore, and MEDCON), with additional analyses of the effects of input truncation and output length.

Results: Fine-tuned models with full input outperformed all other configurations across evaluation settings. Gemma-2 achieved the highest overall score (0.307), closely followed by LLaMA-3.1 (0.306). Zero-shot models performed substantially worse than their fine-tuned counterparts, with the highest zero-shot score (0.21) obtained by Gemma-3 using both full and truncated inputs. Truncating the input reduced the average context length by approximately 50% while yielding competitive performance, resulting in less than a 2% degradation under fine-tuning and nearly identical performance in the zero-shot setting. Analysis of generation length revealed that performance declined beyond a certain character threshold.

Conclusion: Fine-tuning large language models on full input outperformed other approaches. Input truncation reduced context length and computational cost with minimal impact on generation quality. We observed occasional generation artifacts, such as repeated phrases, in fine-tuned outputs.
Restricting our analysis to instruction-tuned models (≤ 14B parameters), we observed competitive performance across experimental settings under comparable hyperparameter configurations.
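To illustrate the n-gram overlap family of metrics listed in the Methods (ROUGE-1/2/L, BLEU-4), the sketch below computes a simplified ROUGE-1 F1 as clipped unigram overlap between a reference and a candidate summary. This is a pedagogical simplification, not the official `rouge_score` implementation used in shared-task evaluations (which additionally applies stemming and tokenization rules); the example strings are hypothetical.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: clipped unigram overlap over whitespace tokens."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clip: each candidate token counts at most as often as it appears in the reference.
    overlap = sum(min(count, ref_counts[tok]) for tok, count in cand_counts.items())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference vs. generated discharge-instruction fragment.
score = rouge1_f1(
    "patient discharged home in stable condition",
    "patient was discharged home stable",
)
print(f"ROUGE-1 F1: {score:.3f}")
```

ROUGE-2 and ROUGE-L follow the same precision/recall/F1 pattern but count bigram matches and longest-common-subsequence length, respectively; semantic metrics such as BERTScore, AlignScore, and MEDCON instead compare embeddings or extracted medical concepts rather than surface n-grams.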