Enhancing Privacy-Preserving Deployable Large Language Models for Perioperative Complication Detection: A Targeted Strategy with LoRA Fine-tuning
Abstract
Perioperative complications represent a major global health concern affecting millions of surgical patients annually, yet manual detection methods suffer from substantial under-reporting (27%) and misclassification. Clinical deployment of large language models (LLMs) for automated complication detection faces substantial barriers, including data sovereignty concerns, computational costs, and performance limitations in locally deployable models. Here we show that targeted prompt engineering combined with Low-Rank Adaptation (LoRA) fine-tuning can transform smaller open-source LLMs into expert-level diagnostic tools for perioperative complication detection. We conducted a dual-center validation study and developed a comprehensive framework enabling simultaneous identification and severity grading of 22 distinct perioperative complications. Initial evaluations revealed that state-of-the-art models, particularly reasoning models, consistently outperformed human experts. AI models maintained consistent performance across varying document complexity, whereas human performance declined with increasing clinical documentation length. Our targeted strategy, which decomposed comprehensive detection into focused single-complication assessments, significantly improved smaller model capabilities. Combined with LoRA fine-tuning, the 4B-parameter model's F1 score increased from 0.18 to 0.55, approaching human expert performance (F1=0.526), while the 8B model achieved F1>0.61. These results demonstrate that optimized smaller models can achieve expert-level diagnostic accuracy while enabling local deployment with preserved data sovereignty, offering a practical solution for resource-limited healthcare institutions to implement automated perioperative complication screening.
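To make the two techniques named in the abstract concrete, the sketch below shows how LoRA fine-tuning of a small open-source model and the targeted single-complication prompting strategy might be set up with the Hugging Face transformers and peft libraries. The base model name, LoRA hyperparameters, complication list, and prompt wording are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch, assuming Hugging Face transformers + peft are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "Qwen/Qwen2.5-3B-Instruct"  # stand-in for the paper's 4B model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# LoRA injects small trainable low-rank matrices into the attention
# projections while the base weights stay frozen, so fine-tuning and
# local deployment remain cheap. r and alpha here are assumed values.
model = get_peft_model(model, LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
))
model.print_trainable_parameters()  # typically well under 1% of base weights

# The "targeted strategy": rather than one prompt asking about all 22
# complications at once, issue one focused prompt per complication.
COMPLICATIONS = ["acute kidney injury", "surgical site infection"]  # excerpt

def targeted_prompts(clinical_record: str):
    """Yield one focused detection-and-grading prompt per complication."""
    for complication in COMPLICATIONS:
        yield (
            f"Read the perioperative record below and state whether the "
            f"patient developed {complication}. If yes, grade its severity."
            f"\n\n{clinical_record}"
        )
```

Decomposing the task this way keeps each query within the capacity of a small locally hosted model, at the cost of one inference pass per complication per record.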