MulMed: Addressing Multiple Medical Tasks Utilizing Large Language Models
Abstract
The proliferation of large language models (LLMs) such as ChatGPT has underscored the urgent need to develop medical LLMs that can ease the burden on healthcare resources. This work introduces MulMed, a model that prioritizes multitasking in medical domains. MulMed aims to summarize complex medical texts, address patient inquiries, engage in medical question-answering dialogues, demonstrate cross-lingual proficiency, and offer comprehensive medical knowledge coverage. Its key contributions include a two-step fine-tuning framework that enables the model to perform multiple tasks, such as medical text summarization and Q&A, in both English and Chinese, with strong generalization on benchmark test sets. The model also exhibits human-like empathy in doctor-patient consultations, and its fine-tuning process and data are openly released to promote future research on cross-lingual medical models. In addition, a medical ethics framework is proposed to aid in evaluating the feasibility of medical model applications.
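To make the two-step fine-tuning idea concrete, the sketch below shows one plausible shape of such a pipeline using the Hugging Face transformers and datasets APIs: a first stage that adapts a backbone to broad bilingual medical text, followed by a second stage that specializes it on downstream tasks. The checkpoint name, toy examples, and hyperparameters are assumptions for illustration, not MulMed's released training code or data.

```python
# Illustrative two-stage fine-tuning sketch; all names and data are assumed,
# not taken from MulMed's actual release.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "gpt2"  # stand-in backbone; MulMed's base model may differ
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(checkpoint)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

def fine_tune_stage(model, texts, output_dir, epochs):
    """Run one causal-LM fine-tuning stage over a list of raw texts."""
    ds = Dataset.from_dict({"text": texts}).map(
        tokenize, batched=True, remove_columns=["text"])
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir,
                               num_train_epochs=epochs,
                               per_device_train_batch_size=2),
        train_dataset=ds,
        data_collator=collator,
    )
    trainer.train()
    return trainer.model

# Step 1: adapt the backbone to broad bilingual medical text.
model = fine_tune_stage(model,
                        ["Aspirin inhibits platelet aggregation ...",
                         "阿司匹林抑制血小板聚集 ..."],
                        "stage1", epochs=1)
# Step 2: specialize on downstream tasks (summarization, Q&A dialogue).
model = fine_tune_stage(model,
                        ["Question: What does aspirin do? Answer: ...",
                         "问题：阿司匹林有什么作用？回答：..."],
                        "stage2", epochs=1)
```

Staging the training this way reflects a common pattern for domain LLMs: the first pass builds general medical and bilingual coverage, while the second concentrates on task formats; whether MulMed splits its stages exactly along these lines is an assumption here.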