Towards Metacognitive Clinical Reasoning: Benchmarking MD-PIE Against State-of-the-Art LLMs in Medical Decision-Making

Abstract

The ability of large language models (LLMs) to perform clinical reasoning and cognitive tasks in medicine remains a critical measure of their decision-making capabilities, with significant implications for patient outcomes and healthcare efficiency. Current AI models often face limitations in real-world clinical environments, including variability in performance, a lack of domain-specific knowledge, and opaque, black-box reasoning processes. In this study, we introduce MD-PIE, a novel PIE framework that emulates cognitive and reasoning abilities in medical decision-making. We benchmark MD-PIE and baseline methods, both quantitatively and qualitatively, against state-of-the-art LLMs, including OpenAI’s o1, Gemini 2.0 Flash Thinking, and DeepSeek V3, across diverse medical benchmarks. Our results demonstrate that MD-PIE surpasses these models in differential diagnosis and reasoning accuracy. These findings underscore MD-PIE’s potential to improve clinical decision-making through its adaptive and collaborative design. Future research should focus on larger benchmarks and real-world validation to confirm its reliability and effectiveness across varied clinical scenarios.
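The abstract describes the evaluation only at a high level. As a minimal sketch of how such a head-to-head accuracy comparison on a multiple-choice medical benchmark might be wired up, the Python below scores interchangeable answerer callables against gold labels; the `Item` structure, the `first_option_baseline` stand-in, and the scoring rule are illustrative assumptions, not the authors' actual harness or the real MD-PIE, o1, Gemini, or DeepSeek interfaces.

```python
# Hypothetical benchmark harness: compares answerer callables on a
# multiple-choice medical QA set and reports per-model accuracy.
# Real models would be wrapped as Callable[[Item], str] adapters.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Item:
    question: str
    options: dict[str, str]   # option key -> option text, e.g. {"A": "..."}
    answer: str               # gold option key, e.g. "B"

def accuracy(model: Callable[[Item], str], items: list[Item]) -> float:
    """Fraction of items where the model's chosen option key matches gold."""
    correct = sum(1 for item in items if model(item) == item.answer)
    return correct / len(items)

# Toy stand-in answerer: always picks the first listed option.
def first_option_baseline(item: Item) -> str:
    return next(iter(item.options))

if __name__ == "__main__":
    items = [
        Item("Most likely diagnosis for acute chest pain radiating to the "
             "back with unequal arm blood pressures?",
             {"A": "Aortic dissection", "B": "Pulmonary embolism"},
             "A"),
        Item("First-line treatment for anaphylaxis?",
             {"A": "IV corticosteroids", "B": "IM epinephrine"},
             "B"),
    ]
    models = {"first-option baseline": first_option_baseline}
    for name, fn in models.items():
        print(f"{name}: accuracy = {accuracy(fn, items):.2f}")
```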
