Evaluating Language Models for Biomedical Fact-Checking: A Benchmark Dataset for Cancer Variant Interpretation Verification
Abstract
Accurate interpretation of genomic variants is critical for precision oncology but remains slow and dependent on specialized expertise. Public knowledgebases such as the Clinical Interpretation of Variants in Cancer (CIViC) help by curating literature-backed variant interpretations in a structured form, yet verification and review have become major bottlenecks. To address this, we developed CIViC-Fact, a benchmark dataset and pipeline for evaluating automated systems that verify the accuracy of cancer variant claims. CIViC-Fact links structured claims to sentence-level supporting or refuting evidence from full-text articles and includes expert annotations and explanations. We evaluated multiple language models: proprietary models performed well without task-specific training, but a smaller open-source model fine-tuned on CIViC-Fact achieved the highest accuracy (89%). Applying our fact-checking pipeline to real CIViC entries showed that reviewing fewer than 20% of entries, prioritizing those flagged by the model, would catch more than half of all errors. This AI-assisted triage accelerates review without replacing expert judgment: careful human oversight remains in place while curators work more efficiently. CIViC-Fact provides a realistic, high-consequence framework for biomedical fact-checking and a path toward more rigorous and efficient knowledgebase curation.
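To make the triage idea concrete, the following is a minimal, self-contained sketch of how a claim-verification model could be used to rank knowledgebase entries for human review. All names here (`VariantClaim`, `verify`, the label set, the toy scoring heuristic) are illustrative assumptions for exposition, not the paper's actual implementation or interfaces.

```python
"""Illustrative triage sketch for a CIViC-Fact-style pipeline.

A real system would replace `verify` with the fine-tuned
claim-verification model described in the abstract.
"""
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class VariantClaim:
    claim: str        # structured claim rendered as text
    evidence: str     # sentence-level evidence from the source article
    has_error: bool   # ground truth, known only during benchmark evaluation


def verify(claim: str, evidence: str) -> Tuple[str, float]:
    """Placeholder verifier returning a (label, confidence) pair.

    The toy heuristic below is a stand-in so the sketch runs end to end;
    it is not how the paper's models classify claims.
    """
    refuted = "not" in evidence.lower()
    return ("REFUTED", 0.9) if refuted else ("SUPPORTED", 0.8)


def triage(entries: List[VariantClaim], budget: float = 0.2) -> float:
    """Review the `budget` fraction of entries ranked most suspicious.

    Returns the fraction of true errors caught within that budget.
    """
    scored = []
    for e in entries:
        label, conf = verify(e.claim, e.evidence)
        # Higher score = more likely wrong; route to a curator first.
        score = conf if label == "REFUTED" else 1.0 - conf
        scored.append((score, e))
    scored.sort(key=lambda pair: pair[0], reverse=True)

    n_review = max(1, int(budget * len(entries)))
    flagged = [e for _, e in scored[:n_review]]
    total_errors = sum(e.has_error for e in entries) or 1
    return sum(e.has_error for e in flagged) / total_errors


if __name__ == "__main__":
    demo = [
        VariantClaim("BRAF V600E predicts response to vemurafenib",
                     "Patients with BRAF V600E melanoma responded to vemurafenib.",
                     has_error=False),
        VariantClaim("EGFR T790M predicts response to erlotinib",
                     "T790M did not confer sensitivity to erlotinib.",
                     has_error=True),
    ]
    print(f"Errors caught reviewing 20% of entries: {triage(demo):.0%}")
```

The design point this sketch illustrates is the abstract's central claim: because flagged entries are reviewed first, a fixed review budget (here, 20% of entries) can recover a disproportionate share of the errors, while every correction still passes through a human curator.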