Microbial Named Entity Recognition and Normalisation for AI-assisted Literature Review and Meta-Analysis
Abstract
Motivation
Manual curation of biomedical literature is slow and error-prone. While large language models (LLMs) trained on general texts have been shown to be useful for text summarisation, they lack the domain-specific expertise required to perform this task accurately. Here we describe the creation of the first microbiome-specific text corpus, use it to train deep learning algorithms for named-entity recognition (NER) and normalisation (NEN), and demonstrate their use to meta-analyse the microbiome literature.
Methods
We developed an automated pipeline to annotate all mentions of bacteria, archaea, and fungi in 1,410 full-text microbiome articles. We manually annotated a separate, gold-standard test set of 288 documents. We trained different transformer-based language models for microbiome entity recognition and for normalisation to taxonomic identifiers, and evaluated their performance on the test set using precision, recall, F1-score, and accuracy. The best models were used to automatically annotate all available Open Access, full-text microbiome articles (n=6,927) and to identify taxa that are significantly overrepresented across 14 domains.
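To illustrate how such a trained model is applied to new text, the following is a minimal sketch in Python using the Hugging Face transformers library, assuming a BioBERT checkpoint fine-tuned for microbial NER; the model path, label names, and example sentence are hypothetical and are not artefacts released with this work.

```python
# Minimal NER inference sketch, assuming a hypothetical fine-tuned
# BioBERT checkpoint for microbial entity recognition.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path/to/fine-tuned-biobert-microbiome-ner",  # hypothetical checkpoint
    aggregation_strategy="simple",  # merge word pieces into full entity spans
)

text = "Faecalibacterium prausnitzii was depleted in patients with IBD."
for entity in ner(text):
    print(entity["word"], entity["entity_group"], round(float(entity["score"]), 3))
```

Recognised spans would then be passed to the normalisation model to map each mention to a taxonomic identifier.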
Results
The training and validation sets contained a total of 90,150 annotations (both long forms and abbreviations). On the gold-standard test set, which had inter-annotator agreement rates of 99.52% for NER and 88.31% for NEN, our fine-tuned BioBERT model achieved an F1-score of 96% for NER, surpassing a rule- and dictionary-based annotation pipeline (94%). For NEN, the accuracy of the deep learning models greatly surpassed that of the pipeline (91% vs 69%). Applied across the entire available literature, our models annotate a full-text document in only 7 seconds.
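The NER scores above are entity-level metrics; a minimal sketch of such an evaluation with the seqeval library follows, assuming predictions and gold labels in BIO format (the MICROBE tag name and the sequences shown are illustrative, not the paper's actual label set or results).

```python
# Entity-level evaluation sketch; the BIO tag sequences are illustrative only.
from seqeval.metrics import precision_score, recall_score, f1_score

gold = [["B-MICROBE", "I-MICROBE", "O", "O", "B-MICROBE"]]
pred = [["B-MICROBE", "I-MICROBE", "O", "O", "O"]]

print(f"Precision: {precision_score(gold, pred):.2f}")
print(f"Recall:    {recall_score(gold, pred):.2f}")
print(f"F1-score:  {f1_score(gold, pred):.2f}")
```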
Conclusion
Our algorithms achieve near-perfect precision and greatly speed up the annotation of microbes in full-text articles. We demonstrated the capabilities of these methods by analysing the entire available literature, describing the taxa associated with each of the domains in our meta-analysis, and showing how these methods can be integrated into literature review workflows to improve both the speed and accuracy of results.
Availability
All code and data for automatic annotation, model training, and generation of taxonomic trees visualising the data, together with instructions on how to deploy the model on new texts, will be made available following peer review at https://github.com/omicsNLP/microbiomeNER.