Information retrieval in an infodemic: the case of COVID-19 publications
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
The COVID-19 pandemic has led to an exponential surge in published literature, both accurate and inaccurate, a phenomenon commonly referred to as an infodemic. In the context of searching for COVID-19-related scientific literature, we present an information retrieval methodology for effectively finding relevant publications for different information needs. Our multi-stage information retrieval architecture combines probabilistic weighting models with re-ranking algorithms based on neural masked language models. The methodology was evaluated in the context of the TREC-COVID challenge, achieving results competitive with the top-ranking teams in the competition. In particular, the rank combination of bag-of-words and language models significantly outperformed a BM25-based baseline model (by 16 percentage points on the NDCG@20 metric), correctly retrieving on average more than 16 of the top 20 documents returned. The proposed pipeline could thus support the effective search and discovery of relevant information in the case of an infodemic.
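The multi-stage architecture the abstract describes, a probabilistic first-stage ranker whose output is combined with a neural re-ranker, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the BM25 variant, the reciprocal-rank-fusion combination step, and all names and parameters here are assumptions, and the masked-language-model re-ranker is replaced by a fixed stand-in ordering.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score every document in `docs` against `query` with BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n_docs = len(docs)
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n_docs - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several rankings (lists of doc ids, best first) with RRF."""
    fused = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] += 1.0 / (k + rank + 1)
    return [doc_id for doc_id, _ in fused.most_common()]

# Toy corpus; in the paper's setting these would be COVID-19 abstracts.
docs = [
    "covid transmission in schools",
    "probabilistic weighting models for ranking",
    "neural re-ranking of covid literature",
]
bm25 = bm25_scores("covid literature", docs)
bm25_rank = sorted(range(len(docs)), key=lambda i: -bm25[i])
neural_rank = [2, 0, 1]  # stand-in for a masked-LM re-ranker's ordering
final = reciprocal_rank_fusion([bm25_rank, neural_rank])
```

Reciprocal rank fusion is used here only as one common way to combine a lexical and a neural ranking; the paper's exact combination strategy may differ.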
Article activity feed
SciScore for 10.1101/2021.01.29.428847: (What is this?)
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
NIH rigor criteria are not applicable to paper type.
Table 2: Resources
Software and Algorithms
Sentence: "As shown in Figure 1, this is a large and dynamically growing semi-structured dataset from various sources like PubMed, PubMed Central, WHO and preprint servers like bioRxiv, medRxiv, and arXiv."
Suggested resources:
- PubMed (RRID:SCR_004846)
- bioRxiv (RRID:SCR_003933)
- arXiv (RRID:SCR_006500)
Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- No funding statement was detected.
- No protocol registration statement was detected.
