Artificial Intelligence for COVID-19 Detection in Medical Imaging—Diagnostic Measures and Wasting—A Systematic Umbrella Review
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
The COVID-19 pandemic has sparked a barrage of primary research and reviews. We investigated the publishing process, the wasting of time and resources, and the methodological quality of reviews on artificial intelligence techniques for diagnosing COVID-19 in medical images. We searched nine databases from inception until 1 September 2020. Two independent reviewers performed all steps of identification, extraction, and methodological credibility assessment of records. Out of 725 records, 22 reviews analysing 165 primary studies met the inclusion criteria. This review covers 174,277 participants in total, including 19,170 diagnosed with COVID-19. The methodological credibility of all eligible studies was rated as critically low: 95% of papers had significant flaws in reporting quality. On average, 7.24 (range: 0–45) new papers were included in each subsequent review, and 14% of reviews did not take any new paper into consideration. Almost three-quarters of the reviews included fewer than 10% of the available studies. More than half of the reviews did not comment on previously published reviews at all. Much of this waste of time and resources could be avoided by referring to previous reviews and following methodological guidelines. Such information chaos is alarming. It is high time to draw conclusions from what we have experienced and prepare for future pandemics.
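The abstract's wasting figures (new papers added by each subsequent review, and the share of then-available studies each review covered) can be illustrated with a small sketch. The data structures, field names, and function below are hypothetical illustrations under assumed inputs, not the authors' actual analysis code.

```python
from typing import Dict, List, Set

def wasting_metrics(reviews: List[Dict]) -> List[Dict]:
    """Sketch of per-review overlap metrics.

    reviews: list ordered by publication date, each a dict with
      "id": review identifier,
      "included": set of primary-study IDs the review included,
      "available": set of primary-study IDs published before its search date.
    """
    seen: Set[str] = set()  # primary studies already covered by earlier reviews
    metrics = []
    for review in reviews:
        new_papers = review["included"] - seen  # studies no earlier review had included
        coverage = (
            len(review["included"] & review["available"]) / len(review["available"])
            if review["available"] else 0.0
        )
        metrics.append({
            "review": review["id"],
            "n_new_papers": len(new_papers),       # cf. the 7.24 average in the abstract
            "coverage_of_available": coverage,     # cf. the "<10% of available studies" finding
        })
        seen |= review["included"]
    return metrics

# Toy usage with made-up study IDs:
example = [
    {"id": "review_A", "included": {"s1", "s2"}, "available": {"s1", "s2", "s3"}},
    {"id": "review_B", "included": {"s2", "s4"}, "available": {"s1", "s2", "s3", "s4"}},
]
for row in wasting_metrics(example):
    print(row)
```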
Article activity feed
SciScore for 10.1101/2021.05.03.21256565: (What is this?)
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
- Ethics: not detected
- Sex as a biological variable: not detected
- Randomization: not detected
- Blinding: not detected
- Power Analysis: not detected
Table 2: Resources
Antibodies:
- Sentence: "We excluded these primary studies that used reference standards other than assay types (NAATs, antigen tests, and antibody tests) from nasopharyngeal or oropharyngeal swab samples, nasal aspirate, nasal wash or saliva, sputum or tracheal aspirate, or bronchoalveolar lavage (BAL) [16, 34]." Resources: antigen tests (suggested: None)
Software and Algorithms:
- Sentence: "2.2 Search methods: In order to determine whether there are any eligible papers, we conducted a pre-search in the middle of August 2020 via Google Scholar by browsing." Resources: Google Scholar (suggested: Google Scholar, RRID:SCR_008878)
- Sentence: "We searched seven article databases (MEDLINE, EMBASE, Web of Science, Scopus, dblp, Cochrane Library, IEEE Xplore) and two preprint databases (arXiv, OSF Preprints) from inception to 01 September 2020 using predefined search strategies." Resources: EMBASE (suggested: EMBASE, RRID:SCR_001650); Cochrane Library (suggested: Cochrane Library, RRID:SCR_013000)
- Sentence: "In developing the search strategy for MEDLINE, we combined the Medical Subject Headings (MeSH) and full-text words." Resources: MEDLINE (suggested: MEDLINE, RRID:SCR_002185); MeSH (suggested: MeSH, RRID:SCR_004750)
Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
4.1 Study Strengths & Limitations: Our umbrella review has the following strengths. First, the search strategy was comprehensive, based on adequate inclusion criteria related to the research question, and spanned a wide selection of existing data sources: papers and preprints. This selection was further expanded by searching the references of included papers to identify new works. Notably, the searches were not limited in terms of format or language. The process of our review was rigorous, as the study was preceded by the publication of a protocol. We used the most up-to-date and applicable tools to assess credibility and quality of reporting: AMSTAR 2 and PRISMA with the extension for DTA, respectively. Nevertheless, these instruments have been designed for reviews in the fields of medicine and health sciences, where the formulation of the research question is structured, the methodology is validated, and the quantitative synthesis of results is more popular. There are also other limitations associated with this study. Although our exhaustive and sensitive search covered multiple aspects, some studies might still have been missed. We did not search Chinese data sources, which could include many valuable papers. Secondly, it must be noted that the vast majority of the included studies focused on a broader context than purely diagnosing COVID-19 from medical images. In this study, we have investigated wasting among the reviews. We based on the date of publishing of the last i...
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.
Results from scite Reference Check: We found no unreliable references.