A holistic comparison between deep learning techniques to determine Covid-19 patients utilizing chest X-Ray images
This article has been reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
The novel coronavirus, also called COVID-19, emerged in Wuhan, China in December 2019 and has since spread across the world. Around 63 million people have been infected so far, and the virus has caused roughly 1,500,000 deaths. Nearly 600,000 individuals have been infected in Bangladesh as well. Because it is a very new pandemic disease, its diagnosis is challenging for the medical community. In particular, it is hard for lower-income countries to test cases easily. The RT-PCR test is the most widely used diagnostic method for detecting COVID-19 patients; however, automated detection based on chest X-ray images can reduce both cost and testing time. A fast, automated, and effective detection method is therefore important to prevent transmission to others. In this paper, the author attempts to identify COVID-19 patients from chest X-ray images. The author runs several deep learning models on the dataset: a base CNN and the pre-trained ResNet-50, DenseNet-121, and EfficientNet-B4 architectures. All outcomes are compared to determine a suitable model for COVID-19 detection using chest X-ray images. The author also evaluates the results by AUC, where EfficientNet-B4 achieves 0.997, ResNet-50 achieves 0.967, DenseNet-121 achieves 0.874, and the base CNN model achieves 0.762. EfficientNet-B4 achieved 98.86% accuracy.
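The abstract's evaluation step, ranking the four classifiers by AUC, can be sketched as follows. This is a minimal illustration only: the labels and prediction scores below are hypothetical placeholders, not the paper's data, and in the study each score vector would come from one of the fine-tuned models (base CNN, ResNet-50, DenseNet-121, EfficientNet-B4).

```python
# Hedged sketch: ranking binary classifiers by AUC, as the abstract
# describes. All data here is made up for illustration.

def auc(y_true, y_score):
    """AUC via the Mann-Whitney pair-counting formulation: the fraction
    of (positive, negative) pairs the model ranks correctly, counting
    ties as half-correct."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return correct / (len(pos) * len(neg))

# Hypothetical ground-truth labels (1 = COVID-19 positive, 0 = normal).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]

# Hypothetical predicted probabilities from two of the compared models.
model_scores = {
    "EfficientNet-B4": [0.95, 0.10, 0.88, 0.91, 0.20, 0.05, 0.80, 0.15],
    "Base-CNN":        [0.70, 0.40, 0.55, 0.60, 0.65, 0.30, 0.45, 0.50],
}

# Score each model and pick the best, mirroring the paper's comparison.
aucs = {name: auc(y_true, s) for name, s in model_scores.items()}
best = max(aucs, key=aucs.get)
```

In practice one would use `sklearn.metrics.roc_auc_score` rather than a hand-rolled implementation; the pair-counting version is shown here only to make the metric's meaning explicit.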
Article activity feed
SciScore for 10.1101/2020.07.08.20148924: (What is this?)
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
NIH rigor criteria are not applicable to paper type.

Table 2: Resources

Software and Algorithms:
- Sentences: "ResNet has about 3.57% less error than VGG Net [5]." "In this analysis, we utilized pre-trained ResNet152 architecture."
  Suggested resource: ResNet (RRID:SCR_002121)
- Sentence: "E. EficientNet: EfficientNet has been created and presented by Mingxing Tan, Staff Software Engineer at Google."
  Suggested resource: Google (RRID:SCR_017097)

Results from OddPub: Thank you for sharing your code and data.

Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.