A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images


Abstract

Recent studies indicate that detecting radiographic patterns on chest CT scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of training a network model using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate improved segmentation accuracy in identifying COVID-19 infected regions. In particular, using 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained with our proposed few-shot U-Net architecture over the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026).
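To make the abstract's description concrete, the sketch below shows one plausible shape of such a dynamic fine-tuning step, together with the IoU and F1 metrics it reports. This is a minimal illustration assuming a PyTorch U-Net, not the authors' implementation; the names `unet` and `few_shot_pairs` are ours.

```python
# Hypothetical sketch (not the authors' code) of the few-shot idea above:
# a pre-trained U-Net whose weights are briefly fine-tuned on a handful of
# newly annotated CT slices, instead of staying frozen after training.
import torch
import torch.nn.functional as F

def few_shot_update(unet, few_shot_pairs, steps=20, lr=1e-4):
    """Fine-tune a pre-trained U-Net on a few (image, mask) pairs.

    image: float tensor (1, H, W); mask: boolean tensor (1, H, W).
    """
    unet.train()
    optimizer = torch.optim.Adam(unet.parameters(), lr=lr)
    for _ in range(steps):
        for image, mask in few_shot_pairs:  # only a handful of samples
            optimizer.zero_grad()
            logits = unet(image.unsqueeze(0))  # (1, 1, H, W)
            loss = F.binary_cross_entropy_with_logits(
                logits, mask.unsqueeze(0).float())
            loss.backward()
            optimizer.step()
    unet.eval()
    return unet

def iou_f1(pred_mask, true_mask, eps=1e-7):
    """IoU and F1 (Dice) for a pair of boolean segmentation masks."""
    inter = (pred_mask & true_mask).sum().item()
    union = (pred_mask | true_mask).sum().item()
    iou = inter / (union + eps)
    f1 = 2.0 * inter / (pred_mask.sum().item() + true_mask.sum().item() + eps)
    return iou, f1

# Per-fold IoU scores of the baseline and few-shot models could then be
# compared with a Kruskal-Wallis test, e.g.:
#   from scipy.stats import kruskal
#   kruskal(baseline_ious, few_shot_ious)
```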

Article activity feed

  1. SciScore for 10.1101/2020.05.08.20094664:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Implementation and limitations of mitigation strategies: Prior to any implementation approach, we should consider the limitations of the problem at hand. In deep learning approaches, there are two main concerns: (i) data availability and (ii) data imbalance, both of which impact the selection of the classification model and the complexity of its topology. The first step was a training-data balancing strategy, involving under-sampling of the majority class [26]. At first glance, approximately 400 images contained no positive annotations; these were excluded from the training set. The remaining (approximately 300) images had ratios of positive annotations to total image pixels ranging from 0.1% to 20%. Man-made annotations are prone to errors [27]. It is extremely difficult, if not impossible in most cases, to distinguish whether a specific pixel on a boundary area between two classes corresponds to either of them. In that direction, we could exploit the networks' capability to generalize and handle the noise, given that the wrong annotations are limited. Other approaches considered were the implementation of different performance metrics during the training process and the building of models of limited complexity. C. Experimental results: Experimental results consider both the detection capabilities, employing multiple classification-related performance metrics, and the average computational time required by a trained model to fully annotate a CT slice. Fig. 4 provides the ave...
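    The balancing step described in the extract above can be illustrated with a short, hypothetical sketch: keep only slices whose positive-pixel ratio lies in the quoted 0.1%-20% range, which also discards the roughly 400 slices with no positive annotations. The function name and thresholds below simply mirror the quoted figures; this is not the authors' code.

    ```python
    # Illustrative under-sampling of the majority (no-infection) class:
    # keep slices whose positive-annotation ratio is within the quoted range.
    import numpy as np

    def balance_training_set(images, masks, min_ratio=0.001, max_ratio=0.20):
        kept_images, kept_masks = [], []
        for img, msk in zip(images, masks):
            ratio = np.count_nonzero(msk) / msk.size  # fraction of infected pixels
            if min_ratio <= ratio <= max_ratio:       # empty masks are dropped
                kept_images.append(img)
                kept_masks.append(msk)
        return kept_images, kept_masks
    ```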

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.