The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk Screening by Eye-region Manifestations

This article has been Reviewed by the following groups


Abstract

Background

The worldwide surge in coronavirus cases has driven a corresponding surge in demand for COVID-19 testing. Rapid, accurate, and cost-effective COVID-19 screening tests that work at the population level are urgently needed worldwide.

Methods

Based on the eye symptoms of COVID-19, we developed and tested a COVID-19 rapid prescreening model using eye-region images captured with cellphone cameras in China and Spain. A convolutional neural network (CNN)-based model was trained on these eye images for the binary classification task of identifying COVID-19 cases. Performance was measured using the area under the receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score. The application programming interface is open access.
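The paper does not release its evaluation code, but the five reported metrics follow directly from the confusion matrix and the classifier's output scores. The sketch below (function names are hypothetical, not from the paper) shows one standard way to compute them for a binary task where 1 = COVID-19 positive and 0 = control:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, accuracy, and F1 from hard 0/1 predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)            # true-positive rate (recall)
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, accuracy, f1

def auc_score(y_true, scores):
    """AUC as the probability that a random positive's score outranks a
    random negative's (ties count half) -- equivalent to the ROC-curve area."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice these would be computed with a library such as scikit-learn (`roc_auc_score`, `f1_score`); the pure-Python version above just makes the definitions explicit.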

Findings

The multicenter study included 2436 pictures corresponding to 657 subjects (155 COVID-19 infections, 23·6%) in the development dataset (train and validation) and 2138 pictures corresponding to 478 subjects (64 COVID-19 infections, 13·4%) in the test dataset. The image-level performance of the COVID-19 prescreening model in the China-Spain multicenter study achieved an AUC of 0·913 (95% CI, 0·898-0·927), with a sensitivity of 0·695 (95% CI, 0·643-0·748), a specificity of 0·904 (95% CI, 0·891-0·919), an accuracy of 0·875 (95% CI, 0·861-0·889), and an F1 of 0·611 (95% CI, 0·568-0·655).

Interpretation

The CNN-based model for COVID-19 rapid prescreening has reliable specificity and sensitivity. This system provides a low-cost, fully self-performed, non-invasive, real-time feedback solution for continuous surveillance and large-scale rapid prescreening for COVID-19.

Funding

This project is supported by Aimomics (Shanghai) Intelligent

Article activity feed

  1. SciScore for 10.1101/2021.09.24.21263766:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    There are some limitations in this study. First, the participants were mostly collected from East Asia (China) and some from Spain. Therefore, a larger multicenter study covering more patients of diverse races and more control groups is necessary before the model could be used globally. More data are being collected and will be used in further study. Second, there might be potential confounding factors, such as comorbidities, influencing eye symptoms. We did not collect the comorbidities of our participants; however, the control and positive patients were randomly selected from the population, which could balance the baseline demographics between groups. Third, some demographic information (e.g., gender and age) was not collected during image acquisition. Fourth, our model was based on eye symptoms; however, it cannot determine COVID-19-related eye disease. The pathological significance of features extracted from COVID-19 patients should be carefully interpreted and re-verified by ophthalmologists. Further clinical studies are needed to test the performance and provide a deeper understanding of our findings on the ocular surface feature-based classification network.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy to digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.