CovidEnvelope: A Fast Automated Approach to Diagnose COVID-19 from Cough Signals

Abstract

The COVID-19 pandemic has had a devastating impact on the health and well-being of the global population. Classification of cough audio signals has shown potential as a screening approach for diagnosing people infected with COVID-19. Recent approaches require computationally costly deep learning algorithms or sophisticated methods to extract informative features from cough audio signals. In this paper, we propose a low-cost envelope approach, called CovidEnvelope, which can classify COVID-19 positive and negative cases from raw data while avoiding these disadvantages. This automated approach pre-processes cough audio signals by filtering out background noise, generates an envelope around the audio signal, and finally produces an outcome by computing the area enclosed by the envelope. Reliable datasets are also important for achieving high performance, and our results indicate that human verbal confirmation is not a reliable source of information. The approach reaches a highest sensitivity, specificity, accuracy, and AUC of 0.92, 0.87, 0.89, and 0.89, respectively, and takes only 1.8 to 3.9 minutes to compute these results. Overall, the approach is fast and sensitive in diagnosing people living with COVID-19, regardless of whether they have COVID-19-related symptoms, and thus has broad applicability to human well-being through HCI devices that incorporate it.
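
The abstract describes a three-stage pipeline: filter out background noise, build an envelope around the signal, and compute the area the envelope encloses. Below is a minimal sketch of that idea, assuming a Hilbert-transform envelope, an illustrative 50 Hz to 4 kHz band-pass, and a placeholder decision threshold; these specifics are assumptions for illustration, not the authors' published implementation.

```python
# Minimal sketch of an envelope-area pipeline in the spirit of CovidEnvelope.
# The Hilbert-transform envelope, the 50 Hz-4 kHz band, and the threshold
# are illustrative assumptions, not the paper's exact method.
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def envelope_area(x: np.ndarray, fs: int) -> float:
    """Area enclosed by the amplitude envelope of a cough recording."""
    # 1) Pre-process: band-pass filter to suppress background noise
    #    outside an assumed cough frequency band.
    sos = butter(4, [50, 4000], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, x)
    # 2) Envelope: magnitude of the analytic signal.
    env = np.abs(hilbert(x))
    # 3) Outcome: rectangle-rule integral of the envelope over time.
    return float(np.sum(env) / fs)

def classify(x: np.ndarray, fs: int, threshold: float = 1.0) -> str:
    """Placeholder decision rule; the threshold is not from the paper."""
    return "positive" if envelope_area(x, fs) >= threshold else "negative"

if __name__ == "__main__":
    fs = 16_000                                   # assumed sampling rate
    t = np.arange(fs) / fs                        # one second of audio
    burst = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # synthetic burst
    print(f"envelope area: {envelope_area(burst, fs):.4f}")
```

In practice the threshold would be tuned on labelled cough recordings, which is where the reported sensitivity/specificity trade-off would be set.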

Article activity feed

  1. SciScore for 10.1101/2021.04.16.21255630:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: not detected.
    Sex as a biological variable: not detected.
    Randomization: The correct audio signal was selected by comparing the sum of variances of the recorded signals, and then filtered using a three-point moving average filter to remove random fluctuations between samples of the audio signal, as illustrated in Figure 1(b). (A sketch of this step follows the table.)
    Blinding: not detected.
    Power Analysis: not detected.
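
    As noted in the Randomization row, a minimal sketch of the selection-and-smoothing step is given below. The variance criterion and the three-point moving average come from the row above; the function names and the synthetic demo data are illustrative assumptions.

    ```python
    # Sketch of the pre-processing described above: keep the recording with
    # the largest variance, then smooth it with a three-point moving average.
    # Function names and the synthetic data are illustrative.
    import numpy as np

    def select_by_variance(recordings):
        """Return the recorded signal whose sample variance is largest."""
        return max(recordings, key=np.var)

    def moving_average_3pt(x):
        """Three-point moving average to damp sample-to-sample fluctuations."""
        return np.convolve(x, np.ones(3) / 3.0, mode="same")

    if __name__ == "__main__":
        rng = np.random.default_rng(seed=0)
        recordings = [rng.normal(0.0, s, size=1_000) for s in (0.1, 0.5, 0.2)]
        best = select_by_variance(recordings)
        smoothed = moving_average_3pt(best)
        print(f"variance before: {np.var(best):.4f}, after: {np.var(smoothed):.4f}")
    ```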

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We found bar graphs of continuous data. We recommend replacing bar graphs with more informative graphics, as many different datasets can lead to the same bar graph. The actual data may suggest different conclusions from the summary statistics. For more information, please see Weissgerber et al. (2015).


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. Details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, are available from SciScore.