COVID-19: Affect recognition through voice analysis during the winter lockdown in Scotland

Abstract

The COVID-19 pandemic has led to unprecedented restrictions on people’s lifestyles, which have affected their psychological wellbeing. In this context, this paper investigates the use of social signal processing techniques for the remote assessment of emotions. It presents a machine learning method for affect recognition applied to recordings taken during the COVID-19 winter lockdown in Scotland (UK). The method is based exclusively on acoustic features extracted from voice recordings collected through home and mobile devices (e.g. phones, tablets), thus providing insight into the feasibility of monitoring people’s psychological wellbeing remotely, automatically and at scale. The proposed model predicts affect with a concordance correlation coefficient of 0.4230 for arousal (using Random Forest) and 0.3354 for valence (using Decision Trees).
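
For readers unfamiliar with the reported metric, the concordance correlation coefficient (CCC) captures both correlation and agreement between predicted and self-reported affect scores. The following is a minimal sketch of Lin's CCC in MATLAB, the language the paper reports using; the function and variable names are illustrative assumptions, not taken from the authors' code.

    % Minimal sketch of the concordance correlation coefficient (CCC).
    % y_true: self-reported affect scores; y_pred: model predictions.
    % Names are illustrative assumptions, not the authors' implementation.
    function ccc = concordance_cc(y_true, y_pred)
        mu_t  = mean(y_true);
        mu_p  = mean(y_pred);
        var_t = var(y_true, 1);                              % population variance (normalised by N)
        var_p = var(y_pred, 1);
        covar = mean((y_true - mu_t) .* (y_pred - mu_p));    % population covariance
        ccc   = (2 * covar) / (var_t + var_p + (mu_t - mu_p)^2);   % Lin's CCC
    end

A call such as ccc = concordance_cc(arousal_true, arousal_pred) would then yield a value in [-1, 1], where 1 indicates perfect agreement.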

Clinical relevance

In 2018/2019, 12% and 14% of Scottish adults reported symptoms of depression and anxiety, respectively. Remote emotion recognition through home devices would support the detection of these difficulties, which are often underdiagnosed and, if untreated, may lead to temporary or chronic disability.

Article activity feed

  1. SciScore for 10.1101/2021.05.05.21256668:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    NIH rigor criteria are not applicable to paper type.

    Table 2: Resources

    Software and Algorithms

    Sentence: The regression methods were implemented in MATLAB.
    Resource: MATLAB (suggested: MATLAB, RRID:SCR_001622)

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Another relevant limitation of previous studies is the size of the available datasets. There are 10 participants in the EmoDB [29], 4 in the SAVEE dataset [30], 6 in the EMOVO dataset [31], 10 in the vlogger dataset [42], 23 in the SEMAINE [34] and 46 in the RECOLA [35]. Our study, on the other hand, contains speech from 109 participants and 3,242 segments. Finally, emotion recognition is often limited by the inherent subjectivity of having emotions labelled by humans who perceive affect from audio, visual and linguistic information [43]. In this study, this intermediate step was removed as affect is self-reported through the affective slider. In addition, the affective scores were validated by their statistically significant association with the HADS questionnaire (p < 0.01, ρArousal = −0.62 and ρValence = −0.71), a long-standing tool to screen for depression and anxiety.
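
    As an illustration of the validation step described above, the sketch below computes Spearman's rank correlation between self-reported affect scores and HADS totals in MATLAB. It is a hypothetical example, not the authors' code: the variable names (arousal, valence, hads_total) are assumed per-participant column vectors, and the corr call requires the Statistics and Machine Learning Toolbox.

        % Hypothetical sketch: association between self-reported affect and HADS scores.
        % arousal, valence, hads_total: illustrative column vectors, one entry per participant.
        [rho_arousal, p_arousal] = corr(arousal, hads_total, 'Type', 'Spearman');
        [rho_valence, p_valence] = corr(valence, hads_total, 'Type', 'Spearman');
        fprintf('Arousal: rho = %.2f (p = %.3g)\n', rho_arousal, p_arousal);
        fprintf('Valence: rho = %.2f (p = %.3g)\n', rho_valence, p_valence);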

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.