Use of a Digital Assistant to Report COVID-19 Rapid Antigen Self-test Results to Health Departments in 6 US Communities

This article has been reviewed by the following groups


Abstract

Widespread distribution of rapid antigen tests is integral to the US strategy to address COVID-19; however, it is estimated that few rapid antigen test results are reported to local departments of health.

Objective

To characterize how often individuals in 6 communities throughout the United States used a digital assistant to log rapid antigen test results and report them to their local departments of health.

Design, Setting, and Participants

This prospective cohort study is based on anonymously collected data from the beneficiaries of the Say Yes! Covid Test program, which distributed more than 3 000 000 rapid antigen tests at no cost to residents of 6 communities (Louisville, Kentucky; Indianapolis, Indiana; Fulton County, Georgia; O’ahu, Hawaii; Ann Arbor and Ypsilanti, Michigan; and Chattanooga, Tennessee) between April and October 2021. A descriptive evaluation of beneficiaries’ use of a digital assistant for logging and reporting rapid antigen test results was performed.

Interventions

Widespread community distribution of rapid antigen tests.

Main Outcomes and Measures

Number and proportion of tests logged and reported to the local department of health through the digital assistant.

Results

A total of 313 000 test kits were distributed, including 178 785 test kits that were ordered using the digital assistant. Among all distributed kits, 14 398 households (4.6%) used the digital assistant, but beneficiaries reported three-quarters of their rapid antigen test results to their state public health departments (30 965 tests reported of 41 465 total test results [75.0%]). The reporting behavior varied by community and was significantly higher among communities that were incentivized for reporting test results vs those that were not incentivized or partially incentivized (90.5% [95% CI, 89.9%-91.2%] vs 70.5% [95% CI, 70.0%-71.0%]). In all communities, positive tests were less frequently reported than negative tests (60.4% [95% CI, 58.1%-62.8%] vs 75.5% [95% CI, 75.1%-76.0%]).
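The percentages above are proportions of reported tests over logged tests, each with a 95% confidence interval. As an illustration only, the sketch below recomputes the overall reporting proportion from the one pair of counts given in the abstract (30 965 reported of 41 465 logged) using a normal-approximation (Wald) interval; the article does not state which interval method was used, so that choice is an assumption, and the subgroup counts behind the incentive and positive-vs-negative comparisons are not reproduced here.

from math import sqrt

def proportion_with_wald_ci(successes: int, total: int, z: float = 1.96):
    # Point estimate and normal-approximation (Wald) 95% CI for a proportion.
    # NOTE: the interval method used in the article is not stated; Wald is
    # assumed here purely for illustration.
    p = successes / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Counts given in the abstract: 30 965 results reported to a department of
# health out of 41 465 results logged in the digital assistant.
p, lo, hi = proportion_with_wald_ci(30_965, 41_465)
print(f"reported: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")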

Conclusions and Relevance

These results suggest that application-based reporting with incentives may be associated with increased reporting of rapid tests for COVID-19. However, increasing the adoption of the digital assistant may be a critical first step.

Article activity feed

  1. SciScore for 10.1101/2022.03.31.22273242:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: IRB: This study received non-research determination by the University of Massachusetts Chan Medical School Institutional Review Board.
    Sex as a biological variable: not detected.
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Strengths and Limitations: This report offers a unique look into COVID-19 test reporting behaviors of nearly fifteen thousand digital assistant users throughout the United States. However, there are limitations to this data. The number of digital assistant users is quite small compared to all intervention participants, and with the current data, we are unable to assess the demographics or socioeconomic status of digital assistant users, nor how digital assistant users compare to non-users. Conclusion: Three-quarters of those who used the digital assistant for testing reported their results to the DoH, indicating that app-based reporting may be an effective way to increase reporting of rapid tests for COVID-19. However, the relatively low voluntary uptake of the digital assistant indicates that user-centered strategies may be necessary to maximize digital assistant usage.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.