The US COVID-19 Trends and Impact Survey: Continuous real-time measurement of COVID-19 symptoms, risks, protective behaviors, testing, and vaccination

Abstract

The US COVID-19 Trends and Impact Survey (CTIS) has operated continuously since April 6, 2020, collecting over 20 million responses. As the largest public health survey conducted in the United States to date, CTIS was designed to facilitate detailed demographic and geographic analyses, track trends over time, and accommodate rapid revision to address emerging priorities. Using examples of CTIS results illuminating trends in symptoms, risks, mitigating behaviors, testing, and vaccination in relation to evolving high-priority policy questions over 12 mo of the pandemic, we illustrate the value of online surveys for tracking patterns and trends in COVID outcomes as an adjunct to official reporting, and showcase unique insights that would not be visible through traditional public health reporting.

Article activity feed

  1. SciScore for 10.1101/2021.07.24.21261076

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: The study was approved by the Carnegie Mellon University Institutional Review Board (IRB) under protocol STUDY2020_00000162.
    Sex as a biological variable: not detected.
    Randomization: Facebook uses stratified random sampling within US states to randomly select a sample of its users to see the survey invitation at the top of their News Feed (an illustrative sketch follows this table).
    Blinding: not detected.
    Power Analysis: not detected.
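
    The stratified sampling noted in the Randomization row can be sketched in a few lines. The code below is a generic illustration using pandas, with a hypothetical sampling frame, stratum column, and sampling fraction; it is not a description of Facebook's actual selection machinery.

    ```python
    import pandas as pd

    def stratified_sample(frame: pd.DataFrame, stratum_col: str,
                          frac: float, seed: int = 0) -> pd.DataFrame:
        """Draw a simple random sample of `frac` within each stratum.

        Purely illustrative: the real sampling frame, strata, and rates
        are internal to Facebook and not described in this report.
        """
        return (
            frame.groupby(stratum_col, group_keys=False)
                 .sample(frac=frac, random_state=seed)
        )

    # Toy sampling frame of user IDs keyed by US state (hypothetical data).
    users = pd.DataFrame({
        "user_id": range(8),
        "state": ["PA", "PA", "PA", "PA", "CA", "CA", "CA", "CA"],
    })
    invited = stratified_sample(users, stratum_col="state", frac=0.5)
    print(invited)  # two randomly chosen users per state
    ```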

    Table 2: Resources

    No key resources detected.


    Results from OddPub: Thank you for sharing your code and data.


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Several limitations are important to note. First, because the survey uses Facebook active users as its sampling frame and because participation in the survey is strictly voluntary, respondents may not be fully representative of the U.S. population despite incorporation of survey weights, which adjust for non-response and coverage biases based on a limited number of covariates. Comparison to the American Community Survey indicates that our sample over-represents respondents who are college-educated. Research users of the survey microdata can use additional demographic or other survey variables to construct improved post-stratification adjustments to correct this for their purposes. However, any non-response biases not accounted for by Facebook’s non-response weights would be much more difficult to correct. Additionally, many of the outcome measures related to COVID-19 are based on self-reports, which may diverge from more objective measures due to recall bias, social desirability bias, and other sources of survey bias and measurement error. On the other hand, broad comparisons of indicators such as cumulative COVID-19 diagnoses suggest that measurement of key COVID-19 outcomes are relatively robust to response biases that may be present in the sample. Ultimately, the value of such a large-scale survey is not in accuracy afforded by its sample size, since survey biases persist no matter the size of the survey; smaller surveys more carefully constructed to reduce sampling biases...
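
    The post-stratification adjustment suggested in the limitations above can be illustrated with a small sketch: reweight respondents so that the weighted distribution of some variable (here, an education flag) matches an external benchmark such as the American Community Survey. The column names, target proportions, and single-variable raking below are assumptions made for the example, not the survey's published weighting methodology.

    ```python
    import pandas as pd

    def poststratify(responses: pd.DataFrame, base_weight: str,
                     cell_col: str, targets: dict) -> pd.Series:
        """Rescale base survey weights so the weighted share of each cell
        matches an external target distribution (one-variable raking)."""
        weighted_share = (
            responses.groupby(cell_col)[base_weight].sum()
            / responses[base_weight].sum()
        )
        adjustment = pd.Series(targets) / weighted_share
        return responses[base_weight] * responses[cell_col].map(adjustment)

    # Hypothetical microdata: base non-response weights plus an education flag.
    df = pd.DataFrame({
        "weight": [1.0, 1.2, 0.8, 1.1, 0.9, 1.0],
        "college": ["yes", "yes", "yes", "yes", "no", "no"],
    })
    # Illustrative benchmark: 40% college-educated, 60% not (not real ACS figures).
    df["ps_weight"] = poststratify(df, "weight", "college",
                                   targets={"yes": 0.40, "no": 0.60})
    print(df)
    ```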

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex as a biological variable and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.
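
    For readers unfamiliar with RRIDs, the snippet below sketches what detecting RRID-formatted citations in manuscript text might look like. The regular expression and example identifiers are illustrative assumptions, not SciScore's actual detection logic, and checking correctness would additionally require a lookup against the SciCrunch registry.

    ```python
    import re

    # Matches the common citation form "RRID:<prefix>_<identifier>", e.g. "RRID:AB_123456".
    # Illustrative pattern only; not SciScore's implementation.
    RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+[_:][A-Za-z0-9_:-]+)")

    text = ("Cells were stained with a hypothetical antibody (RRID:AB_123456) "
            "and analyzed with a hypothetical tool (RRID:SCR_012345).")
    print(RRID_PATTERN.findall(text))  # ['AB_123456', 'SCR_012345']
    ```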