Predicting personal protective equipment use, trauma symptoms, and physical symptoms in the USA during the early weeks of the COVID-19 lockdown (April 9–18, 2020)

This article has been reviewed by the following groups


Abstract

No abstract available

Article activity feed

  1. SciScore for 10.1101/2020.07.27.20162057:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement: This project was approved by the Bowling Green State University Institutional Review Board on March 30, 2020 (#1562479-4).
    Consent: Interested participants were then directed to the informed consent form, which was linked to the survey.
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: These statistics indicate that relative to census data, there was a higher proportion of males, Black/African Americans, and persons with higher educational attainment in this sample (retrieved from www.census.gov).

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy to digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex and investigator blinding. For details on the theoretical underpinning of rigor criteria and the tools shown here, including references cited, please follow this link.
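    To make the RRID screening described above concrete, the following is a minimal, hypothetical sketch of how a presence-and-format check for RRIDs could be implemented. The regular expression, the prefix list, and the sample sentence are illustrative assumptions and do not reproduce SciScore's actual implementation.

        import re

        # Hypothetical sketch of an RRID screen: find RRID-style identifiers in
        # manuscript text and flag prefixes outside a small illustrative set of
        # registries. This is not SciScore's code.
        RRID_PATTERN = re.compile(r"RRID:\s*([A-Za-z]+)[_:]([A-Za-z0-9_:\-]+)")
        KNOWN_PREFIXES = {"AB", "SCR", "CVCL", "IMSR", "Addgene"}  # illustrative subset

        def find_rrids(text):
            """Return (matched_string, prefix, prefix_recognized) for each RRID-like token."""
            results = []
            for match in RRID_PATTERN.finditer(text):
                prefix = match.group(1)
                results.append((match.group(0), prefix, prefix in KNOWN_PREFIXES))
            return results

        sample = "Cells were stained with an anti-GFP antibody (RRID:AB_303395)."
        for matched, prefix, recognized in find_rrids(sample):
            print(matched, "-", "prefix recognized" if recognized else "prefix not recognized")

    A real checker would also need to verify each identifier against the relevant registry (for example, via an RRID resolver service), which this sketch does not attempt.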

  2. SciScore for 10.1101/2020.07.27.20162057:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement: This project was approved by the Bowling Green State University Institutional Review Board on March 30, 2020 (#1562479-4).
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: These statistics indicate that relative to census data, there was a higher proportion of males, Black/African Americans, and persons with higher educational attainment in this sample (retrieved from www.census.gov).

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:

    Limitations: The survey was completed at a single point in time, which limits causal inference. We are proposing a temporal sequence in which COVID-19 objective risk exposure, demographic and health risk factors, and individual differences predict PPE use, post-trauma symptoms, and physical symptoms. However, the temporal sequence could have a different form. It could be that higher levels of distress could cause a person to perceive themselves to be more vulnerable, more intolerant of uncertainty, and less mindful. A time series approach would help determine the plausibility of this argument.

    A second set of limitations is related to the participant selection process and the use of MTurk workers. In terms of the former, it may be that persons who were experiencing higher levels of distress were more apt to participate in the study. As a result, it cannot be determined how well the descriptive statistics and regression findings generalize to a broader USA population. In relation to the latter limitation, there have been extensive analyses of the representativeness and characteristics of MTurk samples relative to other participant recruitment strategies (see Chandler & Shapiro, 2016). MTurk samples are more representative than college students and convenience samples drawn from small university communities, but less diverse in some ways than national probability samples. However, Chandler and Shapiro (2016) noted that national probability samples are biased in that they rely on telephone methods, which skews their results toward older and more conservative participants. Additionally, as was evident in our sample, MTurk workers tend to have higher educational attainment and are more likely to be male. Finally, MTurk workers have been demonstrated to report higher levels of distress relative to other types of samples (Chandler & Shapiro, 2016).

    There are strengths to the use of an MTurk sample as well. Collection of data from a heterogeneous sample allowed us to be more certain that effects did not pertain to a single geographic location, occupation, or type of participant. Although some research has shown that MTurk participants display higher rates of anxiety and depression, this phenomenon is at least partially combatted through screening for response quality (Ophir et al., 2019). Additionally, this feature of our data may have helped us avoid range restriction in our sample. Finally, although researchers have identified various threats to validity that may be possible in research conducted on a crowdsourced platform, such as subject inattentiveness, demand characteristics, and repeated participation, the present study utilized best practices to mitigate such threats. Specifically, the study utilized attention checks, data screening, and avoidance of signaling cues (Cheung et al., 2017).


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy to digest format. SciScore is not a substitute for expert review. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) in the manuscript, and detects sentences that appear to be missing RRIDs. SciScore also checks to make sure that rigor criteria are addressed by authors. It does this by detecting sentences that discuss criteria such as blinding or power analysis. SciScore does not guarantee that the rigor criteria that it detects are appropriate for the particular study. Instead it assists authors, editors, and reviewers by drawing attention to sections of the manuscript that contain or should contain various rigor criteria and key resources. For details on the results shown here, including references cited, please follow this link.
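    As a hedged illustration of the sentence-level screening described above, the sketch below shows one naive way a tool could flag sentences that may address rigor criteria such as blinding, randomization, or power analysis. The keyword lists, function name, and sample text are assumptions for illustration only and are not SciScore's actual detection method.

        import re

        # Illustrative keyword screen for rigor-criteria sentences. Keywords and
        # structure are assumptions; SciScore's real detection is more sophisticated.
        RIGOR_KEYWORDS = {
            "Randomization": ["random", "randomly assigned"],
            "Blinding": ["blind", "blinded", "masked"],
            "Power Analysis": ["power analysis", "sample size calculation"],
        }

        def flag_rigor_sentences(text):
            """Map each rigor criterion to the sentences that mention one of its keywords."""
            sentences = re.split(r"(?<=[.!?])\s+", text)
            hits = {criterion: [] for criterion in RIGOR_KEYWORDS}
            for sentence in sentences:
                lowered = sentence.lower()
                for criterion, keywords in RIGOR_KEYWORDS.items():
                    if any(keyword in lowered for keyword in keywords):
                        hits[criterion].append(sentence.strip())
            return hits

        methods_text = ("Participants were randomly assigned to conditions. "
                        "Raters were blinded to condition assignment.")
        for criterion, sentences in flag_rigor_sentences(methods_text).items():
            print(criterion, "->", sentences if sentences else "not detected")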
