Bounding the Accuracy of Diagnostic Tests, With Application to COVID-19 Antibody Tests
This article has been reviewed by the following groups
Listed in
- Evaluated articles (ScreenIT)
Abstract
Tests used to diagnose illness commonly have imperfect accuracy, yielding some false-positive and false-negative results. For risk assessment and clinical decisions, predictive values are of interest. Positive predictive value (PPV) is the chance that a member of a relevant population who tests positive has been ill. Negative predictive value (NPV) is the chance that someone who tests negative has not been ill. The medical literature regularly reports sensitivity and specificity. Sensitivity is the chance that an ill person receives a positive test result. Specificity is the chance that a non-ill person receives a negative result. Knowledge of sensitivity and specificity enables one to predict the test result given a person’s illness status. These predictions are not directly relevant to patient care, but given knowledge of sensitivity and specificity, PPV and NPV can be derived if one also knows the prevalence of the disease, that is, the population rate of illness. There is considerable uncertainty about the prevalence of some diseases, a notable case being COVID-19. This paper addresses the problem of identification of PPV and NPV given knowledge of sensitivity and specificity and given bounds on prevalence. I explain the problem and show how to bound PPV and NPV, as well as the risk ratio and risk difference, which are functions thereof. I apply the findings to COVID-19 antibody tests. I question the realism of supposing that sensitivity and specificity are known.
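As a concrete illustration of the relationship the abstract describes, the sketch below computes PPV and NPV from sensitivity, specificity, and prevalence via Bayes' rule, and turns a prevalence interval into bounds on the predictive values. This is not code from the paper; the sensitivity, specificity, and prevalence numbers in the example are hypothetical placeholders.

```python
# Minimal sketch (not the paper's code): PPV/NPV from sensitivity, specificity,
# and prevalence via Bayes' rule, plus interval bounds when only a prevalence
# range is known. All numerical values below are illustrative assumptions.

def ppv(sens: float, spec: float, prev: float) -> float:
    """P(ill | positive test result)."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

def npv(sens: float, spec: float, prev: float) -> float:
    """P(not ill | negative test result)."""
    true_neg = spec * (1 - prev)
    false_neg = (1 - sens) * prev
    return true_neg / (true_neg + false_neg)

def bound_predictive_values(sens: float, spec: float,
                            prev_lo: float, prev_hi: float) -> dict:
    # PPV is increasing in prevalence and NPV is decreasing in prevalence,
    # so bounds on prevalence map directly to bounds on PPV and NPV.
    return {
        "PPV": (ppv(sens, spec, prev_lo), ppv(sens, spec, prev_hi)),
        "NPV": (npv(sens, spec, prev_hi), npv(sens, spec, prev_lo)),
    }

if __name__ == "__main__":
    # Hypothetical antibody-test characteristics and a wide prevalence interval.
    print(bound_predictive_values(sens=0.90, spec=0.95, prev_lo=0.01, prev_hi=0.10))
```

Because PPV rises and NPV falls as prevalence increases, evaluating the Bayes-rule formulas at the endpoints of the prevalence interval is enough to obtain the bounds.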
Article activity feed
SciScore for 10.1101/2020.05.14.20102061: (What is this?)
Please note, not all rigor criteria are appropriate for all manuscripts.
Table 1: Rigor
NIH rigor criteria are not applicable to paper type.
Table 2: Resources
No key resources detected.
Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).
Results from LimitationRecognizer: An explicit section about the limitations of the techniques employed in this study was not found. We encourage authors to address study limitations.
Results from TrialIdentifier: No clinical trial numbers were referenced.
Results from Barzooka: We did not find any issues relating to the usage of bar graphs.
Results from JetFighter: We did not find any issues relating to colormaps.
Results from rtransparent:
- Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
- Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
- No protocol registration statement was detected.