Side-by-Side Comparison of Three Fully Automated SARS-CoV-2 Antibody Assays with a Focus on Specificity


Abstract

Background

In the context of the COVID-19 pandemic, numerous new serological test systems for the detection of anti-SARS-CoV-2 antibodies have rapidly become available. However, the clinical performance of many of these is still insufficiently described. We therefore compared 3 commercial, CE-marked SARS-CoV-2 antibody assays side by side.

Methods

We included a total of 1154 specimens collected before the COVID-19 pandemic and 65 samples from COVID-19 patients (≥14 days after symptom onset) to evaluate the performance of the SARS-CoV-2 serological assays from Abbott, Roche, and DiaSorin.

Results

All 3 assays presented with high specificities: 99.2% (98.6–99.7) for Abbott, 99.7% (99.2–100.0) for Roche, and 98.3% (97.3–98.9) for DiaSorin. In contrast to the manufacturers’ specifications, sensitivities ranged only from 83.1% to 89.2%. Although the 3 methods were in good agreement (Cohen’s kappa 0.71–0.87), McNemar tests revealed significant differences between the results obtained with Roche and DiaSorin. At low seroprevalence, however, even these minor differences in specificity translated into profound discrepancies in positive predictive values (PPVs): at 1% seroprevalence, PPVs were 52.3% (36.2–67.9) for Abbott, 77.6% (52.8–91.5) for Roche, and 32.6% (23.6–43.1) for DiaSorin.
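
For intuition, the positive predictive value follows from Bayes’ theorem: PPV = (sensitivity × prevalence) / (sensitivity × prevalence + (1 − specificity) × (1 − prevalence)). The following minimal Python sketch plugs in the reported specificities; the common 85% sensitivity is an illustrative assumption taken from within the reported 83.1–89.2% range, not the per-assay figures.

    def ppv(sensitivity, specificity, prevalence):
        """Positive predictive value via Bayes' theorem."""
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Reported specificities; the 0.85 sensitivity is illustrative only,
    # not the per-assay value from the paper.
    for name, spec in [("Abbott", 0.992), ("Roche", 0.997), ("DiaSorin", 0.983)]:
        print(f"{name}: PPV at 1% seroprevalence = {ppv(0.85, spec, 0.01):.1%}")

With these inputs the sketch yields roughly 52%, 74%, and 34%, close to the reported values; the residual gaps reflect the assumed common sensitivity.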

Conclusion

We found diagnostically relevant differences in specificities for the anti-SARS-CoV-2 antibody assays by Abbott, Roche, and DiaSorin that have a significant impact on the positive predictive values of these tests.

Article activity feed

  1. SciScore for 10.1101/2020.06.04.20117911:

    Please note that not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Institutional Review Board Statement:
    • Consent: All included participants gave written informed consent for donating their samples for scientific purposes.
    • IRB: It was reviewed and approved by the ethics committee of the Medical University of Vienna (1424/2020).
    Randomization: not detected.
    Blinding: not detected.
    Power Analysis: not detected.
    Sex as a biological variable: not detected.

    Table 2: Resources

    Software and Algorithms

    Sentence: Diagnostic sensitivity and specificity, as well as positive and negative predictive values, were calculated using MedCalc software 19.2.1 (MedCalc Ltd., Ostend, Belgium).
    Resource: MedCalc (suggested RRID: SCR_015044)

    Sentence: Figures were produced with MedCalc software 19.2.1 and GraphPad Prism 8 (GraphPad Software, San Diego, USA).
    Resource: GraphPad Prism (suggested RRID: SCR_002798)
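
    The reported point estimates and confidence intervals for sensitivity and specificity can, in principle, be reproduced with exact (Clopper-Pearson) binomial intervals. The Python sketch below assumes 95% intervals and an illustrative true-negative count; the exact per-assay counts are not restated in this summary.

        from scipy.stats import beta

        def clopper_pearson(k: int, n: int, alpha: float = 0.05):
            """Exact (Clopper-Pearson) confidence interval for k successes in n trials."""
            lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
            upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
            return lower, upper

        # Illustrative example: a specificity near 99.7% corresponds to roughly
        # 1151 true negatives out of the 1154 pre-COVID-19 specimens (assumed count).
        k, n = 1151, 1154
        lo, hi = clopper_pearson(k, n)
        print(f"specificity = {k / n:.1%}, 95% CI ({lo:.1%} to {hi:.1%})")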

    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Limitations are the moderate numbers of positive samples. Moreover, obtained sensitivities cannot easily be compared to other studies because of the unique feature of our COVID-19 cohort, including 80% non-hospitalized patients with mainly mild symptoms. The latter is highly relevant for a potential use of antibody tests to assess seroprevalence in large populations.

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We found bar graphs of continuous data. We recommend replacing bar graphs with more informative graphics, as many different datasets can lead to the same bar graph. The actual data may suggest different conclusions from the summary statistics. For more information, please see Weissgerber et al. (2015).
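
    As a concrete illustration of this recommendation, the sketch below (assuming matplotlib and synthetic data) plots individual data points with a median line instead of a bar; the two groups have similar means, so a bar graph would render them nearly identically.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        # Synthetic, illustrative data: similar means, very different distributions.
        group_a = rng.normal(1.0, 0.1, 30)
        group_b = np.concatenate([rng.normal(0.5, 0.05, 15), rng.normal(1.5, 0.05, 15)])

        fig, ax = plt.subplots()
        for i, data in enumerate([group_a, group_b], start=1):
            x = np.full_like(data, i) + rng.uniform(-0.08, 0.08, data.size)  # jitter
            ax.plot(x, data, "o", alpha=0.6)                                 # raw points
            ax.hlines(np.median(data), i - 0.15, i + 0.15, color="black")    # median
        ax.set_xticks([1, 2])
        ax.set_xticklabels(["Group A", "Group B"])
        ax.set_ylabel("Measurement")
        plt.show()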


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    About SciScore

    SciScore is an automated tool designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers) and for rigor criteria such as sex as a biological variable and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including the references cited, see the SciScore documentation.