Second round of an interlaboratory comparison of SARS-CoV2 molecular detection assays used by 45 veterinary diagnostic laboratories in the United States


Abstract

The COVID-19 pandemic presents a continued public health challenge. Veterinary diagnostic laboratories in the United States use RT-rtPCR for animal testing, and many laboratories are certified for testing human samples; hence, ensuring that laboratories have sensitive and specific SARS-CoV2 testing methods is a critical component of the pandemic response. In 2020, the FDA Veterinary Laboratory Investigation and Response Network (Vet-LIRN) led an interlaboratory comparison (ILC1) to help laboratories evaluate their existing RT-rtPCR methods for detecting SARS-CoV2. All participating laboratories were able to detect the viral RNA spiked in buffer and PrimeStore molecular transport medium (MTM). With ILC2, Vet-LIRN extended ILC1 by evaluating analytical sensitivity and specificity of the methods used by participating laboratories to detect 3 SARS-CoV2 variants (B.1; B.1.1.7 [Alpha]; B.1.351 [Beta]) at various copy levels. We analyzed 57 sets of results from 45 laboratories qualitatively and quantitatively according to the principles of ISO 16140-2:2016. More than 95% of analysts detected the SARS-CoV2 RNA in MTM at ≥500 copies for all 3 variants. In addition, for nucleocapsid markers N1 and N2, 81% and 92% of the analysts detected ≤20 copies in the assays, respectively. The analytical specificity of the evaluated methods was >99%. Participating laboratories were able to assess their current method performance, identify possible limitations, and recognize method strengths as part of a continuous learning environment to support the critical need for the reliable diagnosis of COVID-19 in potentially infected animals and humans.
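
As a rough illustration of the qualitative summary described above (not the authors' analysis code, and not the ISO 16140-2:2016 procedure itself), the sketch below tallies per-marker detection rates at each spike level and analytical specificity from negative samples. The sample records, marker names, and copy levels are hypothetical.

```python
# Illustrative sketch only: tallies qualitative ILC-style results per marker and
# spike level. The records below are hypothetical, not data from the study.
from collections import defaultdict

# Each record: (marker, copies_spiked, detected). copies_spiked == 0 denotes a
# negative (unspiked) sample, used here for analytical specificity.
results = [
    ("N1", 500, True), ("N1", 20, True), ("N1", 20, False), ("N1", 0, False),
    ("N2", 500, True), ("N2", 20, True), ("N2", 20, True), ("N2", 0, False),
]

def summarize(records):
    """Detection rate per (marker, copy level) and specificity per marker."""
    counts = defaultdict(lambda: [0, 0])  # (marker, copies) -> [n_detected, n_total]
    for marker, copies, detected in records:
        counts[(marker, copies)][1] += 1
        counts[(marker, copies)][0] += int(detected)

    summary = {}
    for (marker, copies), (hits, total) in sorted(counts.items()):
        if copies == 0:
            # Specificity: fraction of negative samples correctly reported negative.
            summary[f"{marker} specificity"] = (total - hits) / total
        else:
            # Sensitivity at this spike level: fraction of spiked samples detected.
            summary[f"{marker} detection at {copies} copies"] = hits / total
    return summary

for label, rate in summarize(results).items():
    print(f"{label}: {rate:.0%}")
```

A real evaluation would apply the laboratory's validated acceptance criteria and the ISO 16140-2:2016 calculations rather than this simplified tally.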

Article activity feed

  1. SciScore for 10.1101/2022.04.08.22273621:

    Please note, not all rigor criteria are appropriate for all manuscripts.

    Table 1: Rigor

    Ethics: not detected.
    Sex as a biological variable: not detected.
    Randomization: In Study-2 and Study-3, two sets of randomly chosen ILC2 samples (see their preparation below) were analyzed using the procedure described above: the first set (Study-2) was analyzed prior to the shipment day and the second set (Study-3) was analyzed two days after the shipment day, when the ILC2 participants started to test their samples.
    Blinding: A total of 59 sets of blind-coded samples were shipped on dry ice to the 45 participating laboratories (14 laboratories requested a second set of samples to test two methods or to test by two analysts).
    Power Analysis: not detected.

    Table 2: Resources

    No key resources detected.


    Results from OddPub: We did not detect open data. We also did not detect open code. Researchers are encouraged to share open data when possible (see Nature blog).


    Results from LimitationRecognizer: We detected the following sentences addressing limitations in the study:
    Organizers processed the submitted data using various statistical approaches that allowed them to identify possible weaknesses and strengths of the methods used, and to offer suggestions for improving participants’ performance in the future. Specifically, an important finding of the ILC2 is that individual analysts used different decision-making criteria when interpreting similar datasets. This indicates a need for laboratories to review data from this exercise and potentially reassess their decision-making criteria for interpreting Ct values when using multiple markers. The ILC2 study also indicates that the false-negative rate and sensitivity of some methods can be improved if the Ct cut-off values used are re-evaluated (e.g., on a test for which a too-stringent Ct cut-off value was originally used) and optimized by analysts accordingly. In the current era of rapidly developing methodology and a lack of international standards, participation in ILCs like this study is very beneficial. In contrast to other types of proficiency testing exercises that only aim to assess which results are correct or incorrect, this ILC revealed much more about the methods used and assisted participants in their continuous efforts to improve performance.
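
    To make the point about Ct cut-off criteria concrete, below is a minimal, hypothetical decision rule for a two-marker (N1/N2) assay. The cut-off value of 38 and the "both markers positive" logic are illustrative assumptions only, not the criteria used in the study or by any participating laboratory.

    ```python
    # Hypothetical interpretation rule for a two-marker RT-rtPCR result.
    # The Ct cut-off and the combination logic are illustrative assumptions;
    # each laboratory defines and validates its own criteria.

    def interpret(ct_n1, ct_n2, cutoff=38.0):
        """Classify a sample from N1/N2 Ct values (None = no amplification)."""
        n1_pos = ct_n1 is not None and ct_n1 <= cutoff
        n2_pos = ct_n2 is not None and ct_n2 <= cutoff
        if n1_pos and n2_pos:
            return "positive"
        if n1_pos or n2_pos:
            return "inconclusive"  # single-marker signal: typically retested
        return "negative"

    # A too-stringent cut-off can push a low-copy sample from positive to
    # inconclusive or negative, which is the false-negative risk noted above.
    print(interpret(36.5, 37.2, cutoff=38.0))  # positive
    print(interpret(36.5, 37.2, cutoff=37.0))  # inconclusive
    ```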

    Results from TrialIdentifier: No clinical trial numbers were referenced.


    Results from Barzooka: We did not find any issues relating to the usage of bar graphs.


    Results from JetFighter: We did not find any issues relating to colormaps.


    Results from rtransparent:
    • Thank you for including a conflict of interest statement. Authors are encouraged to include this statement when submitting to a journal.
    • Thank you for including a funding statement. Authors are encouraged to include this statement when submitting to a journal.
    • No protocol registration statement was detected.

    Results from scite Reference Check: We found no unreliable references.


    About SciScore

    SciScore is an automated tool that is designed to assist expert reviewers by finding and presenting formulaic information scattered throughout a paper in a standard, easy-to-digest format. SciScore checks for the presence and correctness of RRIDs (research resource identifiers), and for rigor criteria such as sex as a biological variable and investigator blinding. For details on the theoretical underpinning of the rigor criteria and the tools shown here, including references cited, please follow this link.