Pharmacometrics of high-dose ivermectin in early COVID-19 from an open label, randomized, controlled adaptive platform trial (PLATCOV)

Curation statements for this article:
  • Curated by eLife


    eLife assessment

    This highly important paper uses a Bayesian linear regression approach in a clinical trial to establish that ivermectin does not increase the clearance rate of SARS-CoV-2 relative to no study drug. The strength of evidence is compelling. Particular strengths are the clear writing, the novel and important adaptive study design, and the linear mixed modeling to account for participant heterogeneity. The work will be of interest to clinicians, statisticians, and public health departments.


Abstract

There is no generally accepted methodology for in vivo assessment of antiviral activity in SARS-CoV-2 infections. Ivermectin has been recommended widely as a treatment of COVID-19, but whether it has clinically significant antiviral activity in vivo is uncertain.

Methods:

In a multicentre, open-label, randomized, controlled, adaptive platform trial, adult patients with early symptomatic COVID-19 were randomized to one of six treatment arms including high-dose oral ivermectin (600 µg/kg daily for 7 days), the monoclonal antibodies casirivimab and imdevimab (600 mg/600 mg), and no study drug. The primary outcome was the comparison of viral clearance rates in the modified intention-to-treat population. This was derived from daily log10 viral densities in standardized duplicate oropharyngeal swab eluates. This ongoing trial is registered at https://clinicaltrials.gov/ (NCT05041907).

Results:

Randomization to the ivermectin arm was stopped after enrolling 205 patients into all arms, as the prespecified futility threshold was reached. Following ivermectin, the mean estimated rate of SARS-CoV-2 viral clearance was 9.1% slower (95% confidence interval [CI] –27.2% to +11.8%; n=45) than in the no drug arm (n=41), whereas in a preliminary analysis of the casirivimab/imdevimab arm it was 52.3% faster (95% CI +7.0% to +115.1%; n=10 (Delta variant) vs. n=41).

Conclusions:

High-dose ivermectin did not have measurable antiviral activity in early symptomatic COVID-19. Pharmacometric evaluation of viral clearance rate from frequent serial oropharyngeal qPCR viral density estimates is a highly efficient and well-tolerated method of assessing SARS-CoV-2 antiviral therapeutics in vivo.

Funding:

‘Finding treatments for COVID-19: A phase 2 multi-centre adaptive platform trial to assess antiviral pharmacodynamics in early symptomatic COVID-19 (PLAT-COV)’ is supported by the Wellcome Trust Grant ref: 223195/Z/21/Z through the COVID-19 Therapeutics Accelerator.

Clinical trial number:

NCT05041907.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    This well-done platform trial identifies that ivermectin has no impact on SARS-CoV-2 viral clearance rate relative to no study drug while casirivimab led to more rapid clearance at 5 days. The figures are simple and appealing. The study design is appropriate and the analysis is sound. The conclusions are generally well supported by the analysis. Study novelty is somewhat limited by the fact that ivermectin has already been definitively assessed and is known to lack efficacy against SARS-CoV-2. Several issues warrant addressing:

    1. Use of viral load clearance is not unique to this study and was part of multiple key trials studying paxlovid, remdesivir, molnupiravir, and monoclonal antibodies. The authors neglect to describe a substantial literature on viral load surrogate endpoints of therapeutic efficacy which exist for HIV, hepatitis B and C, Ebola, HSV-2, and CMV. For SARS-CoV-2, the story is more complicated as several drugs with proven efficacy were associated with a decrease in nasal viral loads whereas a trial of early remdesivir showed no reduction in viral load despite a 90% reduction in hospitalization. In addition, viral load kinetics have not been formally identified as a true surrogate endpoint. For maximal value, a reduction in viral load would be linked with a reduction in a hard clinical endpoint in the study (reduction in hospitalization and/or death, decreased symptom duration, etc...). This literature should be discussed and data on the secondary outcome, and reduction in hospitalization should be included to see if there is any relationship between viral load reduction and clinical outcomes.

    This is an important point and we thank the reviewer for raising it. We agree that there is a rich literature on the use of viral load kinetics in optimizing treatment of viral infectious diseases, and we are clearly not the first to think of it! We have added the following sentence in the discussion.

    “The method of assessing antiviral activity in early COVID-19 reported here builds on extensive experience of antiviral pharmacodynamic assessments in other viral infections.”

    We agree that more information is needed to link viral clearance measures to clinical outcomes. We have addressed this in the discussion as follows:

    “Using less frequent nasopharyngeal sampling in larger numbers of patients, clinical trials of monoclonal antibodies, molnupiravir and ritonavir-boosted nirmatrelvir, have each shown that accelerated viral clearance is associated with improved clinical outcomes [1,4,5]. These data suggest reduction in viral load could be used as a surrogate of clinical outcome in COVID-19. In contrast the PINETREE study, which showed that remdesivir significantly reduced disease progression in COVID-19, did not find an association between viral clearance and therapeutic benefit. This seemed to refute the usefulness of viral clearance rates as a surrogate for rates of clinical recovery [16]. However, the infrequent sampling in all these studies substantially reduced the precision of the viral clearance estimates (and thus increased the risk of type 2 errors). Using the frequent sampling employed in the PLATCOV study, we have shown recently that remdesivir does accelerate SARS-CoV-2 viral clearance [17], as would be expected from an efficacious antiviral drug. This is consistent with therapeutic responses in other viral infections [18, 19]. Taken together the weight of evidence suggests that accelerated viral clearance does reflect therapeutic efficacy in early COVID-19, although more information will be required to characterize this relationship adequately.”

    2. The statement that oropharyngeal swabs are much better tolerated than nasal swabs is subjective. More detail needs to be paid to the relative yield of these approaches.

    The statement is empirical. We know of other studies in progress where there are high rates of discontinuation because of patient intolerance of repeated nasopharyngeal sampling. Not one of 750 patients enrolled to date in PLATCOV has refused sampling, which we believe is useful information for research involving multiple sampling. This is clearly a critical point for pharmacodynamic studies.

    We agree that the optimal site of swabbing for SARS-CoV-2 and relative yields for the given test requirements (sensitivity vs quantification) need to be considered, although the literature on this is large and sometimes contradictory.

    We have added the following line:

    “Oropharyngeal viral loads have been shown to be both more and less sensitive for the detection of SARS-CoV-2 infection. Although rates of clearance are very likely to be similar from the two body sites, this should be established for comparison with other studies.”

    3. The stopping rules as they relate to previously modeled serial viral loads are not described in sufficient detail.

    The initial stopping rules were chosen based on previously modelled data (reference 11). We have added details to the text (lines 199-219):

    “Under the linear model, for each intervention the treatment effect β is encoded as a multiplicative term on the time since randomisation: e^(βT), where T=1 if the patient was assigned the intervention, and zero otherwise. Under this specification, β=0 implies no effect (no change in slope), and β>0 implies an increase in slope relative to the population mean slope. Stopping rules are then defined with respect to the posterior distribution of β, with futility defined as Prob[β<λ]>0.9 and success defined as Prob[β>λ]>0.9, where λ≥0. Larger values of λ imply a smaller sample size to stop for futility but a larger sample size to stop for efficacy. λ was chosen so that it would result in reasonable sample size requirements, as determined using a simulation approach based on previously modelled serial viral load data [11]. This modelling work suggested that a value of λ=log(1.05) [i.e. a 5% increase] would require approximately 50 patients to demonstrate increases in the rate of viral clearance of ~50%, with control of both type 1 and type 2 errors at 10%. The first interim analysis (n=50) was prespecified as unblinded in order to review the methodology and the stopping rules (notably the value of λ). Following this, the stopping threshold was increased from 5% to 12.5% [λ=log(1.125)] because the treatment effect of casirivimab/imdevimab against the SARS-CoV-2 Delta variant was larger than expected and the estimated residual error was greater than previously estimated. Thereafter trial investigators were blinded to the virus clearance results. Interim analyses were planned after each additional batch of 25 patients’ PCR data; however, because of delays in setting up the PCR analysis pipeline, the second interim analysis was delayed until April 2022. By that time data from 145 patients were available (29 patients randomized to ivermectin and 26 patients randomized to no study drug).”
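
    For illustration, the stopping probabilities described above can be evaluated directly from posterior draws of β. The following is a minimal Python sketch, assuming draws are available from a fitted Bayesian model; it is not the trial's analysis code:

        # Apply the futility/success rules to posterior draws of the treatment
        # effect beta (the log multiplicative change in the clearance slope).
        import numpy as np

        def stopping_decision(beta_draws, lam=np.log(1.125), prob_threshold=0.9):
            p_below = np.mean(beta_draws < lam)   # Prob[beta < lambda]
            p_above = np.mean(beta_draws > lam)   # Prob[beta > lambda]
            if p_below > prob_threshold:
                return "stop arm: futility"
            if p_above > prob_threshold:
                return "stop arm: success"
            return "continue enrolment"

        # Hypothetical draws centred near zero (i.e. no antiviral effect):
        rng = np.random.default_rng(0)
        print(stopping_decision(rng.normal(loc=0.0, scale=0.05, size=4000)))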

    4. The lack of blinding limits any analysis of symptomatic outcomes.

    We added this line to the discussion:

    “Finally, although not primarily a safety study, the lack of blinding compromises safety or tolerability assessments.”

    5. It is unclear whether all 4 swabs from 2 tonsils are aggregated. Are the swabs placed in a single tube and analyzed?

    The data are not aggregated but treated as independent and identically distributed under the linear model. Four swabs were taken at randomization, followed by two at each follow-up visit. We have added line 183:

    “[..] (18 measurements per patient, each swab is treated as independent and identically distributed conditional on the model).”

    Swabs were stored separately and not aggregated.
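
    To make this data structure concrete, the Python sketch below (hypothetical numbers, and ordinary least squares rather than the hierarchical Bayesian model actually used) estimates one patient's clearance slope and half-life from 18 duplicate-swab measurements, treating each swab as an independent observation:

        # One patient: 4 swabs at randomization (day 0), then 2 per daily visit
        # on days 1-7, giving 18 log10 viral density measurements in total.
        import numpy as np

        days = np.array([0, 0, 0, 0] + [d for d in range(1, 8) for _ in range(2)])
        log10_vl = np.array([7.1, 6.9, 7.0, 7.2, 6.5, 6.4, 5.9, 6.1, 5.2, 5.4,
                             4.8, 4.9, 4.1, 4.3, 3.6, 3.8, 3.1, 3.0])

        slope, intercept = np.polyfit(days, log10_vl, deg=1)   # log10 units/day
        half_life_hours = 24 * np.log10(2) / -slope            # time for density to halve
        print(f"slope: {slope:.2f} log10/day, half-life: {half_life_hours:.1f} h")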

    6. In supplementary Figure 7, both models do well in most circumstances but fail in the relatively common event of non-monotonic viral kinetics (multiple peaks, rebound events). Given the importance of viral rebound during paxlovid use, an exploratory secondary analysis of this outcome would be welcome.

    Thank you for the suggestion. We agree, although the primary goal is to estimate the mean change in slope. Rebound is a relatively rare event and tends to occur after the first seven days of illness, during which we are assessing the rate of clearance.

    Nevertheless, we agree that this is an important point. It remains unclear how to model viral rebound. In over 700 profiles now available from the study, only a few have strong evidence of viral rebound.

    Reviewer #2 (Public Review):

    This manuscript details the analytic methods and results of one arm of the PLATCOV study, an adaptive platform designed to evaluate low-cost COVID-19 therapeutics through enrollment of a comparatively smaller number of persons with acute COVID-19, with the goal of evaluating the rate of decrease in SARS-CoV-2 clearance compared to no treatment through frequent swabbing of the oropharynx and a Bayesian linear regression model, rather than clinical outcomes or the more routinely evaluated blunt virologic outcomes employed in larger trials. Presented here is the in vivo virologic analysis of ivermectin, with a very small sample of participants who received casirivimab/imdevimab, a drug shown to be highly effective at preventing COVID-19 progression and improving viral clearance (during circulation of variants to which it had activity), included for comparison for model evaluation.

    The manuscript is well-written and clear. It could benefit, however, from a few clarifications on methods and results to further strengthen the discussion of the model and accurately report the results, as detailed below.

    Strengths of this study design and its report include:

    1. Selection of participants with presumptive high viral loads or viral burden by antigen test, as prior studies have shown difficulty in detecting effect in those with a lower viral burden.
    2. Adaptive sample size based on modeling- something that fell short in other studies based on changing actuals compared to assumptions, depending on circulating variant and "risk" of patients (comorbidities, vaccine state, etc) over time. There have been many other negative studies because the a priori outcomes assumptions were different from the study design to the time of enrollment (or during the enrollment period). This highlight of the trial should be emphasized more fully in the discussion.
    3. Higher dose and longer course of ivermectin than TOGETHER trial and many other global trials: 600 µg/kg/day vs 400 µg/kg/day.
    4. Admission of trial participants for frequent oropharyngeal swabbing vs infrequent sampling and blunter analysis methods used in most reported clinical trials.
    5. Linear mixed modeling allows for heterogeneity in participants and study sites, especially taking the number of vaccine doses, variant, age, and serostatus into account- all important variables that are not considered in more basic analyses.
    6. The novel outcome being the change in the rate of viral clearance, rather than time to the undetectable or unquantifiable virus, which is sensitive, despite a smaller sample size.
    7. Discussion highlights the importance of frequent oral sampling and use of this modeled outcome for the design of both future COVID-19 studies and other respiratory viral studies, acknowledging that there are no accepted standards for measuring virologic or symptom outcomes, and many studies have failed to demonstrate such effects despite succeeding at preventing progression to severe clinical outcomes such as hospitalization or death. This study design and analyses are highly important for the design of future studies of respiratory viral infections or possibly early-phase hepatitis virus infections.

    Weaknesses or room for improvement:

    1. The methods do not clearly describe allocation to either ivermectin or casirivimab/imdevimab or both or neither. Yes, the full protocol is included, but the platform randomization could be briefly described more clearly in the methods section.

    We have added additional text to the Methods:

    “The no study drug arm comprised a minimum proportion of 20% and uniform randomization ratios were then applied across the treatment arms. For example, for 5 intervention arms and the no study drug arm, 20% of patients would be randomized to no study drug and 16% to each of the 5 interventions. Additional details on the randomization are provided in the Supplementary Materials. All patients received standard symptomatic treatment.”
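
    A small Python sketch of the allocation arithmetic described in this added text (an illustrative helper; the function name and interface are assumptions, not trial code):

        # 20% floor for the no-study-drug arm; the remainder is split uniformly
        # across the active intervention arms.
        def allocation_ratios(n_active_arms, no_drug_floor=0.20):
            per_active = (1.0 - no_drug_floor) / n_active_arms
            return {"no study drug": no_drug_floor,
                    **{f"intervention {i + 1}": per_active for i in range(n_active_arms)}}

        print(allocation_ratios(5))  # 0.20 no study drug, 0.16 for each of 5 arms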

    2. The handling of unquantifiable or undetectable viruses in the models is not clear in either the manuscript or supplemental statistical analysis information. Are these values imputed, or is data censored once below the limits of quantification or detection? How does the model handle censored data, if applicable?

    We have added lines 185-186:

    “Viral loads below the lower limit of quantification (CT values ≥40) were treated as left-censored under the model with a known censoring value.”
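
    One common way to implement such left-censoring is to replace the density with the cumulative probability of falling at or below the limit of quantification. The Python sketch below illustrates the idea for a single Gaussian observation; it is an assumption about implementation, not the trial's code:

        # Log-likelihood contribution of one log10 viral load observation when
        # values at or below the lower limit of quantification (LLOQ) are
        # left-censored: use the normal CDF instead of the density.
        from scipy.stats import norm

        def log_lik_obs(y, mu, sigma, lloq):
            if y <= lloq:                      # censored: only know that y <= LLOQ
                return norm.logcdf(lloq, loc=mu, scale=sigma)
            return norm.logpdf(y, loc=mu, scale=sigma)

        print(log_lik_obs(1.0, mu=2.0, sigma=1.0, lloq=1.0))  # censored observation
        print(log_lik_obs(3.5, mu=2.0, sigma=1.0, lloq=1.0))  # quantifiable observation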

    3. Did the study need to be unblinded prior to the first interim analysis? Could the adaptive design with the first analysis have been done with only one or a subset of statisticians unblinded prior to the decision to stop enrolling in the ivermectin arm?

    The unblinded interim analysis was done on the first 50 patients enrolled in the study. The study at that time was enrolling into five arms including ivermectin, casirivimab-imdevimab, remdesivir, favipiravir, and a no study drug arm (there were exactly 10 per arm as a result of the block randomization).

    The main rationale for making this interim analysis unblinded was to determine the most reasonable value of λ (which defines stopping for futility/success). Choosing λ involves a trade-off between information gain, reasonable sample size expectations, and the balance between quickly identifying interventions with antiviral activity and the certainty of stopping for futility.

    Once the value of 12.5% was decided, the trial investigators remained blinded to the results until the stopping rules were met, at which point the unblinded statistician discussed the results with the independent Data Safety and Monitoring Board, which agreed to unblind the ivermectin arm.

    4. Can the authors comment on why the interim analysis occurred prior to the enrollment of 50 persons in each of the ivermectin and comparison arms? Even though the sample sizes were close (41 and 45 persons), the trigger for interim analysis was pre-specified.

    After the first interim analysis, triggered at 50 patients enrolled into the study, interim analyses were planned after every additional 25 patients (i.e. very frequently). The trigger was not 50 patients in a specific arm, but 50 patients in total, with subsequent analyses planned after every 25 new patients enrolled into the study. In practice there were backlogs in the data pipeline (which we explain), and interim analyses occurred less frequently than planned; the second took place in April 2022.

    5. The reporting of percent change for the intervention arms is overstated. All credible intervals cross zero: the clearance for ivermectin is stated to be 9% slower, but the CI includes + and - %, so it should be reported as "not different." Similarly, and more importantly for casirivimab/imdevimab, it was reported to be 52% faster, although the CI is -7.0 to +115%. This is likely a real difference, but with ten participants underpowered - and this is good to discuss. Instead, please report that the estimate was faster, but that it was not statistically significant. Similarly, the clearance half-life for ivermectin is not different, rather than "slower" as reported (CI was -2 to +6.6 hours). This result was however statistically significant for casirivimab/imdevimab.

    Thank you for your comments. The confidence interval for casirivimab/imdevimab did not cross zero and was +7.0 to +115.1%, and we thank the reviewer for picking up the error in the results section (it was correct in the abstract) where it was written -7.0 to +115.1%. We have made this correction. Elsewhere, we have provided more precise language to discriminate clinical significance from statistical significance, as per the essential revisions.

    6. While the use of oropharyngeal swabs is relatively novel for a clinical trial, and they have been validated for diagnostic purposes, the results of this study should discuss external validity, especially with respect to results from other studies that mainly use nasopharyngeal or nasal swab results. For example, oropharyngeal viral loads have been variably shown to be more sensitive for the detection of infection, or conversely to have 1-log lower viral loads compared to NP swabs. Because these models look for longitudinal change within a single sampling technique, they do not impact internal validity but may impact comparisons to other studies or future study designs.

    We have added the following sentence to the discussion:

    “Oropharyngeal viral loads have been shown to be both more and less sensitive for the detection of SARS-CoV-2 infection. Although rates of viral clearance are very likely to be similar from the two sites, this should be established for comparison with other studies.”

    7. Caution should be used around the term "clinically significant" for viral clearance. There is not an agreed-upon rate of clinically significant clearance, nor is there a log10 threshold that is agreed to be non-transmissible despite moderately strong correlations with the ability to culture virus or with antigen results at particular thresholds.

    We agree. We have addressed this partly in our response to Reviewer 1.

    8. Additional discussion could also clarify that certain drugs, such as remdesivir, have shown in vivo activity in the lungs of animal models and improvement in clinical outcomes in people, but without change in viral endpoints in nasopharyngeal samples (PINETREE study, Gottlieb, NEJM 2022). Therefore, this model must be interpreted as no evidence of antiviral activity in the pharyngeal compartment, rather than a complete lack of in vivo activity of agents given the limitations of accessible and feasible sampling. That said, strongly agree with the authors about the conclusion that ivermectin is also likely to lack activity in humans based on the results of this study and many other clinical studies combined.

    As above, this has been addressed in our response to Reviewer 1.

    Reviewer #3 (Public Review):

    This is a well-conducted phase 2 randomized trial testing outpatient therapeutics for Covid-19. In this report of the platform trial, they test ivermectin, demonstrating no virologic effect in humans with Covid-19.

    Overall, the authors' conclusions are supported by the data.

    The major contribution is their implementation of a new model for Phase 2 trial design. Such designs would have been ideal earlier in the pandemic.

    We thank the reviewer for their encouraging comments.
