Choosing among anchored indirect comparison methods in health technology assessment: simulation evidence and a practical decision framework
Abstract
Background Decision makers in health technology assessment (HTA) rely on network meta-analysis (NMA) to synthesize incomplete head-to-head evidence, yet imbalances in effect modifiers can bias transported relative effects. Adjustment options include study-level network meta-regression (NMR), multilevel network meta-regression (ML-NMR), matching-adjusted indirect comparison (MAIC), simulated treatment comparison (STC), and network meta-interpolation (NMI). Evidence on comparative performance under graded violations of shared effect modification, and practical guidance linking method choice to testable assumptions, remains limited.

Methods We built a four-trial anchored network and defined Trial 3 as the target population. Binary outcomes were generated on the probit scale with two covariates. We examined three scenarios for shared effect modification: shared interactions, interactions for treatment C stronger by one half, and interactions in opposite directions. Each scenario comprised fifty replications. All methods estimated B versus C in the Trial 3 population on the probit scale. Performance metrics were bias, root mean squared error, and empirical coverage of the nominal 95% interval.

Results ML-NMR showed the smallest bias and coverage closest to the nominal level across scenarios. Study-level NMR maintained reasonable precision when shared effect modification held but showed wider intervals and greater bias when interactions differed. MAIC and STC were nearly unbiased when shared effect modification was valid and the key modifiers were correctly identified, but both developed bias and undercoverage when interaction strength or direction diverged. NMI was comparatively stable, with reliability dependent on covariance stability. Conventional NMA performed worst under pronounced covariate mismatch.

Conclusions Analysts should first test prerequisite assumptions, including shared effect modification and, where relevant, homoscedasticity. Aligning method choice with verified assumptions reduces misspecification risk in health technology assessment and supports transparent, credible decisions.
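To illustrate the general shape of the simulation described in the Methods, the sketch below generates binary outcomes from a probit model with two covariates and treatment-by-covariate interactions, then summarizes estimates over replications by bias, root mean squared error, and empirical 95% coverage. It is a minimal sketch under assumed values: all parameters, sample sizes, function names, and the simplified two-arm trial structure are illustrative and are not taken from the paper's actual data-generating process.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2024)

# Illustrative probit-scale parameters (assumed, not the paper's values):
# intercept, treatment effects vs. reference A, prognostic covariate effects,
# and treatment-by-covariate interaction (effect-modifier) terms.
beta0 = -0.5
delta = {"A": 0.0, "B": 0.4, "C": 0.6}
gamma = np.array([0.3, -0.2])
interaction = {
    "B": np.array([0.2, 0.1]),
    "C": np.array([0.2, 0.1]),  # equal vectors = shared effect modification
}

def simulate_trial(treatments, cov_means, n=300):
    """Simulate one trial: binary outcomes generated on the probit scale."""
    arms = []
    for trt in treatments:
        x = rng.normal(loc=cov_means, scale=1.0, size=(n, 2))  # two covariates
        lin = beta0 + delta[trt] + x @ gamma + x @ interaction.get(trt, np.zeros(2))
        y = rng.binomial(1, norm.cdf(lin))  # probit link
        arms.append((trt, x, y))
    return arms

def performance(estimates, ses, truth):
    """Bias, RMSE, and empirical 95% coverage across replications."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    bias = estimates.mean() - truth
    rmse = np.sqrt(((estimates - truth) ** 2).mean())
    lower, upper = estimates - 1.96 * ses, estimates + 1.96 * ses
    coverage = ((lower <= truth) & (truth <= upper)).mean()
    return bias, rmse, coverage
```

In this sketch, the scenario with stronger or opposite-direction interactions for treatment C would be obtained by modifying the "C" entry of the interaction dictionary, and each scenario would call simulate_trial repeatedly (fifty replications in the paper) before passing the resulting estimates to performance().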