Experiment-based calibration: inference and decision-making



Abstract

Experiment-based calibration is an emerging approach for measurement validation. It compares multiple measurement methods by how well each reproduces a known experimental effect, which is informative about measurement accuracy. Calibration entails statistical questions unparalleled in classical validation approaches. The first question concerns inference: when should we conclude that one measurement method is truly more accurate than another? The second concerns decisions: when should we decide that a method merits the investment of changing a measurement system? In this note, we review the particular challenges that arise in a calibration process: a potentially large and a priori unknown number of measurement methods, a requirement to integrate evidence across multiple calibration samples, and the possibility that some methods are not available for all samples. We show that Bayesian meta-analytic model comparison is a suitable framework for inference in calibration, and propose a decision-theoretic approach to calculate the immediate economic gain garnered through reduced sample sizes. To overcome the practical hurdles associated with the analysis of calibration experiments, we furnish calibr, an R package for calibration inference.
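The core comparison described above, scoring each measurement method by how well it reproduces a known experimental effect and integrating that evidence across calibration samples, can be sketched as an accumulated log Bayes factor. The following is a minimal illustrative sketch, not calibr's actual API; the data, the normal error model, and the two-method setup are all assumptions for the example.

```python
import math

def normal_logpdf(x, mu, sigma):
    """Log density of a normal distribution at x."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

# Hypothetical calibration data: a known experimental effect and, for each
# calibration sample, the effect estimate (with standard error) obtained
# under each of two measurement methods, A and B.
true_effect = 1.0
samples = [
    # (estimate_A, se_A, estimate_B, se_B)
    (1.05, 0.10, 0.80, 0.10),
    (0.95, 0.12, 1.30, 0.12),
    (1.10, 0.11, 0.70, 0.11),
]

# Meta-analytic evidence accumulation: the log Bayes factor comparing
# "method A reproduces the known effect" against "method B does" is the
# sum of per-sample log-likelihood ratios under the assumed normal model.
log_bf = 0.0
for est_a, se_a, est_b, se_b in samples:
    log_bf += normal_logpdf(est_a, true_effect, se_a)
    log_bf -= normal_logpdf(est_b, true_effect, se_b)

print(f"log Bayes factor (A vs B): {log_bf:.2f}")  # positive favors method A
```

A sample for which one method is unavailable would simply contribute no term to the sum, which is one way the framework accommodates methods missing for some samples.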