Reliability, bias and randomisation in peer review: a simulation
Abstract
For a variety of reasons, including the need to save time and the desire to reduce bias in outcomes, some research funders have begun to use partial randomisation in their funding decision processes. It is argued here that the effect of randomisation interventions on the reliability of those processes should be a consideration in their use, yet this key aspect of their implementation remains under-appreciated. Using a simple specification of a research proposal peer review process, simulations are carried out to explore how decision reliability, bias, the extent of decision randomisation and other factors interact. As might be expected from both logic and existing knowledge, randomisation has the potential to reduce bias, but it may also reduce decision reliability, as inferred from the F1 score and accuracy of a simulated binary (successful, rejected) classification of decision outcomes. Bias is also found, in one sense and qualitatively, to be rather insensitive to partial randomisation as it is typically applied in real-world situations. The simple yet apparently effective specification of simulated reviewer scores implemented here may also offer insights into the distribution of merit across research funding proposals, and into how that merit is assessed.
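The process summarised above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's actual model: proposals are given a latent merit, reviewers observe merit plus noise, part of the budget is awarded on score and the remainder by lottery among the next-best scorers, and reliability is measured as the F1 score and accuracy of the funding decision against a merit-only "ground truth". All parameter names and the size of the lottery pool are assumptions made for this sketch.

```python
import random

def simulate(n_proposals=100, n_reviewers=3, noise=0.3,
             n_funded=20, lottery_fraction=0.5, seed=0):
    """Sketch of a partially randomised review process (illustrative only)."""
    rng = random.Random(seed)
    # Latent merit of each proposal; reviewers see merit plus Gaussian noise,
    # and their scores are averaged.
    merit = [rng.random() for _ in range(n_proposals)]
    score = [m + sum(rng.gauss(0.0, noise) for _ in range(n_reviewers)) / n_reviewers
             for m in merit]

    ranked = sorted(range(n_proposals), key=lambda i: score[i], reverse=True)
    n_direct = int(n_funded * (1 - lottery_fraction))  # funded on score alone
    direct = ranked[:n_direct]
    # Lottery pool: twice as many next-best proposals as remaining slots
    # (an assumption; real schemes define the pool differently).
    n_lottery = n_funded - n_direct
    pool = ranked[n_direct:n_direct + 2 * n_lottery]
    lottery = rng.sample(pool, n_lottery)
    funded = set(direct) | set(lottery)

    # "Correct" outcome: the n_funded proposals of highest latent merit.
    truth = set(sorted(range(n_proposals),
                       key=lambda i: merit[i], reverse=True)[:n_funded])

    tp = len(funded & truth)
    fp = len(funded - truth)
    fn = len(truth - funded)
    tn = n_proposals - tp - fp - fn
    accuracy = (tp + tn) / n_proposals
    if tp == 0:
        f1 = 0.0
    else:
        precision, recall = tp / (tp + fp), tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
    return f1, accuracy
```

With `noise=0` and `lottery_fraction=0` the decision reproduces the merit ranking exactly (F1 and accuracy of 1.0); increasing either the noise or the lottery fraction degrades both metrics, which is the reliability trade-off the abstract describes.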