Open Science in Impact Evaluation: What Impact Evaluators can Learn from the Replication Crisis in Social Psychology


Abstract

Since 2011, the academic field of social psychology has been undergoing a crisis of confidence in its results. This so-called "replication crisis" has inspired widespread examination of the practices that can lead to less-than-credible results, the incentives that encourage such practices, and the practices that could protect or enhance research credibility. We believe that the credibility of the field of impact evaluation may be similarly threatened by incentives to use poor research practices. We assess the histories of both social psychology and impact evaluation and conclude that, although the incentives facing impact evaluators differ in kind from those facing social psychologists, impact evaluators may be similarly incentivized to make their results look better than they really are. We review three parallels in the history of the two fields: the use of research as sales, the pressurized, competitive environments in which the work takes place, and the use of research methods as rhetorical devices. Impact evaluators may be able to learn from the credibility-enhancing solutions that worked for social psychology, though the field can also consider new tools, such as standards, investment mechanisms, administrative data labs, and innovations in evidence-sharing and synthesis. Rather than sleepwalking through a crisis we cannot see, impact evaluators should consider that their field may be in crisis, one that may require widespread and collective efforts, a credibility renaissance, to discover and deploy methods to protect its legitimacy.