Another Caution for Difference-in-Differences: Expected Gains
Abstract
Many interventions rely on voluntary participation in the treatment group, and difference-in-differences (DID) models are frequently used to estimate the effect of the treatment on the treatment group relative to the untreated control group. Expected gains, in the form of resolve or capacity to adhere to the intervention, are likely to be unobserved by the analyst and to affect outcomes only after the subject learns the actual content of the intervention. When an omitted variable is both time-varying and subject-varying, it will be undetectable by all the usual DID specification tests, including tests of the parallel trends assumption, and will not be corrected by the standard two-way fixed effects model. Both the internal and external validity of the estimated treatment effect can be threatened; whether the estimates are biased from a policy standpoint depends on how the intervention would be expanded if it proves successful. When the analyst suspects that unobserved expected gains are a source of bias in a DID model, a number of appropriate econometric methods are available that double as specification tests. We provide a simulation example to show how the problem arises and how it can be addressed.
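The mechanism described above can be illustrated with a minimal simulation sketch (our own construction, not the paper's design): an unobserved "expected gains" term that is zero in the pre-period, so pre-trends are parallel, but that shifts treated outcomes in the post-period once subjects learn the intervention's content. The two-group, two-period DID estimator then recovers the true effect plus the mean expected gain among the treated, even though no pre-trend test could flag the problem. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000      # subjects per group (illustrative)
tau = 1.0     # true average treatment effect
trend = 0.3   # common time trend shared by both groups

# Unobserved expected gains (resolve/capacity to adhere): they
# operate only AFTER treated subjects learn the intervention
# content, i.e. only in the post-period of the treated group.
g = rng.normal(0.5, 1.0, n)   # E[g | treated] = 0.5 -> bias

# Pre-period outcomes: identically distributed, parallel trends hold
y0_treat = rng.normal(0.0, 1.0, n)
y0_ctrl = rng.normal(0.0, 1.0, n)

# Post-period outcomes: gains g enter only for the treated group
y1_ctrl = y0_ctrl + trend + rng.normal(0.0, 1.0, n)
y1_treat = y0_treat + trend + tau + g + rng.normal(0.0, 1.0, n)

# Two-group, two-period DID (equivalent to two-way FE here)
did = (y1_treat.mean() - y0_treat.mean()) - (y1_ctrl.mean() - y0_ctrl.mean())
print(f"true effect = {tau:.2f}, DID estimate = {did:.2f}")
# The estimate converges to tau + E[g | treated], not tau:
# the unobserved gains are absorbed into the treatment effect.
```

Because g is zero in the pre-period, a parallel-trends test on pre-period data would pass by construction; the bias appears only through the post-period outcomes, which is exactly why the usual specification tests cannot detect it.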