Munich Personal RePEc Archive

Inference in Differences-in-Differences with Few Treated Groups and Heteroskedasticity

Ferman, Bruno and Pinto, Cristine (2015): Inference in Differences-in-Differences with Few Treated Groups and Heteroskedasticity.

There is a more recent version of this item available.
Abstract

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, inference in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to under-reject the null hypothesis when the treated groups are large relative to the control groups, and to over-reject it when the treated groups are small. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. We only need to know the structure of the heteroskedasticity of a linear combination of the errors, which implies that we do not need strong assumptions on the intra-group and serial correlation structure of the errors. In simulations with real datasets, our method provided accurate hypothesis testing with one treated and 24 control groups. Finally, we also show that an inference method for the Synthetic Control Estimator proposed by Abadie et al. (2010) can correct for the heteroskedasticity problem, and we derive conditions under which this inference method provides accurate hypothesis testing.
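The mechanism described in the abstract can be illustrated with a small simulation. The sketch below is not the authors' method; it is a minimal placebo exercise under assumed parameters (one treated group, 24 control groups, iid individual-level errors) showing why a test that benchmarks the treated-group estimate against the spread of control-group means over-rejects when the treated group is small (its mean is noisier than the control means) and under-rejects when it is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def placebo_rejection_rate(n_treated, n_control, n_ctrl_groups=24,
                           n_sims=5000):
    """Share of placebo tests rejecting a true null when the treated-group
    estimate is compared against the empirical spread of control-group
    means, implicitly assuming homoskedasticity across groups."""
    rejections = 0
    for _ in range(n_sims):
        # Group means of iid N(0,1) individual errors: variance = 1/n_g,
        # so group size drives heteroskedasticity in the aggregate model.
        treated_mean = rng.normal(0.0, 1.0 / np.sqrt(n_treated))
        control_means = rng.normal(0.0, 1.0 / np.sqrt(n_control),
                                   n_ctrl_groups)
        effect = treated_mean - control_means.mean()
        # Standard error taken from the dispersion of control means only.
        se = control_means.std(ddof=1)
        if abs(effect) / se > 1.96:
            rejections += 1
    return rejections / n_sims

# Nominal size is 5%. A small treated group among large controls
# over-rejects; a large treated group among small controls under-rejects.
small_treated = placebo_rejection_rate(n_treated=50, n_control=5000)
large_treated = placebo_rejection_rate(n_treated=5000, n_control=50)
```

Under these assumed group sizes, the rejection rate with the small treated group far exceeds the 5% nominal level, while the large treated group yields a rate well below it, matching the (under-) over-rejection pattern the abstract describes.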


Contact us: mpra@ub.uni-muenchen.de


MPRA is a RePEc service hosted by the University Library LMU Munich.