I am not quite sure at this point how I found this post on an empirical law blog about matching versus IV, but it is worth noting.
I agree with the "correspondent" mentioned in the post "that both techniques had their strengths and their relative superiority depended upon the research question and the available data". I would go further and argue that the whole question of "matching versus IV" makes little sense and shows a lack of appreciation for the basic issues. It is a bit like arguing about whether you would rather have a screwdriver or a wrench without knowing what task you are trying to accomplish or what other materials you will have on hand to accomplish it.
Two other points are worth noting as well:
1. Matching and IV in general estimate different parameters in a heterogeneous treatment effects world. Except in the (I would argue) unusual case of instruments uncorrelated with impacts (cost-based instruments, for instance, do not fall in this category), IV estimates some sort of local average treatment effect. Some LATEs are of great interest, but a LATE is nonetheless a different bird than the average treatment effect on the treated, which is what matching typically estimates; the two estimands are written out just after this list. Sorting out these different estimands (or in some cases even noting their separate existence) continues to prove a challenge for some parts of the literature.
2. I am not sure that I agree with the disciplinary difference claim. Both matching and IV have their advocates in economics, and you can find people who think that the conditional independence assumption (aka "unconfoundedness") that underlies matching is always true and people who think that it is never true. I would say that it depends, and also that we can learn from both methods even when their assumptions are only approximately true, especially in cases where we are unlikely ever to have an experiment, a really good instrument, or a data set that contains every variable that theory or existing empirical knowledge suggests is necessary for the CIA.
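For readers who want point 1 spelled out, here is the standard potential-outcomes shorthand (my notation, not anything taken from the law blog post). Matching under the CIA targets the average treatment effect on the treated, while IV with a valid instrument and monotonicity targets the effect for compliers, the people whose treatment status the instrument actually moves:

ATT:  E[Y(1) - Y(0) | D = 1]
LATE: E[Y(1) - Y(0) | D(Z=1) > D(Z=0)]
CIA:  (Y(0), Y(1)) ⊥ D | X

When impacts are homogeneous the two estimands coincide; when they are not, there is no particular reason to expect the complier effect to equal the effect on the treated.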
My sense is that political science is going through the non- and semi-parametric upheaval that economics went through about a decade ago. It did a lot of good in economics in the long run to get researchers thinking about functional form (and to impose a small implicit coolness penalty on making strong assumptions), but the transition dynamics involve a lot of weak papers as researchers learn the new methodologies by doing. I refereed some truly horrific matching papers back in the late 90s, but my sense is that the average quality has been increasing over time with increased knowledge dissemination within the profession. At the same time, at PolMeth XXV this summer there were folks doing both IV and matching.
Whew.