Wednesday, December 9, 2009

Freakonomics and experiments

The Freakonomics blog, with a rhetorical blush, posts a positive review of Superfreakonomics from regular blog contributor Ian Ayres.

Ayres highlights the prevalence of experimental thinking, linked here more generally to the treatment effects literature, in Superfreakonomics relative to plain old Freakonomics. He also, correctly, highlights the general rise of what one might call "experimental thinking" among economists; you will often hear applied economists suggest that one way to start thinking about a problem is to imagine the experiment you would ideally run if you could put aside all constraints of political feasibility, ethics and cost.

There is much to like about Ayres' piece in my view, but I do have some comments:

First, I think Ayres overstates the prevalence of this sort of thinking. It is mainly limited to labor economists (and to empirical researchers who use labormetrics tools in other applied micro fields like public finance, development and health) and is largely concentrated in North America. Macroeconomists, trade economists, highbrow theorists and many others are not really thinking this way. In many places outside North America (and in a handful of departments within it), structural approaches remain more popular among labor economists than the reduced-form treatment effects view.

Second, experiments are neither a panacea nor, to quote Burt Barnow, a substitute for thinking. For all the reasons laid out in my 1995 paper with Heckman in the Journal of Economic Perspectives, experiments are not as simple to interpret, nor always as directly informative about the questions of interest, as their proponents sometimes make out.
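To give one concrete illustration of the sort of interpretation issue at stake (my own minimal sketch here, not a summary of the paper): random assignment pins down mean impacts, but not, without further assumptions, the distribution of impacts across participants. In potential outcomes notation, with $Y_1$ the outcome with treatment and $Y_0$ the outcome without it, an experiment identifies the average impact

$$\Delta = E[Y_1] - E[Y_0] = E[Y_1 - Y_0],$$

but not the distribution of $Y_1 - Y_0$, because we never observe both outcomes for the same person. Questions about who gains and who loses from a program therefore require more than the experimental design alone.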

Does this mean that I think we should do fewer experiments? No. Even in the US, which is still basically the only place doing social experiments despite some cautious nibbles elsewhere, I do not think we have yet reached the point where the marginal cost of an additional experiment equals its marginal benefit. Certainly in places like the UK and Canada, whose sum total of social experiments can be mapped onto the fingers of one hand, more experiments would be useful. What it does mean is that we cannot rely solely on experiments as a guide to policy or understanding. They are complements to, not substitutes for, other types of analysis.

Third, economists run two dangers with a mono-focus on experiments. First, we risk losing, or at least slowing the development of, our applied econometric skills with non-experimental data. Non-experimental data and methods will always be with us, if only because many treatments of interest - parental education, race, sex, local labor market conditions and so on - will never be randomly assigned.

Second, we run the danger of neglecting the value of economic theory in designing empirical analyses and in interpreting their results, experimental or not. A focus solely on (to use a metaphor so tired it could pass for dead) black box experimental analyses of treatment effects misses the broader picture and the broader questions, and means that we do not even get all that we could out of the available experimental data.

So, in my view, it should be two cheers, rather than three, for the recently arrived dominance of social experiments, and of treatment effects thinking more generally, in applied economics.
