Thursday, September 18, 2008

Public goods: What Works Clearinghouse

A couple of weeks ago I had the most enjoyable opportunity of serving on a panel whose task is to develop rating standards for the What Works Clearinghouse that is run by Mathematica Policy Research under contract to the Institute of Education Sciences of the US Department of Education. In particular, our group is tasked with assisting in the process of developing standards for studies that use regression discontinuity designs. Standards for randomized trials and for studies based on "selection on observed variables" (what statisticians awkwardly refer to as "unconfoundedness") have already been developed, and I believe there are plans to develop standards for studies using longitudinal methods and for studies using instruments other than random assignment or a discontinuity.

I am pretty impressed with the group of which I am a member, which includes a goodly fraction of the people, both inside and outside of economics, whom you would think of if you wanted to form a panel on RD designs. So in addition to being useful, the 1.5 days spent in Mathematica's DC offices were great intellectual fun. I got a chance to sneak in some questions related to a paper I am working on, so there was a private intellectual payoff as well. I am also impressed by the quality of the staff that Mathematica has assigned to this effort. It is a real showpiece for them.

It seems to me that knowledge creation and diffusion are two of the few real public goods that a federal Department of Education can create. In my ideal world, IES and the federal Department of Education would be close to co-extensive. The evaluations that IES funds contribute to knowledge creation; the WWC addresses the knowledge diffusion aspect.

The WWC performs three functions. First, it collects the literature on particular topics in one place. Second, it grades studies based on objective methodological criteria (with, of course, appropriate opportunities for appeal and further discussion). Third, it provides articles that summarize the graded evidence on particular topics. The inspirations for the WWC are the Cochrane Collaboration in medicine and the Campbell Collaboration in social science. The Cochrane Collaboration in particular is widely viewed as a great success, both at making it easier to sort among papers by quality and at raising quality levels in general.

Perfect? No. There is always a danger in having a single set of standard setters. If the process were somehow to get hijacked by advocates of a particular methodology, then it could end up over-rating studies with that methodology. In this context, while there are some (as always) disagreements around the margin, there is broad consensus both inside and outside of economics on the general principles of what you want to do. So I am not as worried about having a single set of standards here as I might be in other contexts.