Sunday, January 11, 2009

Mom, I want to be an evaluator

I like this post from Chris Blattman a lot!

Here are some thoughts:

First, it provides good advice on what human capital you should accumulate if you want to be an evaluator. I would just add a bit more refinement to Chris' recommendations. If you want to do evaluation in development, find a place that is strong in micro-development and in labor. Princeton and Michigan are examples here. If you want to do domestic educational evaluation, find a place that is good in labor and/or education and, ideally, where the economists talk to the ed school people and the ed school people are worth talking to. Harvard and Michigan come to mind here. I would second his recommendation regarding obtaining some work experience in the real world of evaluation but would broaden it to include working at an evaluation consulting firm like Mathematica, Abt or RTI.

Second, it really is true that getting a reliable impact analysis is just the first step. There is always interest in improving even the most effective programs (where effective programs are a rather modest subset of all existing programs) by tweaking the design. There are also important issues of portability. For example, just because Progresa "works" in Mexico does not mean that a Nicaraguan or Brazilian version will obtain the same results. Knowing how and why a program produces the impacts it produces makes addressing these questions of program modification and program replication in other contexts a much surer business. Of course, as a byproduct, learning about how and why programs work adds to our general stock of social science knowledge in a way that isolated black-box impact estimates do not. This point is emphasized in, e.g., the Heckman and Smith (1995) piece in the Journal of Economic Perspectives.

Third, I would note that economists sometimes forget that there is more to evaluation itself than obtaining credible impact estimates, difficult though that often is even within the context of an RCT. If you pick up one of the standard non-economics texts on evaluation, such as the nice book by Shadish, Cook and Campbell, you will find that surprisingly little space (probably too little, but that is a different post) is spent on impact evaluation. What you find instead are sections on topics such as implementation fidelity (does the program as implemented match the program as designed?) and process evaluation (how does the program operate in practice?) that economists tend not to talk much about.

Finally, cost-benefit analysis, which in economics traditionally falls under the heading of public finance, is also an important part of a serious evaluation. Very few evaluations do a very good job with this; a useful exception in the labor economics world is Mathematica's experimental evaluation of the Job Corps program. Even if your career focuses on impact evaluation, having a solid foundation in the other facets of evaluation will allow you to do your job better. You will be able to interact more effectively with people from other backgrounds, and you will find that, for example, the output produced under the heading of a process analysis is very valuable in the course of an impact analysis. An excellent example of a (relatively quantitative) process analysis in the labor economics context is the Kemple, Doolittle and Wallace (1993) report produced as part of the National Job Training Partnership Act (JTPA) study.

Hat tip: Marginal Revolution