Monday, February 2, 2009

IES TWGs in DC

I was in DC last week for two days to attend meetings of two Technical Working Groups (TWGs) for evaluations being funded by the Department of Education's Institute for Education Sciences (IES). These technical working groups include staff from the evaluation contractor and IES as well as outside experts on methods (that is my usual role) and on the subject area. The outside experts are mostly academics, though they sometimes include program operators or, in the case of IES evaluations, school district or teacher union officials.

On Wednesday, I went to the TWG for the evaluation of mandatory random drug testing being done by RMC Research in cooperation with Mathematica. This evaluation is at the stage of having preliminary results (which I am sworn to secrecy about), so we discussed various statistical issues related to the analysis, issues of presentation and focus, and some secondary analyses that it would be worthwhile to undertake. The IES page on this evaluation is here.

A very interesting issue here, which we discussed at some length, is whether the key dependent variable is any drug use or frequency of drug use. This issue might seem minor, but the answer hinges on what one sees as the point of the drug testing program. If the point is to get students down to zero use, then a dummy variable for zero use is the appropriate dependent variable to highlight. In contrast, if the point is to discourage frequent use, say by moving daily or almost daily users down to occasional weekend users, then a categorical variable measuring frequency of use becomes the primary object of interest. A dummy variable for zero use versus non-zero use completely misses changes in intensity that do not lead to abstinence.
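To make the distinction concrete, here is a minimal sketch of the two codings using made-up numbers (not study data); the variable names and frequency cutoffs are my own illustrative choices, not the evaluation's actual coding scheme:

```python
import pandas as pd

# Hypothetical self-reported past-30-day use counts for two survey waves
# (illustrative numbers only, not study data).
df = pd.DataFrame({
    "student": ["A", "B", "C"],
    "days_used_pre":  [20, 4, 0],   # roughly daily, occasional, abstinent
    "days_used_post": [4, 4, 0],    # student A cuts back but does not quit
})

for wave in ["pre", "post"]:
    days = df[f"days_used_{wave}"]
    # Dummy for any use: 1 if the student used at all in the past 30 days
    df[f"any_use_{wave}"] = (days > 0).astype(int)
    # Ordered frequency categories: none / occasional / frequent
    df[f"freq_{wave}"] = pd.cut(days, bins=[-1, 0, 8, 30],
                                labels=["none", "occasional", "frequent"])

print(df)
# Student A's drop from roughly daily to occasional use shows up in the
# frequency categories but leaves the any-use dummy stuck at 1, so the
# binary outcome misses the change in intensity entirely.
```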

This panel was great fun, in part because I was the only economist among the experts in attendance (some of the IES folks and the consultants are also economists). We established at the last meeting that I was the only person in the room who knew what "420" meant, which I thought was kind of amusing. In any case, the other experts on the TWG are a particularly bright and outspoken lot of psychologists and such, so the discussion was lively and very stimulating.

On Thursday I attended the TWG for the evaluation of a treatment designed to move high-performing (as measured by value added in test scores) teachers to low-performing (as measured by test score levels) schools. This evaluation is at an earlier stage, as the research team is just completing a pilot study of the program in a single district and just beginning the broader evaluation in multiple districts. Though the basic design is largely set, there was lots to talk about here as well, including how likely spillovers from the high-performing teachers are and how long any such spillovers might take to show up in test scores in other classrooms. This has implications for what you measure and how you measure it, and also for broader issues of statistical power.
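For readers who have not done this sort of back-of-the-envelope exercise, here is a minimal sketch of why slow-arriving spillovers matter for power. It uses the standard minimum detectable effect formula for a cluster-randomized design; the school counts, intraclass correlation, and effect sizes are hypothetical numbers of my own, not figures from this evaluation:

```python
import math

def mde_clustered(n_schools, students_per_school, icc,
                  p_treated=0.5, multiplier=2.8):
    """Approximate minimum detectable effect (in standard deviation units)
    for school-level random assignment without covariate adjustment, using
    the usual multiplier of about 2.8 for 80% power at a 5% two-sided test.
    """
    var_cluster = icc + (1 - icc) / students_per_school
    var_diff = var_cluster / (p_treated * (1 - p_treated) * n_schools)
    return multiplier * math.sqrt(var_diff)

# If spillovers to other classrooms take a while to materialize, the impact
# visible in an early follow-up is attenuated; here a hypothetical 0.10 sd
# effect is only 0.05 sd in the first year of data.
mde = mde_clustered(n_schools=200, students_per_school=80, icc=0.05)
for effect in (0.10, 0.05):
    status = "detectable" if effect >= mde else "underpowered"
    print(f"MDE with 200 schools: {mde:.3f} sd "
          f"({status} for a {effect:.2f} sd effect)")
```

The point of the sketch is simply that halving the effect you expect to see in the follow-up window roughly quadruples the sample needed to detect it, which is why the timing of spillovers feeds directly into both the measurement plan and the power calculations.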