Saturday, August 18, 2012

Subsidizing economics

Gary Becker and Jim Heckman write in the Wall Street Journal in support of subsidies to economic research, with reactions from Tyler Cowen and from John Cochrane.

My reaction is pretty close to John Cochrane's, though I would add a few things:

First, we should do some research on the effects of subsidies to economic research. What matters for the policy question at hand is not whether NSF or NIH have funded useful research, but how responsive the quantity of such useful research is to the volume of subsidies. If the hiring, promotion, and tenure practices in top universities are working as they should, my prior is that the relevant elasticity is pretty low. Robert Moffitt is not going to run off and do corporate consulting because he does not get an NSF grant or an NIH grant.

Second, I concur with the point about data. Data are a public good. Existing data sets could be better documented and easier to use. The systems that structure researcher access to restricted data at, e.g., the Bureau of Labor Statistics are oddly designed and overly bureaucratic. There is no good reason, for example, for either the BLS or the Census Bureau to even consider the content of the research being done. Instead, they should simply certify that the researcher is a serious scholar and that adequate security is in place, and they should do so not on a project-by-project basis but once for each researcher and institution. There is much to improve here, some of which, such as better turn-around time on clearing results obtained using restricted data, would merit additional funding. Other improvements, such as removing BLS and Census review of the substance of the research, require legal changes. There are also more substantive improvements to existing data sets, as well as new types of data, that would merit government funding.

Third, the government can and should fund research evaluating government policies, which, as it happens, is a big chunk of applied economics. Data collected for such research, as with the data from the National Supported Work Demonstration and the National JTPA Study that I have used in my own work, often have huge research spillovers, both substantive and methodological. And, of course, the government has a fiduciary duty to the long-suffering and much-abused taxpayer to ensure that tax money is spent only on programs with a solid evidentiary foundation. Much of the research that led to welfare reform in 1996 was not funded by NSF or NIH but rather grew out of a requirement that states do experimental evaluations of their welfare programs in exchange for the freedom to depart from federal program guidelines. That strategy should be repeated in other contexts and, more broadly, the government should spend relatively more money evaluating policies in serious ways and relatively less on the policies themselves, except for the (very, very small) subset of policies that already stand on serious evidence.

Fourth, the government can create useful variation. Policies can be designed in ways that make them relatively easy to evaluate (staged rollouts with rollout timing chosen at random, enforced discontinuous eligibility cutoffs, high-quality administrative data) or in ways that make them hard to evaluate (nationwide roll-out at the same time, low-quality and/or inaccessible administrative data). A clever policy design can generate a lot more useful knowledge at the margin than your average research grant. Moreover, clever policy design is pretty much free, and quality administrative data help program operations as well as evaluation.
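To see why random rollout timing buys so much, here is a minimal simulation sketch of my own (it is only an illustration, not anything from the Becker-Heckman piece or Cochrane's reply; the sites, periods, and effect size are all made up). Because sites start the program in a randomly ordered sequence, the not-yet-treated sites form a valid comparison group in every period, and a simple within-period comparison recovers the program's effect from the rollout itself.

```python
# Hypothetical illustration: a staged rollout with randomly assigned start
# periods, evaluated by comparing treated vs. not-yet-treated sites.
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_periods, true_effect = 200, 8, 2.0

# Each site starts the program in a random period 1..7 (period 0 is pre-program).
start = rng.integers(1, n_periods, size=n_sites)

site_effect = rng.normal(0, 3, size=n_sites)   # persistent differences across sites
period_effect = np.linspace(0, 1, n_periods)   # common time trend

estimates = []
for t in range(n_periods):
    treated = (t >= start)
    y = site_effect + period_effect[t] + true_effect * treated + rng.normal(0, 1, n_sites)
    # Random timing makes treated and not-yet-treated sites comparable within
    # a period, so the difference in means estimates the program effect.
    if 0 < treated.mean() < 1:
        estimates.append(y[treated].mean() - y[~treated].mean())

print(f"true effect: {true_effect}, rollout-based estimate: {np.mean(estimates):.2f}")
```

With a simultaneous nationwide rollout, by contrast, there is no untreated comparison group in any period, and the program effect cannot be separated from the common time trend without much stronger assumptions.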

Fifth, the federal government can support institutions that provide incentives to improve research quality. I have in mind here the What Works Clearinghouse that is funded by the Department of Education and operated by Mathematica Policy Research. The WWC has had a tremendous positive effect on the methodological quality of research in education, at a pretty low cost.

In short, while there are many margins on which the government can and, I would argue, should spend money and policy effort on economic research, simply increasing the economics budgets at NSF and NIH is not the optimal strategy.

1 comment:

David Barker said...

I'm not sure that data are a public good. They are not exactly public, since they are excludable - for example, I can publish a mean and keep the data secret. Consumption of data is often rivalrous, because of first mover advantages - if I publish a paper using a data set, others cannot publish the same paper.

But the real question is whether government-collected data are a good. I think a case can be made that, overall, government data have caused more harm than good. For example, I agree that data helped make the case for welfare reform, but before that, data were used to push for overly generous welfare programs in the first place.

I wrote a short post about the possible downside of macro data here:

http://barkerecon.blogspot.com/2011/10/tmi.html