The creators of the World of Labor love their standard article format very (very) much, and admit of no exceptions to it, so writing for this outlet has a bit of the flavor of composing haiku. It also requires more effort than one might expect given the length.
I would be curious to know how much traffic they get overall and from their stated audience of policymakers and (even more so) the policy wonks who attend to them.
Other World of Labor articles I have read and liked:
Last week brought the very sad news that Bob LaLonde passed away after a long illness. The Harris School provides a basic biography here; the University of Chicago statement is here. I add some reflections about Bob's personal and intellectual influence on me and, more broadly, about his influence on some of the literatures into which I have followed him.
Bob was on my dissertation committee at Chicago, along with Jim Heckman (the chair) and Joe Hotz. At the time, Bob was in the Graduate School of Business (not yet the Booth School). He proved a most valuable committee member both intellectually and personally. In particular, Bob put a lot of emphasis on understanding the details of program rules and program operation. He also led by example in terms of care with data. Personally, Bob provided moral support and inspiration when the going got tough, as it sometimes does in graduate school.
Bob's job market paper, the 1986 "Evaluating the Evaluations" paper in the American Economic Review, was one of the first two "within-study designs" in the labor economics literature (the other being the 1987 Fraker and Maynard paper in the Journal of Human Resources). Bob had the excellent idea of comparing the experimental impacts from the National Supported Work (NSW) Demonstration, an evaluation of a costly and intensive intervention for four groups of disadvantaged workers, to non-experimental estimates obtained by applying the econometric evaluation technology of the time to the NSW treatment group data combined with comparison groups drawn from the Panel Study of Income Dynamics (PSID) and the Current Population Survey. That paper had a major effect on the way that labor economists thought about evaluating labor market programs (though, oddly, its quite negative conclusions regarding the performance of non-experimental methods did not, at least in the short run, carry over into other substantive, and perhaps equally methodologically dubious, literatures). One could argue, though it would take much more space than I have here to do so in a compelling way, that Bob's job market paper set the stage for the "credibility revolution" that would follow 10-15 years later.
More practically, Bob's 1986 paper resulted in the Department of Labor's (DOL) decision to do its first experimental evaluation of an ongoing program, namely the National Job Training Partnership Act (JTPA) Study (NJS). At the same time, the follow-on Heckman and Hotz (1989) Journal of the American Statistical Association paper, which provided a critique of the LaLonde (1986) paper and which attempted a partial rehabilitation of (certain) non-experimental methods via a stricter regime of specification testing, led DOL to spend several million dollars collecting "ideal" comparison group data in four of the sites in the NJS. It was the data from the NJS that I ended up using for my dissertation (and for quite a few other papers as well).
I have two published papers that directly follow up on Bob's 1986 paper. The first of these, Smith and Todd (2005), written with my friend and graduate school (and Heckmanland, as they call it these days) colleague Petra Todd, arose as a response to the Dehejia and Wahba (1999, 2002) papers, the first published in the Journal of the American Statistical Association and the second in the Review of Economics and Statistics. These papers applied matching methods (and, in draft, inverse propensity weighting methods) to (a subset of) the data from the LaLonde (1986) paper. Though not encouraged by the authors, their work led some less temperate researchers to conclude that matching could magically solve all problems of non-random selection into programs, regardless of the plausibility of the underlying identification via conditional independence. My paper with Petra aimed to restore a more temperate view of matching, one that recognizes the value of focusing attention on the support condition and relaxing the linearity assumption, as matching does, while noting that matching does not solve the problem of not having the conditioning variables required for conditional independence. As such, our paper is more in keeping with Bob's original critique of non-experimental methods.
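For readers less familiar with the method at issue, the following is a minimal, purely illustrative sketch of one-to-one nearest-neighbor matching on an estimated propensity score with a common support restriction. The simulated data and variable names are hypothetical (they are not the NSW or PSID files), and the sketch shows only the mechanics: when the relevant confounder is observed and conditioned on, matching removes the selection bias that a naive treated-control comparison suffers; when the required conditioning variables are missing, no amount of matching machinery helps, which is the point of Bob's critique.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                       # observed confounder (e.g., pre-program earnings)
p_true = 1 / (1 + np.exp(-0.8 * x))          # selection into treatment depends on x
d = rng.binomial(1, p_true)                  # treatment indicator
y = 2.0 * d + 1.5 * x + rng.normal(size=n)   # outcome; true treatment effect = 2.0

# Naive comparison of means is biased because x drives both d and y.
naive = y[d == 1].mean() - y[d == 0].mean()

# Estimate propensity scores by logistic regression (simple gradient ascent).
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 1.0 * X.T @ (d - p) / n          # gradient of the log-likelihood
pscore = 1 / (1 + np.exp(-X @ beta))

# Common support: keep treated units whose estimated score lies
# within the range of the control group's scores.
lo, hi = pscore[d == 0].min(), pscore[d == 0].max()
treated = np.where((d == 1) & (pscore >= lo) & (pscore <= hi))[0]
controls = np.where(d == 0)[0]

# One-to-one nearest-neighbor matching (with replacement) on the score.
gaps = []
for i in treated:
    j = controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
    gaps.append(y[i] - y[j])
att = float(np.mean(gaps))                   # estimated effect on the treated

print(f"naive difference in means: {naive:.2f}")
print(f"matching estimate of ATT:  {att:.2f}")
```

Here the matching estimate lands near the true effect of 2.0 while the naive contrast does not, but only because the single confounder `x` is observed. Re-running the sketch with `x` omitted from the propensity model would reproduce the naive bias, which is the conditional independence failure that matching cannot repair.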
I wrote the second of the papers that follow directly from LaLonde (1986) with my student Sebastian Calonico. It appears in the recent special issue of the Journal of Labor Economics in Bob's honor. In it, we recreate, as best we can, Bob's data on women by returning to the raw data from the NSW and the PSID. Bob's analysis file for women was lost in the 1990s, with the result that the Dehejia and Wahba papers and the Smith and Todd (2005) paper rely solely on his analysis file for the men, as do dozens of other papers that use it to test various new treatment effects estimators. My paper with Sebastian finds that Bob's work holds up pretty well, while shedding new light on some of the conclusions drawn from the men in Smith and Todd (2005). Bob's consistent support of such research, even when it called into question some of his choices in writing the paper, testifies to his seriousness as a scholar. More broadly, the fact that people (and I am not the only one) are still responding to a paper published more than three decades ago provides impressive evidence of its importance.
Finally, Bob had an important intellectual impact on me as a co-author. Our first co-authorial adventure was the 1999 Handbook of Labor Economics chapter on the evaluation of active labor market programs that we co-authored with Jim Heckman. Almost all of the work on this chapter took place in the course of a four-month-long forced march in the late spring of 1999. Throughout that intense and stressful and crazy but very, very productive and intellectually exciting time, Bob kept his spirits up even as the pace began to wear on all of us. He took the lead on the very fine literature survey (a chapter within our very long chapter) and also played an active role on the remaining components, particularly the discussion of social experiments, wherein the three of us aimed to lay out a middle path between naive cheerleading for random assignment and dour rejection of the often immensely useful variation in treatment status it can provide. Our second co-authorial adventure, on a paper about testing for selection in the context of instrumental variables estimators, has been interrupted by his passing. All of the co-authors on the paper, a set that also includes Dan Black, Bob's student Joonhwi Joo, and my student Evan Taylor, already feel Bob's absence.
So rest in peace, Bob, and thanks for all that you have taught me about how to be an economist and about how to be a good person while being an economist. You'll be much missed.
[I am indebted to two regular readers of my irregular blog posts for helpful comments on a draft.]
It makes a case that this is less of an issue than one might think. It is surely the case that influence is only imperfectly measured by citation counts, and that there may be types of systematic (as opposed to classical) measurement error in those counts.
On the other hand, the author does not emphasize what strikes me as a more important question: the fraction of research activity that would not pass an ex ante social cost-benefit test if the researcher's private incentives (e.g., getting tenure and/or raises) were omitted from the calculation. I suspect that there is quite a lot of such research.