Thursday, September 10, 2009

College completion rates

The NYT wades into the discussion about completion rates with this article, inspired by the new book by Bill Bowen, Mike McPherson, and doctoral student Matthew Chingos. Full disclosure: I know McPherson a bit (and think very well of him), and I've heard Bowen give an after-dinner talk at the NBER Education meetings (and read his earlier book, The Shape of the River, which addresses affirmative action in higher education).

The article again illustrates the hard life of the newspaper reporter, who has to write about new subjects on deadline, with a skill set that may not include much that is helpful in particular contexts.

The first key problem with the discussion is that the socially optimal graduation rate from four-year schools does not equal one, though the article implicitly suggests that it does. College is an experience good: some students who optimally give it a try will learn that it is not a good match for them and, again quite optimally, drop out.

Moreover, having that learning occur at second-tier schools may be preferred to having it occur at top-tier schools because second-tier schools cost society less to operate. Thus, it may be optimal to have higher dropout rates at what researchers in this area call "directional schools" like Western Michigan and Eastern Michigan rather than at places like Michigan. A student who does well at a directional school can always transfer up.

The second issue is that graduation rates vary over time because of changes in both the numerator and the denominator. For example, the graduation rate might fall at a particular state university because the state introduces a new scholarship or loan program and thereby induces some additional students at the margin to sample the experience good of university education. If students at the margin of giving college a try are less likely to finish, this will lower the graduation rate even if the university's behavior does not change at all. This, of course, suggests that unadjusted graduation rates do not make a good performance measure, even ignoring the fact that the optimal rate is not one and probably not even close to one.
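The composition effect can be seen in a toy calculation (all numbers hypothetical): if a new aid program draws in marginal students who finish at lower rates, the overall graduation rate falls even though no inframarginal student changes behavior.

```python
# Hypothetical illustration of a composition effect on graduation rates.
# A new aid program adds marginal students with lower completion rates,
# so the overall rate falls with no change in anyone's behavior.

inframarginal = 1000    # students who would have enrolled anyway
p_inframarginal = 0.60  # their completion probability (unchanged throughout)

marginal = 200          # students induced to enroll by the new program
p_marginal = 0.30       # completion probability among marginal students

rate_before = p_inframarginal
rate_after = (inframarginal * p_inframarginal + marginal * p_marginal) \
             / (inframarginal + marginal)

print(f"graduation rate before: {rate_before:.3f}")  # 0.600
print(f"graduation rate after:  {rate_after:.3f}")   # 0.550
```

The university's "performance" looks worse after the program even though every pre-existing student finishes at exactly the same rate as before.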

Third, the article leaves out any mention of post-secondary alternatives to college, such as vocational training. High dropout rates from directional schools may imply not that they are failing but rather that students who should be getting vocational training are instead being pushed into college programs for which they are poorly suited via policies based on the (quite deeply, I think) mistaken idea that everyone should get a university degree.

The article also notes that:
Congress and the Obama administration are now putting together an education bill that tries to deal with the problem. It would cancel about $9 billion in annual government subsidies for banks that lend to college students and use much of the money to increase financial aid. A small portion of the money would be set aside for promising pilot programs aimed at lifting the number of college graduates. All in all, the bill would help.
Now, how exactly do we know that this will help? Changing who issues student loans does not change their terms and so should have no effect on graduation rates. We are not told the nature of the promising pilot programs nor, more importantly, whether they will be evaluated in a serious way (e.g., with a random-assignment experiment) or in a non-serious way (e.g., with a before-after comparison and/or "stakeholder" interviews). I suspect the latter, in which case this is just money wasted. A bit of skepticism here would help the NYT reporter.

And then there is this bit:

About half of low-income students with a high school grade-point average of at least 3.5 and an SAT score of at least 1,200 do not attend the best college they could have. Many don’t even apply. Some apply but don’t enroll. “I was really astonished by the degree to which presumptively well-qualified students from poor families under-matched,” Mr. Bowen told me.

They could have been admitted to Michigan’s Ann Arbor campus (graduation rate: 88 percent, according to College Results Online) or Michigan State (74 percent), but they went, say, to Eastern Michigan (39 percent) or Western Michigan (54 percent). If they graduate, it would be hard to get upset about their choice. But large numbers do not. You can see that in the chart with this column.
This bit treats graduation rates as structural parameters that do not vary across persons. Does anyone really think that the graduation rate of someone who chooses to attend WMU instead of Michigan would be the same, at Michigan or at WMU, as that of students who make the opposite choice?

Finally, the jewel in the crown:
Last year, even in the grip of a recession that has spared no group of workers, the gap between what a college graduate earned and what everyone else earned reached a record. Workers with bachelor’s degrees made 54 percent more on average than those who attended college but didn’t finish, according to the Labor Department. Fifty-four percent — just think about how that adds up over a lifetime. And then think about how many students never cross the college finish line.
Let's think about what this paragraph does. It treats a simple mean difference as a causal effect. There is a giant (really, giant) literature in labor economics devoted to establishing, beyond any reasonable doubt, the untruth of exactly this proposition. How can the author not have learned that in the course of researching the article? Moreover, even if the mean difference were an average treatment effect, it is almost surely not the treatment effect that is relevant to students at the margin of choosing whether to go to college. There is an equally large, but less definitive, literature here, as it appears that the effects of college at the margin vary by margin and by quality of college attended. But I don't know of any papers that suggest the effect is anywhere near the difference implied by the simple comparison of means.
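The selection problem behind this point can be shown with a minimal simulation (all parameters made up for illustration): when unobserved ability raises both the probability of finishing college and earnings directly, the raw graduate/non-graduate wage gap greatly overstates the assumed causal effect.

```python
# Hypothetical simulation: ability drives both college completion and wages,
# so the raw mean wage gap between finishers and non-finishers exceeds the
# true causal effect built into the data-generating process.
import random

random.seed(0)
N = 100_000
TRUE_EFFECT = 0.20  # assumed causal log-wage effect of finishing college

ability = [random.gauss(0, 1) for _ in range(N)]
# Higher-ability people are more likely to finish (selection on ability).
finished = [a + random.gauss(0, 1) > 0 for a in ability]
# Log wages depend on ability directly AND on finishing college.
log_wage = [0.5 * a + TRUE_EFFECT * f + random.gauss(0, 0.5)
            for a, f in zip(ability, finished)]

grads = [w for w, f in zip(log_wage, finished) if f]
nongrads = [w for w, f in zip(log_wage, finished) if not f]
gap = sum(grads) / len(grads) - sum(nongrads) / len(nongrads)

print(f"true causal effect: {TRUE_EFFECT:.2f}")
print(f"raw mean wage gap:  {gap:.2f}")  # far larger than the true effect
```

The naive comparison of means attributes the ability difference between the two groups to the degree itself; that is exactly the inference the quoted paragraph invites the reader to make.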

To recap: the article suffers from major conceptual errors, leaves out key policy dimensions, praises pending legislation for no good reason, confuses simple mean differences with treatment effects, and confuses average treatment effects with treatment effects on individuals at the margin.

Sigh.