There are two perspectives on this topic. One suggests that the answer lies within Johnny—he doesn’t study hard enough, he was ill-prepared for college by his earlier education, or he even lacks the aptitude for college-level work. This view is favored by the growing number of reform-minded critics who suggest that college is being oversold, that both the country and Johnny would be better served if he chose some other path to start adulthood than a traditional four-year college education.
The other suggests that Johnny is failing because of circumstances beyond his control—he lacks the financial ability to study full-time, or the government is not providing enough resources to educate him properly. This is the viewpoint usually held by members of the political establishment who claim we need to produce ever-increasing numbers of college graduates to remain competitive in the global economy.
A new study published by the National Bureau of Economic Research, “Why Have College Completion Rates Declined? An Analysis of Changing Student Preparation and Collegiate Resources,” attempts to provide an empirical argument for this latter perspective. It suggests that the role of declining financial resources devoted to some sectors of higher education has been overlooked when it comes to low graduation rates. Conversely, it suggests that the effect of an assumed increase in the number of college students with less-than-stellar academic skills has been exaggerated.
The authors are three labor economists: John Bound of the University of Michigan, Michael Lovenheim of Cornell University, and Sarah Turner of the University of Virginia. They used data from the high school graduating classes of 1972 and 1988, following each cohort for eight years. They then divided higher education into five categories: prestigious private colleges, top-50 public universities, less prestigious private schools, lower-tier public institutions, and two-year community colleges, and compared the completion rates for each cohort at each type of institution using a variety of statistical techniques.
While graduation rates rose at the private schools and the top-50 public universities, most students start their college careers at either lower-tier public universities or community colleges, and at these less selective schools graduation rates plummeted over the period of the study.
The authors concede that students’ lack of preparation (measured by standardized math test scores) is the key factor behind low completion rates at community colleges. But the trio contends that for the less selective public four-year schools, the decline in graduation rates is due instead to a decrease in the educational resources available. (The primary measures they use for resources are the student-faculty ratio and per-student expenditures.) Their argument is not as clear-cut as they presume—certainly not enough to use as a basis for policy decisions.
The reason for both trends—more unprepared students and lower resources—is, according to the authors, the increase in the percentage of high school graduates going to college. This increase was enormous: 48.4 percent of students graduating from high school in 1972 attended college, at least briefly, within two years, while 70.7 percent of those graduating in 1988 did so. It is not unreasonable to expect that many of these additional students in the later cohort were less prepared than their counterparts in the earlier cohort. The authors agree that this is the cause for the drop in graduation rates at the community college level.
But the authors also argue that the high volume of additional students swamped the lower-tier public institutions, causing states to fall behind in funding. As a result, average student-faculty ratios rose from 25.5 to 29.1 and average per-student instructional expenditures fell from $5,331 to $5,102 at the less selective public universities. According to the authors, this drop in resources, rather than a lack of preparation by students, is why the graduation rate at these schools fell from 61.8 percent to 56.9 percent.
And their solution? There is “a need for more attention to the budgets of these institutions from state appropriations and tuition revenues.” In other words, more money in order to produce more college graduates.
One major problem with drawing such a strong policy implication from this study is that the trends that existed during the period of the study no longer hold. For instance, if students who began at community colleges are excluded, the six-year overall graduation rate actually increased over that period, from 64.2 percent to 67.9 percent. Rates at top-50 public universities and at all private schools improved considerably, enough to offset the drop at the non-selective four-year public schools. Since then, however, the graduation rate for students starting at four-year schools has declined dramatically, to approximately 58 percent in 2009.
And when only non-selective four-year schools—both private and public—are examined together, the pattern of resources as the dominant factor in graduation rates becomes less stark. Both types of schools experienced a similarly small increase in student-faculty ratios, the authors’ most important measure of a decline in resources, but they differed dramatically in graduation rates. At the lower-tier public universities, graduation rates fell by 4.9 percentage points, but at the less selective private schools they rose remarkably, from 58.2 percent to 70.3 percent.
The discrepancies in resource measures between the two types of schools for the later cohort are small: a student-faculty ratio of 29.1 and a median per-student expenditure of $5,102 (in 2007 dollars) for the less selective public schools, and 25.7 and $5,269 for the less selective private schools. It is hard to believe that these differences would account for a gap in graduation rates between the two groups of more than 13 percentage points, as the authors suggest.
Their model does not include anything that signifies the considerable cultural changes that occurred in the sixteen years between the study’s two cohorts of high school graduates. Given such elements as the steady grade inflation occurring in American universities since World War II, or the creation of many new undemanding majors in the social sciences, it is possible that it is becoming easier to graduate from non-selective schools in general. And if that were the case, the results of the study fail to reflect this possibility.
The main thrust of the authors’ argument depends on the creation of a “counterfactual”—an estimate of how students from the 1970s cohort would have performed had they attended school under the same conditions as the group from the 1990s. To construct it, the authors ran a logit regression for the 1990s cohort. (A logit regression can be used to estimate the probability of an event—in this case, graduation.) They then applied the coefficients produced by that regression to data from the 1970s cohort. According to the authors, students at less selective public schools who had the same preparation characteristics (math test scores) as the 1970s students would have graduated at the same rate that the actual 1990s students did.
But when they employed the same technique to test for resources (student-faculty ratio), they found that their model predicted very different graduation rates for the two cohorts. To the authors, this was evidence that resources have a greater influence on graduation rates at non-selective public schools than does student preparation. The situation was reversed for the community colleges—for them it was the students’ preparation that mattered, not resources.
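The counterfactual technique described above can be sketched in a few lines of code. This is a minimal illustration with simulated data—the variable names, coefficients, and sample are invented for the example, not taken from the authors’ dataset or their actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "1990s cohort": a standardized math score and a resource
# measure (student-faculty ratio), with graduation generated from a known
# logistic model. All numbers here are illustrative, not the study's.
n = 5000
score = rng.normal(0.0, 1.0, n)
sf_ratio = rng.normal(29.0, 3.0, n)
true_logit = -0.5 + 0.8 * score - 0.05 * (sf_ratio - 29.0)
grad = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit a logit model on the 1990s cohort by Newton-Raphson.
X = np.column_stack([np.ones(n), score, sf_ratio])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    gradient = X.T @ (grad - p)                 # score vector
    hessian = (X * (p * (1 - p))[:, None]).T @ X
    beta += np.linalg.solve(hessian, gradient)

# Counterfactual: feed the earlier cohort's characteristics through the
# 1990s coefficients. Here we mimic a better-prepared cohort facing a
# lower student-faculty ratio.
score_70s = rng.normal(0.3, 1.0, n)
sf_ratio_70s = rng.normal(25.5, 3.0, n)
X70 = np.column_stack([np.ones(n), score_70s, sf_ratio_70s])
pred_70s = 1 / (1 + np.exp(-X70 @ beta))

print("actual 1990s graduation rate:", round(float(grad.mean()), 3))
print("predicted rate for 1970s mix:", round(float(pred_70s.mean()), 3))
```

Comparing the predicted rate for the earlier cohort with the actual rate for the later one is what lets the authors attribute the gap to preparation or to resources, depending on which variable is swapped between cohorts.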
While this is an interesting exercise in using econometrics to gain a better understanding of how things work, it hardly qualifies as undeniable proof of the authors’ conclusions. They even admit that there is a mathematical problem with their use of the logit model on this particular set of data. To describe it simply, when a student has very high or very low math test scores, other factors don’t matter much when it comes to predicting graduation. With SAT scores in the middle, perhaps between 900 and 1150, other factors such as resources become important. And that is exactly where most of the non-top-50 public schools are. Because of this, the Bound, Lovenheim, and Turner regression very likely overemphasizes resources at these schools as a factor.
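The mathematical point here is a general property of the logit model: the marginal effect of any variable on the predicted probability is the coefficient times p(1 − p), which peaks at p = 0.5 and shrinks toward zero at the extremes. A toy calculation makes this concrete (the coefficient is made up for the example):

```python
# Marginal effect of a variable in a logit model is beta * p * (1 - p),
# where p is the baseline probability of the event (here, graduating).
beta_resource = 0.4  # hypothetical coefficient on a resource measure

effects = {}
for p in (0.05, 0.50, 0.95):
    effects[p] = beta_resource * p * (1 - p)
    print(f"p={p:.2f}: marginal effect {effects[p]:.3f}")
# p=0.05: marginal effect 0.019
# p=0.50: marginal effect 0.100
# p=0.95: marginal effect 0.019
```

A student near the margin (p ≈ 0.5) is about five times as responsive to a change in resources as one nearly certain to graduate or to drop out—which is why a regression dominated by mid-range students can inflate the apparent importance of resources.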
Furthermore, the authors acknowledge another fundamental problem with their approach. They state that “a particular concern is that both test scores and student faculty measures are imperfect proxies for the constructs [preparedness and resources] in which we are interested,” and “the general effect of such errors in measurement is to bias downward the effects estimated in the simulations.” This is particularly important, since it suggests that the model downplays the effects of preparation. If this bias means preparation may be statistically more significant than the authors’ estimates show, how can they conclude with confidence that preparation doesn’t matter?
Even if the authors’ main point were correct—and that is by no means certain—their policy suggestion about the need to increase resources shows that they perceive the trade-off between resources and graduation rates as one-sided. They seem to ignore one of the most fundamental laws of economics: resources are scarce. If lower-tier schools were getting fewer resources per student in the 1980s and early 1990s, it was probably because those resources were being used more efficiently elsewhere in the economy. Diverting them to higher education might have been good for graduation rates, but bad for the country as a whole.
The battle is just beginning between those who believe that the fundamental solutions to many of society’s problems lie in expanding higher education and those who believe that, given the current state of our economy and society, higher education’s potential has been exaggerated. There will be more studies produced on both sides of the political divide, and observers should view them skeptically.
After all, when statistics are tortured sufficiently, just as with human beings, they are likely to confess to anything their inquisitor wishes to hear. But one statistic stands out in the Bound, Lovenheim, and Turner study: among the bottom quarter of college students in the 1990s cohort (ranked by math scores), only 11.4 percent graduated within six years. That number has not likely increased, and may have fallen to single digits. It is to nobody’s benefit to encourage more such students to enroll, and fail.