Commentaries
Pick a Number, Any Number

A study that tries to determine the economic impact of the University of North Carolina system takes too many liberties to provide meaningful estimates.

By Jay Schalin


July 24, 2009

How important is a university or university system economically to its state or region? Can such a thing have a simple number put on it?

Policy makers and academic officials often want some formal, quantitative measurement of the favorable impact their institutions have. After all, it is easier to support their arguments for additional funding using quantified data than by saying “trust me on this.”

Thus, the “economic impact study” has become commonplace in academia.

Earlier this year, N.C. State University resource economics professor Michael Walden produced a study that attempts to quantify the impact of the University of North Carolina (UNC) system. Walden is widely sought after for his opinions by the local media, and he also previously prepared a paper on the North Carolina workforce for the university system’s UNC Tomorrow Commission. UNC system president Erskine Bowles has praised this newer study, “Economic Benefits in North Carolina of the University of North Carolina Campuses,” and distributed it to his staff.

The central idea of these studies is to produce an estimate of the overall returns to the money spent on the university system. In other words, they attempt to provide an estimate of how much economic activity is produced by each dollar the state spends on the university system—“for each dollar spent on the universities, the economy grows by X dollars.” These returns can be “private”—captured by state residents as increased income or by businesses through increased sales—or they can be public, in the form of increased taxes (this explanation is an extreme simplification of the returns to education).

The methodology Walden employed is fairly standard for academic economic impact studies. Yet that methodology makes too many general assumptions and dismisses too many real-life dynamics to produce an accurate estimate of a university’s effects.

Walden readily admits that his study cut a lot of corners, for a variety of reasons. He suggests that, because of these shortcuts, his impact estimates are likely to be understated. While he is right in that assessment, his figures are, for other reasons, also likely to be overstated—and by magnitudes large enough to render them meaningless.

One of the most glaring errors of this study is how Walden derives the expected increase in lifetime income received by UNC graduates as a result of their attendance. He defines this expected increase as the difference between the average lifetime income of UNC graduates and the average of state residents who have the next lowest degrees. (For bachelor’s degrees, the next lowest degree is a high school diploma. For master’s degree-holders, the next lowest is a bachelor’s degree, and so on.)

For example, he found the average salary for mechanical engineers who graduate from N.C. State University, and from that figure he subtracted the average income of high school graduates to arrive at the annual increase in salary due to a UNC education. He did so for all majors at all fifteen schools in the system (the North Carolina School of the Arts was left out).

He used as his graduates all members of the Class of 2004 who remained in the state four years later. He assumed that all of these graduates will work to age 67, in order to be fully vested in Social Security.

He also computed the combined cost of the education for the Class of 2004—tuition, fees, books, and supplies, plus the income forgone while attending school—and subtracted it from the lifetime earnings. All of this was aggregated into a single number for all schools, with various computations applied to account for changes in income over the course of a career. The study concludes that the Class of 2004 at UNC-Chapel Hill will earn roughly $1.4 billion more over their lifetimes because of the school’s existence (he also applies a multiplier for indirect benefits that boosts that figure to $1.7 billion). The total for all fifteen UNC schools was $7.4 billion (boosted to $8.9 billion).
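The calculation just described can be sketched in a few lines. The function mirrors the study’s basic arithmetic; the per-graduate numbers below are purely illustrative, not figures from the study (which also discounts and adjusts incomes over the course of a career).

```python
def net_lifetime_gain(grad_salary, hs_salary, years_worked, education_cost):
    """Walden-style net lifetime earnings gain for one graduate:
    the annual premium over the next-lowest degree, summed over a
    working career, minus the direct and opportunity costs of
    attending. (The actual study also adjusts for income growth.)"""
    annual_premium = grad_salary - hs_salary
    return annual_premium * years_worked - education_cost

# Hypothetical numbers for a single graduate:
gain = net_lifetime_gain(grad_salary=60_000, hs_salary=35_000,
                         years_worked=45, education_cost=120_000)
print(gain)  # 1005000
```

Everything the critique that follows takes issue with is hidden inside those inputs: the choice of `hs_salary` as the baseline, and the assumption that the whole premium is caused by the degree.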

This is the standard procedure for university economic impact studies. And as a result, they invariably produce misleading conclusions. Walden even acknowledges one of the method’s most serious flaws—that many of the people who graduate from universities possess natural abilities that would permit them to earn more than other high school graduates, whether they got a college education or not. It is essentially impossible to identify how much of their higher incomes are the result of their innate abilities, and how much is due to their college education. If it were feasible to factor out the effects of graduates’ natural abilities, the $7.4 billion and $8.9 billion figures would be significantly lower.

But that is not the study’s only overstatement. An underlying assumption of the paper is that if the UNC system did not exist, none of its students would go to college elsewhere. That is patently false. If the UNC system did in fact disappear tomorrow, many of its current students would attend private colleges instead, since increased education often translates to increased income.

Public universities, due to their state subsidies, are able to offer an education that is fairly close in quality to that of private colleges, for a fraction of the cost to students and their families. It is only natural that people who could afford an expensive private education would opt for a nearly equivalent public education that is much less expensive. The numbers bear this out: at UNC, only 12 percent of the student body qualifies for financial aid (14 percent at N.C. State). Obviously, the majority of students at these two schools could attend private schools if that were their only option.

Indeed, a great many students across the UNC system would be likely to attend private schools if a high-quality public option did not exist. With this in mind, it becomes apparent that Walden’s estimates are grossly inflated.

Walden also makes a very fundamental error in calculating the estimate of the returns to state investment in higher education. He takes the total of all additional income earned by UNC graduates as a result of their educations ($7.4 billion for the low-end figure), and divides it by the amount of state university appropriations for the Class of 2004. He then suggests that the resulting quotient is the return to the state’s economy for each dollar the state spends on higher education (this return, in its simplest form, is $9.65).
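Working backward from the article’s rounded figures ($7.4 billion in additional income and a $9.65 return per state dollar), the appropriation base implied by Walden’s quotient can be recovered with simple arithmetic. The result is an inference from rounded numbers, not a figure reported by the study:

```python
total_gain = 7.4e9       # low-end additional-lifetime-income figure
return_ratio = 9.65      # reported return per dollar of state spending

# Appropriations implied by the quotient (total gain / appropriations):
implied_appropriations = total_gain / return_ratio
print(round(implied_appropriations / 1e6))  # 767 (millions of dollars)
```

The entire denominator in that division is state appropriations alone, which is the problem taken up next.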

But his formula does not take into account all the other investments in higher education—tuition, fees, endowment spending and federal grants. If person A invests one dollar, and person B invests one dollar, and the return is four dollars, it cannot be claimed that the return for a one-dollar investment is four dollars. Yet that is precisely what Walden’s study does, even though tuition, fees and grants account for much more investment in the UNC system than the state’s appropriations.
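The article’s two-investor example makes the dilution concrete. A minimal sketch (the dollar amounts are the article’s hypothetical, not the study’s actual funding mix):

```python
def return_per_dollar(total_gain, investments):
    """Return generated per dollar of *total* investment.
    Dividing the whole gain by only one investor's share, as the
    critique says the study does with state appropriations, inflates
    the apparent return."""
    return total_gain / sum(investments)

# Person A and person B each invest $1; the venture returns $4.
naive = 4.0 / 1.0                            # credit everything to A's dollar
honest = return_per_dollar(4.0, [1.0, 1.0])  # spread over all investment
print(naive, honest)  # 4.0 2.0
```

Since tuition, fees, and grants dwarf state appropriations in the UNC system, the honest denominator would shrink the $9.65 figure considerably.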

It is also unrealistic to assume that all UNC graduates who are in the state four years after they graduate will remain there, working until the age of 67. Many will move elsewhere, and many will stop working long before their expected retirement age.

Another major problem with the study is that it ignores the sort of “marginal thinking” that forms the foundation of much of economics. People usually spend their money on the most important things first—food, clothing, and shelter come before iPods and yo-yos. It is the same in higher education—the first dollars allocated are likely to subsidize serious students and serious pursuits, while the last dollars pay for slackers and unnecessary frills such as rock-climbing walls in the gymnasiums.

By providing a single number for the return on investment, Walden makes no mention of the certainty that the return on the first dollar spent differs from the return on the last. This is akin to saying that the likely return on the top student in the system—a brilliant, hardworking kid who majors in a scientific or technical field, completes school ahead of time, and goes on to either graduate school or a good position in industry—is the same as the return on an indifferent student of less than average aptitude who drops out with bad grades after several years.

Obviously, the state derives a great many benefits from the university system. But it also sacrifices to do so. Something that impact studies like this never ask is what the impact would be if the tax dollars used to fund some of the UNC system’s marginal activities were returned to the state’s residents, and permitted to circulate or accumulate as capital. It is possible that the money left in the hands of residents might cause even greater growth.

Walden also left out of the study several other major components of the university system that are likely to affect the economy. One is the system’s large service mission; the other is the entrepreneurial return from research. The service mission’s effect is not intuitively predictable. Some facets, such as the Small Business and Technology Development Center or the various technology transfer offices that help faculty with the patent process, might very well have a positive effect on the state’s economy. But many others are social service activities, such as providing health care or “leadership development” for teachers, that are likely to be a drag on the economy.

And even after conceding to Walden that university research activities have a positive effect on the state’s economy (that is not, however, a certainty), another question arises: if an estimate is grossly overstated by several large but unknown magnitudes, understated by several other large but unknown magnitudes, and built on suspect techniques, then exactly how accurate can it be? At what point does an estimate become just a guess, or even wishful thinking?

These studies tend to make universities or university systems appear to be limitless sources of prosperity. If that were the case, then all economic problems could be solved by state investment in higher education—try to imagine any other investment that returns $17.22 for each dollar invested (that is Walden’s high-end estimate of the sum of all public and private returns). Yet, despite the tremendous investment already made in higher education, we are not all driving Porsches or living in mansions.

It is understandable why policy makers would want to have a single number that they can point to as the return for a dollar of investment in the higher education system. Walden’s study serves that purpose, but only if the intention is to use the number for a sales pitch instead of thoughtful analysis. Sometimes no number is better than an unrealistic estimate.

 



Copyright © 2016 The John William Pope Center for Higher Education Policy
