NOTE: The views expressed here belong to the individual contributors and not to Princeton University or the Woodrow Wilson School of Public and International Affairs.

Sunday, October 2, 2011

When It Comes to Colleges, Spare Us the Rankings (But Keep the Numbers)

Dan Fichtler, MPA


Arguably the most relied-upon college admissions advice for high school seniors comes not from a guidance counselor or an admissions officer or even a parent, but rather from the pages of a well-known magazine. The editors of U.S. News & World Report (USNWR) have just released the 2012 version of their annual college rankings (Editor’s note: Princeton #1!), and as in recent years, many soon-to-be applicants took another step toward higher education by consulting this publication to navigate the difficult decisions that lie ahead.

As has also become something of an autumn tradition, this year’s USNWR rankings were taken out to the woodshed by a wide variety of educators and pundits. The rankings have existed for nearly three decades now, and the charges laid against them are fairly predictable. Critics often cite the use of input-driven data (SAT scores and class rank of incoming students) rather than output-driven data (post-graduation salaries and percentages of alumni pursuing or holding advanced degrees). Another common complaint is the ability of colleges and universities to manipulate their data by changing their practices; that is, allocating resources toward the characteristics that the USNWR editors deem important, at the expense of the many other crucial elements of a college education. Anyone who has ever wondered why so many college courses are capped at the unusual level of 19 students, for example, can likely find an answer within the USNWR rankings formula.

But perhaps most objectionable is the idea that colleges and universities can be ranked in such a neat and simplistic way – or that they can be ranked at all. This criticism has more merit than any other that is regularly piled onto the USNWR editors (more on this point in a bit).

So how does USNWR determine these rankings? It assigns each college and university an overall score, calculated via a relatively simple process: data on a series of school characteristics are compiled, converted into common units, and summed, with each characteristic carrying a different weight. The schools are then ranked by their overall scores. (A short sketch of this arithmetic follows the table below.)

USNWR makes no attempt to hide its methodology[1], and yet anecdotal evidence suggests that very few high school seniors or their parents (or really anyone else, for that matter) understand how the scores are calculated. While an informal survey is certainly no substitute for rigorous hypothesis testing, I sought to gather some of this anecdotal evidence myself: I asked about a dozen well-educated individuals to estimate the weight given to a school’s acceptance rate in the USNWR rankings. Answers ranged from a low of 8% to a high of 100%, with most falling in the range of 15-20%. The correct answer: 1.5%.

The table below provides the factors (and weights) used to determine the USNWR overall scores.


Methodology Used by U.S. News & World Report in its 2012 College Rankings[2]

Component (weighting, out of 100%)

Undergraduate Academic Reputation: 22.5%
          Academic peer assessment: 15%
          High school counselor assessment: 7.5%
Retention: 20%
          Six-year graduation rate: 16%
          Freshman retention rate: 4%
Faculty Resources: 20%
          Average faculty salary (including benefits): 7%
          Proportion of classes with fewer than 20 students: 6%
          Proportion of faculty with the highest degree in their field: 3%
          Proportion of classes with 50 or more students: 2%
          Student-faculty ratio: 1%
          Proportion of faculty who are full-time employees: 1%
Student Selectivity: 15%
          SAT and ACT scores of entering students: 7.5%
          Proportion of freshmen graduating in top 10% of their high school class: 6%
          Acceptance rate: 1.5%
Financial Resources: 10%
Graduation Rate Performance: 7.5%
Alumni Giving Rate: 5%
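To make the arithmetic concrete, here is a minimal sketch (in Python) of the weighted-sum process described above. The top-level weights come straight from the table; the component scores for the illustrative school are entirely hypothetical, as is the assumption that each component has already been scaled to a common 0-100 range.

# A sketch of the USNWR-style scoring arithmetic. The weights are the
# top-level components from the table above; every score below is a
# hypothetical value assumed to be pre-scaled to 0-100.

WEIGHTS = {
    "undergraduate_academic_reputation": 0.225,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "graduation_rate_performance": 0.075,
    "alumni_giving_rate": 0.05,
}

def overall_score(component_scores):
    """Return the weighted sum of a school's component scores."""
    return sum(WEIGHTS[name] * score for name, score in component_scores.items())

# Hypothetical component scores for an illustrative school.
school = {
    "undergraduate_academic_reputation": 91.0,
    "retention": 97.0,
    "faculty_resources": 88.0,
    "student_selectivity": 95.0,
    "financial_resources": 90.0,
    "graduation_rate_performance": 85.0,
    "alumni_giving_rate": 60.0,
}

print(round(overall_score(school), 1))  # prints 90.1 for these inputs

Note how little a low-weight factor can move the total: a ten-point drop in the acceptance-rate sub-component, at its 1.5% weight, shifts the overall score by only 0.15 points, which is exactly the intuition my informal survey respondents were missing.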

Perhaps what appears to be systematic overestimation of the importance of acceptance rates comes from perceptions encouraged by the USNWR critics. Stories in the media that demonize elite universities for lowering their acceptance rates to game the rankings may lead readers to believe that this single variable plays an important role in the rankings formula. Of course, there are plenty of reasons to be concerned about ever-lower acceptance rates. There are also, however, plenty of reasons why schools might seek to lower their acceptance rates, USNWR aside. Attributing this trend to USNWR, directly or indirectly, without considering these other factors is nothing more than lazy analysis.

As the critics know, it is not difficult to find flaws in this system, or at least to question the validity of the specific weights assigned to various factors. What the critics often fail to acknowledge, however, is the valuable service that the USNWR editors provide. That service is the pure provision of information in a market that has otherwise been difficult for consumers to navigate.

Choosing which colleges to apply to, and eventually which one to attend, are important life decisions. For many people, future careers, lifelong friends, and even spouses are discovered during these years, which only raises the stakes on making good decisions. If we believe that high school students (and parents and counselors) are best served by having access to information on many characteristics of colleges and universities, then we must applaud those who make this information widely available and easy to process. That group includes not only USNWR, by the way, but also the editors of rival rankings systems, who created their rankings in part to rectify what they felt were flaws in the USNWR system.

The creation of new, widely circulated systems suggests that college rankings will not be disappearing anytime soon. Forbes and Washington Monthly have entered the rankings game in recent years, and the college rankings edition of USNWR continues to sell remarkably well. And despite the valid criticisms of the specific USNWR methodology, the larger question remains: can a simple ranking that collapses so many variables into a single ordering actually help consumers?

For example, is #12 Northwestern really an incrementally better university than #13 Johns Hopkins? And is #2 Amherst really an incrementally better liberal arts college than #3 Swarthmore? They surely are not – at least not categorically. Ranking schools on this type of continuous scale therefore makes very little sense. For some students, Northwestern is a better option than Johns Hopkins; for others, the opposite is true. The same goes for Amherst and Swarthmore.

So how do students make these decisions? They could use the wide array of data that USNWR puts forth, comparing schools on the characteristics that are most important to them, whether those are class size or graduation rate or academic reputation. Some students certainly do this. Many others, however, fall victim to the “rankings-as-gospel” syndrome. Swarthmore may be a better fit for their personality and interests, but they cannot get beyond the fact that Amherst is ranked higher, and a mismatch occurs. The solution to this dilemma is quite simple: remove the rankings, keep the numbers.

And so the critics are correct – USNWR seems to be at least partially to blame for mismatching and suboptimal decision-making. But USNWR is also at least partially responsible for the good decisions of those students who are able to look past simple rankings and delve deeper into the data. Since 1983, USNWR has provided crucial information to students in an easy-to-digest manner. For that, they deserve our (partial) thanks.



[2] Applies only to National Universities and National Liberal Arts Colleges.
