[Photo caption: A woman walks on the Columbia University campus, Monday, March 9, 2020, in New York. U.S. News & World Report removed Columbia University from the rankings in its 2022 edition of Best Colleges. The publisher said in a statement Thursday, July 7, 2022, that the Ivy League institution failed to substantiate certain 2021 data it previously submitted, including student-faculty ratios and class size. (AP Photo/Mark Lennihan, File)]

Every year, journalists, university administrators, and high school students keenly await the release of the U.S. News & World Report college rankings to see which schools have moved up or down on the lists. This year, the big news was that Columbia University dropped from 2nd to 18th on U.S. News’s ranking of national universities—not for any measurable decline in its quality, but because it had fudged the data it submitted, a fact the university itself now admits.

Columbia isn’t the only school that has recently been caught sending the magazine phony numbers. This summer, U.S. News removed Villanova University from its “Best Value” list for “misreporting” its data. An internal University of Southern California review this past spring confirmed a similar U.S. News numbers-boosting scheme at its education school, and a Temple University business school dean is now serving time in prison for such activity. Other colleges, including Tulane, Claremont McKenna, Emory, and the University of Pennsylvania, have been engulfed in similar scandals over the past decade.

The common source of these controversies is the way U.S. News puts together its metrics. While just about any ranking, college or otherwise, can be gamed, U.S. News relies heavily on proprietary surveys and self-reporting, which gives colleges an especially large incentive to cheat.

The most notorious of these is a “reputational survey” the magazine sends to college presidents and provosts, asking them to rate peer institutions. The survey suffers from poor response rates (down to 34 percent this year from 48 percent a decade ago), and leaders have long gamed the results by rating their own schools highly while rating everyone else poorly.

U.S. News also asks colleges to follow something called the Common Data Set (CDS): definitions and guidelines meant to standardize the reporting of data on class sizes, student-faculty ratios, and a host of other metrics that the federal government does not collect. It’s a worthwhile effort, but compliance is voluntary, and while most colleges participate, some don’t—including, until last month, Columbia University. Moreover, it is hard for U.S. News to validate the accuracy of the data that colleges report.

In addition to the main list of best colleges, U.S. News also ranks professional programs such as law, business, and education schools. All the data for these rankings comes from surveys that U.S. News sends out to the programs, as the federal government collects little information about particular graduate programs of study. Professional schools are under especially immense pressure to climb up the U.S. News rankings. Not surprisingly, these programs are where a lot of the scandals are happening.

The chances of being caught are small because it’s hard, if not impossible, for U.S. News to confirm the data independently. That’s why evidence of misconduct tends to surface only if a school’s numbers appear so implausible that someone launches an investigation or when a university insider blows the whistle, as in Columbia’s case. Since blowing the whistle is itself risky, the likelihood is high that there is a lot more misreporting happening at a lot more colleges than has so far come to light.

The way for a magazine to minimize manipulation and misreporting is not to rely on hard-to-check proprietary data but instead to base rankings on information the federal government collects. That is how the Washington Monthly puts together its annual college rankings, and it is no coincidence that the Monthly has avoided the kinds of scandals that have bedeviled U.S. News. Federal data is, by definition, public, and therefore easier for outside observers to check. And even though the feds rarely penalize colleges for submitting flawed data, schools know the risk is there.

That doesn’t mean colleges don’t still try to manipulate the numbers they report to Washington. For instance, to make it seem as if they are serving more lower-income students, selective colleges juke their stats on Pell Grant admissions by letting in lots of students who barely qualify financially. Or they game graduation rates by starting such students in the summer or spring instead of in the fall cohort that the federal government typically uses to calculate graduation rates. But there are ways of correcting for that behavior. For instance, the Washington Monthly calculates actual versus predicted graduation rates that adjust for student characteristics and household incomes. We also use a newer federal graduation rate that includes part-time and transfer students.

Moreover, some of the data the Washington Monthly relies on most—such as how much students earn after they leave college or the degree to which they are paying off their student loans—can’t be manipulated by colleges because the federal government collects it independently via student loan and tax records. Other federal data the Monthly uses, such as the number of ROTC students at a college or the percentage of work-study slots it devotes to community service jobs, can only be “gamed” by colleges doing more of what we want them to do, like encouraging students to serve the country.

Why doesn’t U.S. News base its rankings solely on public data, as the Monthly does? The answer is that the federal government doesn’t collect the numbers U.S. News needs to calculate the qualities its rankings are meant to reward: a college’s wealth, prestige, and exclusivity. As long as those are the values U.S. News chooses to define college excellence, its rankings will be subject to being played.

There is something Washington can do, however, to partially save U.S. News from itself. Common Data Set elements like class size and student-faculty ratios are of sufficient public interest that the federal government should consider adding them to its annual data collection from colleges. It should combine that with occasional audits of the data colleges provide. Those two steps would add a measure of integrity to the college rankings that millions of prospective students and their families rely on, and perhaps spare the rest of us from having to read so many stories about colleges pulling the wool over U.S. News’s eyes.


Paul Glastris is editor in chief of the Washington Monthly.

Robert Kelchen, a professor of education at the University of Tennessee, Knoxville, is data manager of the Washington Monthly college guide.