College rankings have an enormous influence on the decision-making of millions of students and their parents, and some of the unwarranted assumptions we make about their importance contribute strongly to the epidemic of stress we are seeing among teenagers today. The constant anxiety of never knowing whether they’ve done enough to be admitted to a “good school” is really, really bad for each individual student and, by extension, for society as a whole.

It’s time for these rankings to evolve.

The dominant force in the college rankings world is, of course, U.S. News & World Report (USNWR). The data they gather annually is useful when choosing a college, but our misguided reliance on their one-size-fits-all ranking is problematic. People are not only stressing out over differences that are much smaller than these rankings make them appear, but doing so based on someone else’s idea of what should matter in a college. Have you ever looked at the criteria and weightings they use?

[Note: All references to the USNWR ranking use the 2021 edition of their ‘Best Colleges’, which is the last edition not affected by the pandemic.]

6-year graduation rate: 25.6%
College administrators’ opinions: 20%
Spending per student: 10%
Class sizes: 8%
Faculty salary: 7%
Social mobility: 5%
Student indebtedness: 5%
Test scores: 5%
First-year retention: 4.4%
Professors with highest degree: 3%
Alumni giving rate: 3%
High school class standing: 2%
Faculty/student ratio: 1%
% of faculty who are full-time: 1%
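Taken together, these weights define nothing more exotic than a weighted sum: each school gets a subscore on each criterion, and the subscores are combined according to the percentages above. Here is a minimal sketch in Python of how such a composite works; the 2021 weights are real, but the 0-100 subscores and the simple normalization are illustrative assumptions, not USNWR's actual (more involved) method:

```python
# Weighted composite in the style of the USNWR "Best Colleges" score.
# The weights are the published 2021 weights; the 0-100 subscores are
# invented for illustration.

WEIGHTS = {
    "6-year graduation rate": 0.256,
    "College administrators' opinions": 0.20,
    "Spending per student": 0.10,
    "Class sizes": 0.08,
    "Faculty salary": 0.07,
    "Social mobility": 0.05,
    "Student indebtedness": 0.05,
    "Test scores": 0.05,
    "First-year retention": 0.044,
    "Professors with highest degree": 0.03,
    "Alumni giving rate": 0.03,
    "High school class standing": 0.02,
    "Faculty/student ratio": 0.01,
    "% of faculty who are full-time": 0.01,
}

assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # the 14 weights total 100%

def composite_score(subscores: dict) -> float:
    """Combine 0-100 subscores (one per criterion) into one 0-100 score."""
    return sum(weight * subscores[name] for name, weight in WEIGHTS.items())

# A hypothetical school scoring 90 on everything except administrators'
# opinions (70) loses 0.20 * 20 = 4 points from its composite.
subscores = {name: 90.0 for name in WEIGHTS}
subscores["College administrators' opinions"] = 70.0
print(f"{composite_score(subscores):.1f}")  # 86.0
```

Notice how mechanical this is: change one weight and the entire ordering can shift, which is exactly why the choice of weights deserves scrutiny.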

I doubt there’s a single student in the history of college admissions who would have come up with this list from scratch, yet millions of people treat the resulting ranking as if it were set in stone when deciding which colleges to consider.

There’s no need to analyze every criterion USNWR uses. A closer look at just the two most heavily weighted factors, plus the overall scores, should convince most readers that this is not the weighting they’d choose on their own.

The 6-year graduation rate is a good measure of how much students like a school and how well that school moves students along toward graduation without delays. For various reasons, it’s desirable to be in a place where the vast majority of students stay for their entire education and finish in a reasonable amount of time. But practically speaking, how many of you really think it should be 25% of your decision? If you have total confidence you’re going to graduate within the customary four years (or perhaps five for some STEM majors), why would you base a quarter of your decision on this number? Five to ten percent I could see, but with so many other important factors to consider… 25%?

And as for the opinions of college administrators: sure, they know more than most of us do about their competitors. But no single person has intimate knowledge of more than a few schools, so much of their opinion is likely based on hearsay, conjecture, and perhaps even the very rankings they’re helping to create! Another weakness in this measure is the 5-point scale used for the ratings. It would be foolish to argue that Princeton and Harvard haven’t earned the 4.9 they receive, though one wonders who gave them less than a 5 to bring them down to that 4.9. But why would you want to use a measure in which, with over 2,500 four-year colleges in our phenomenal higher education system, half of the administrators doing the rating draw the line between a 5 and a 4 at schools like Northwestern, Duke, and Brown? (All three average 4.5, which implies the raters are split roughly evenly between those two marks.) Would you really give this 20% of the weight if designing your own ranking? Would you even give it 10%?

The overall scores are perhaps the biggest problem with the USNWR ranking. It makes total sense, of course, that Princeton, Harvard, and Columbia would get scores that equate to an A+ on a 100-point grading scale (to which students and their parents will naturally compare these scores). But does anyone really believe that UC Berkeley and the University of Michigan are a B- by comparison, that Boston University and the University of Texas are worthy of only a C-, or that Brigham Young and the vast majority of other colleges are failures with scores below 65? Any of the schools mentioned above is capable of providing an excellent education to any student whose desired major it offers.
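Since the comparison to grades does the rhetorical work here, it’s worth being explicit about the scale readers instinctively apply. The cutoffs below are the conventional US grading scale, an assumption about how readers interpret a 0-100 score, not anything USNWR publishes:

```python
# Conventional US letter-grade cutoffs (an assumption about how readers
# interpret a 0-100 score; USNWR publishes no such mapping).
CUTOFFS = [(97, "A+"), (93, "A"), (90, "A-"), (87, "B+"), (83, "B"),
           (80, "B-"), (77, "C+"), (73, "C"), (70, "C-"), (67, "D+"),
           (63, "D"), (60, "D-")]

def letter_grade(score: float) -> str:
    for cutoff, grade in CUTOFFS:
        if score >= cutoff:
            return grade
    return "F"

print(letter_grade(81))  # "B-", roughly where the text places Berkeley and Michigan
print(letter_grade(58))  # "F"
```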

The other factors USNWR uses all have some value in making a decision, but you would likely skip some of them entirely or weight them differently. So why continue to accept their word on what should be important to you, and how important it should be? Until they (or someone else) offer you the data and the means to choose from dozens of criteria and weight them however you like, why not put together your own ranking using the available data? The decision is too big to base on someone else’s priorities, and it’s worth whatever time it takes to make it personally relevant.
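For the programmatically inclined, here is one minimal way a do-it-yourself ranking could work: pick your own criteria, assign your own weights, normalize each criterion across the schools you’re actually considering, and sort. Everything below (the schools, the criteria, the numbers) is an invented placeholder; real inputs can come from sources like each school’s Common Data Set or the College Scorecard:

```python
# A do-it-yourself weighted ranking. All schools, criteria, and numbers
# below are invented placeholders -- substitute your own data and weights.

MY_WEIGHTS = {                  # your priorities, not USNWR's; must sum to 1.0
    "has_my_major": 0.40,
    "net_price": 0.30,          # lower is better (inverted below)
    "class_size": 0.15,         # lower is better (inverted below)
    "graduation_rate": 0.15,
}
LOWER_IS_BETTER = {"net_price", "class_size"}

SCHOOLS = {
    "School A": {"has_my_major": 1, "net_price": 18000, "class_size": 25, "graduation_rate": 88},
    "School B": {"has_my_major": 1, "net_price": 32000, "class_size": 18, "graduation_rate": 93},
    "School C": {"has_my_major": 0, "net_price": 11000, "class_size": 40, "graduation_rate": 71},
}

def normalized(criterion: str) -> dict:
    """Min-max rescale one criterion to 0-100 across all schools."""
    raw = {name: data[criterion] for name, data in SCHOOLS.items()}
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0                      # avoid division by zero
    scaled = {name: 100 * (v - lo) / span for name, v in raw.items()}
    if criterion in LOWER_IS_BETTER:
        scaled = {name: 100 - s for name, s in scaled.items()}
    return scaled

def my_ranking() -> list:
    per_criterion = {c: normalized(c) for c in MY_WEIGHTS}
    totals = {
        name: sum(MY_WEIGHTS[c] * per_criterion[c][name] for c in MY_WEIGHTS)
        for name in SCHOOLS
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, score in my_ranking():
    print(f"{name}: {score:.1f}")
```

Min-max normalization is used here simply because it puts every criterion on the same 0-100 footing before weighting; any consistent rescaling would do.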

I decided to do this.