Wednesday, September 15, 2010

Q: In what ways are the US News & World Report rankings for colleges flawed?

About a quarter of the U.S. News formula is based on an opinion poll asking university administrators (presidents, provosts, and deans) and high school college counselors to rate the reputations of the colleges.

One criticism: does it really speak well for the validity of the results that 23% of the score comes from administrators at competing universities and from high school employees? Does a 45-year-old guidance counselor at Evanston high school or a 60-year-old dean at the University of Chicago really have any idea whether you'll get a better undergraduate education at Stanford, Harvard, Penn, or Yale if you go there in 2011?

And, of course, can a national university really have a single, unitary reputation score? Surely the kind of student who would thrive at Caltech (the #1 school in the country a decade ago, despite offering no BA degree) is not the same as the student who would thrive studying medieval literature at Yale.

A second criticism: like every other component of the U.S. News formula, the opinion poll results come with no margin of error! The rankings are calculated as if every input -- the competitors' and high-school employees' view of a school's "reputation," its graduation rate, its average class size -- were known with absolute certainty. That is not so.

In addition to statistical error, there's also a substantial systematic error in some of the parameters -- e.g. the "average class size" has a lot of slop in what you count as a class (just lectures? lectures and discussion sections? lectures, discussion sections, and tutorials?). So does the graduation rate, etc. These figures should have error bars on them too.

I have discussed this briefly with Bob Morse, the guy at U.S. News who calculates the rankings, but he wasn't receptive to the idea that they should put appropriate error bars on all the inputs and propagate the uncertainty to the outputs, marking statistical ties as appropriate. (I suspect these statistical ties might cross substantial swaths of the final rankings, which may partly explain why U.S. News wouldn't be excited to try to sell magazines with that technique -- who wants to announce a nine-way tie for 1st place?) His position was that they assume the data coming from the schools is right, and they don't waste time worrying about what the rankings would be if the supplied figures weren't right.
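The error-propagation idea can be sketched with a small Monte Carlo simulation: resample each school's composite score within its assumed uncertainty and watch how the ranks shuffle. The schools, scores, and error sizes below are invented for illustration -- they are not the actual U.S. News inputs, which the magazine does not publish with uncertainties.

```python
import random

# Hypothetical composite scores (out of 100) and assumed standard errors
# for five made-up schools. Illustrative numbers only, not real data.
schools = {
    "School A": (93.0, 2.0),
    "School B": (92.0, 2.0),
    "School C": (91.5, 2.0),
    "School D": (85.0, 2.0),
    "School E": (78.0, 2.0),
}

def simulate_rank_ranges(schools, trials=10_000, seed=0):
    """Monte Carlo: draw each score from a normal distribution with the
    given mean and standard error, rank the draws, and record the range
    of ranks each school attains across all trials."""
    rng = random.Random(seed)
    names = list(schools)
    ranks = {name: [] for name in names}
    for _ in range(trials):
        sampled = {n: rng.gauss(mu, sigma) for n, (mu, sigma) in schools.items()}
        ordered = sorted(names, key=lambda n: sampled[n], reverse=True)
        for position, name in enumerate(ordered, start=1):
            ranks[name].append(position)
    return {name: (min(r), max(r)) for name, r in ranks.items()}

for name, (best, worst) in simulate_rank_ranges(schools).items():
    print(f"{name}: rank ranges from {best} to {worst}")
```

With scores this close together, the top three schools trade places constantly -- a statistical tie -- while the schools with larger gaps hold steadier ranks. That is exactly the kind of honest uncertainty the published rankings suppress.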
