There are good reasons to believe that, for example, admissions selectivity or graduation rates carry some kind of quality signal. But as soon as it becomes strategically important to 'score well' on those metrics, it becomes worthwhile for universities to do things that improve the metric without improving quality - such as rejecting qualified applicants or graduating students who ought to have failed. At that point the signal is much less useful to anyone simply trying to use it to understand what's going on.
Perhaps someone at Harvard or MIT can chime in with whether such contentions exist outside of Columbia, but I'm not too surprised to see a professor criticize the school for somewhat blatant wrongdoing.
The extreme competition seems to be overtaking all of the elites. MIT is the #1 university brand in the world, but it spent most of its integrity and soul to get there.
There's a fine line you need to walk to become a faculty member at elite schools.
You need to lie a little on grant applications to align what you want to do with what will be funded. You need to lie a little in publications so they have impact. You need to fight for credit, sometimes on work you didn't do. As these behaviors become normalized, the winners do them more and more; otherwise, you won't get that faculty job.
The culture slowly trickles down. I think most grad students at MIT are still honest, but not the most successful ones (most of the ones who find faculty jobs are at least a little bit crooked). Second-tier school faculty slots are filled with graduates of first-tier schools.
MIT has a traditional hacking culture that emphasizes breaking rules. That worked well when it meant climbing on rooftops, but it works less well when the endowment is O($100M) per faculty member and there's money to embezzle through complex corporate schemes and financial games.
There isn't much contention between faculty and admin right now, but there definitely is between students and the Institute. MIT grad students are working to unionize, and the Institute, to union-bust.
I can't speak to any run-down labs, but regarding cramped working conditions -- space is just always going to be at a premium in Manhattan. Arguably Columbia is going above and beyond to acquire more space by opening up the Manhattanville campus [0,1,2].
In general, when academics complain about things, I take it with a grain of salt. Having said that, I don't doubt for a minute that the administration juked the stats to get a better US News ranking.
[0] https://neighbors.columbia.edu/news/robert-fullilove-appoint...
[1] https://www.stirworld.com/see-features-twin-buildings-with-s...
[2] https://ny.eater.com/2022/2/22/22939502/columbia-jerome-gree...
There are plans in the works to fix these infrastructure issues but the timeline is frustratingly long.
"(a) the logical absurdity of adding together completely unrelated statistics to produce a single measure of merit — the key point being that you can produce an astonishing range of different results depending on the relative weight each component factor is assigned. And there is simply no logical, a priori basis for establishing such a weighting objectively. Do SAT scores count 30% of the total score? 32.2%? 18.78234%? (How about zero?) It's the classic apples + oranges – bananas/kumquats = fruit salad approach to statistics, and is completely meaningless."
https://budiansky.blogspot.com/2012/02/us-news-root-of-all-e...
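To make that concrete, here's a minimal sketch (two made-up schools, hypothetical 0-1 scores on two components) showing how the same data yields opposite rankings under two different, equally arbitrary weightings:

    # Two hypothetical schools scored 0-1 on two components; which one
    # "wins" depends entirely on the arbitrary weights chosen.
    schools = {
        "School A": {"sat": 0.95, "grad_rate": 0.70},
        "School B": {"sat": 0.70, "grad_rate": 0.95},
    }

    def composite(scores, weights):
        return sum(weights[k] * v for k, v in scores.items())

    for weights in ({"sat": 0.7, "grad_rate": 0.3},
                    {"sat": 0.3, "grad_rate": 0.7}):
        ranked = sorted(schools, reverse=True,
                        key=lambda s: composite(schools[s], weights))
        print(weights, "->", ranked)
    # {'sat': 0.7, 'grad_rate': 0.3} -> ['School A', 'School B']
    # {'sat': 0.3, 'grad_rate': 0.7} -> ['School B', 'School A']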
The US News & World Report ranking system apparently depends (at least in part) on class size (8%), proportion of faculty with terminal degrees (3%), proportion of faculty who are full-time (1%), student-faculty ratio (1%), financial resources per student (10%), retention and graduation rates (35%), student debt (5%), and a "'peer assessment survey' [20%] in which college presidents, provosts, and admissions deans are asked to rate other institutions."
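As a rough sketch of that arithmetic (the component scores below are invented; note the listed weights sum to only 83%, so the remainder presumably sits in categories not mentioned here):

    # US News-style composite using the category weights listed above.
    weights = {
        "class_size": 0.08, "terminal_degrees": 0.03,
        "full_time_faculty": 0.01, "student_faculty_ratio": 0.01,
        "financial_resources": 0.10, "retention_graduation": 0.35,
        "student_debt": 0.05, "peer_assessment": 0.20,
    }  # sums to 0.83; the remaining 0.17 is in unlisted categories

    # Hypothetical normalized (0-1) component scores for one school.
    scores = {
        "class_size": 0.80, "terminal_degrees": 0.90,
        "full_time_faculty": 0.70, "student_faculty_ratio": 0.60,
        "financial_resources": 0.50, "retention_graduation": 0.95,
        "student_debt": 0.40, "peer_assessment": 0.85,
    }

    composite = sum(w * scores[k] for k, w in weights.items())
    print(f"composite: {composite:.3f} of a possible {sum(weights.values()):.2f}")

Note that retention/graduation (35%) plus the peer survey (20%) alone account for over half the total, which says something about where the incentive to massage those particular numbers comes from.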
All that data can be valuable but says little directly about the quality of education received by students or the quality of research, though the latter might be outside USN's scope. Sometimes, objectivity is a distraction because the objective data is too limited to inform us. For example, objective data about a play - number of words, etc. - doesn't tell you much about it.
In those cases, expert judgment is an excellent tool, and I think USN's 'peer assessment survey' is the right approach for many purposes, but USN surveys people with narrow knowledge: administrators, including admissions officers.
I might look at Times Higher Education, which performs a 'reputation survey' that asks (IIRC) thousands of published, tenured faculty about the quality of other schools' departments in their field of expertise. These are people with expertise and significant access, though of course their understanding of the student's perspective will be limited. (Columbia rates around 10th, IIRC.)
I've been obsessed with computer science rankings for a while, and worked with others to compare US News Computer Science rankings with other computer science rankings: https://drafty.cs.brown.edu/csopenrankings/
Some rankings are naturally harder to game, like whether your bachelor's or doctoral graduates get hired as professors at other research universities, or whether professors at your university win best-paper awards at leading conferences. So I've been looking at whether various rankings are "biased".
There are some clear biases in US News: it ranks Caltech 11th, but Caltech sits at 39 in the aggregate ranking because it does poorly on faculty publications and best-paper awards. Yale CS is another department US News ranks highly (20th) that has an aggregate ranking of 35. Harvard CS does amazingly well on placement (ranked 6th for its undergraduate and doctoral students becoming professors) but has an aggregate ranking of 23.
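A crude way to quantify that divergence, using just the ranks mentioned above (the full comparison at the link covers many more schools and ranking sources):

    # Signed gap between US News rank and the aggregate rank for the
    # departments mentioned above (positive = US News is more favorable).
    ranks = {
        # school: (us_news_rank, aggregate_rank)
        "Caltech": (11, 39),
        "Yale": (20, 35),
    }
    for school, (usn, agg) in ranks.items():
        print(f"{school}: US News {usn}, aggregate {agg}, gap {agg - usn:+d}")
    # Caltech: US News 11, aggregate 39, gap +28
    # Yale: US News 20, aggregate 35, gap +15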
That's a weird metric for judging education quality (which is at least what college rankings purport to measure). I can see cases where either a high or a low value could be a marker of either good or bad educational quality.
“In each previous section, what was at issue was a discrepancy between two figures, both obtained from data provided by Columbia. Regarding class sizes, the information provided to U.S. News conflicts with the information in the Directory of Classes. Regarding terminal degrees, the information provided to U.S. News conflicts with the information in the Columbia College Bulletin. Regarding full-time faculty, the information provided to U.S. News conflicts with the information provided to the Department of Education. And so on.”
The big conclusions are that Columbia seems to be providing inaccurate data, and that one of the outcomes of chasing rankings is that transfer students end up as second-class students, at least at Columbia.
I think it's a data-driven case of how elite universities can perpetuate a system of reduced social mobility. At Columbia, the objectively poorer transfer students support the wealthier non-transfer students, and the graduation rates show that disparity.
Interesting: when I was in law school, transfer students benefited from the fact that they avoided our harsh first-year grading curve (20% A's, 60% B's, and 20% C's). The curve for second- and third-year classes was much more lenient, and it was relatively easy to get a 3.5 average in those years.
As a result, the transfer students disproportionately ended up with the highest GPAs, which seemed unfair to those of us who were there all three years.
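Back-of-the-envelope math on why that happens, assuming a standard scale (A = 4.0, B = 3.0, C = 2.0), equal credits per year, and that a transfer's GPA here reflects only their post-transfer grades:

    # The strict first-year curve pins the class-wide average GPA:
    first_year_avg = 0.20 * 4.0 + 0.60 * 3.0 + 0.20 * 2.0  # = 3.0

    lenient_avg = 3.5  # the 2L/3L average mentioned above

    # Three-year students average across all three years...
    three_year_gpa = (first_year_avg + 2 * lenient_avg) / 3  # ~3.33
    # ...while transfers are graded only under the lenient curve.
    transfer_gpa = lenient_avg  # 3.5

    print(f"three-year student: {three_year_gpa:.2f}, transfer: {transfer_gpa:.2f}")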
He seems to consider the Ph.D. the only terminal degree, and specifically calls out the Arts school for having faculty with master's degrees. But in most arts fields, an MFA (though usually not an MA) is a terminal degree.