Also, in this article, a person bemoans the opportunities their parents gave them: piano lessons, math competitions, and so on. Even though they have clearly benefited from these advantages, it's unclear to me whether the person acknowledges the position of privilege that such an upbringing grants.
Though I almost wish I'd been more vigorously encouraged to study piano, since I have little time or motivation to practice it now.
As indicated, getting into Stanford still qualifies as winning the elite college admissions lottery, even if one might have preferred Princeton or MIT. (And Stanford likely provides some advantages as well.)
I've felt the same way in tech. I've seen so many job listings that are just an endless series of buzzwords or the latest iteration of data adtech analysis integrated lake warehouse whatever and found myself wondering when and where it translates into actually affecting the lives of anyone beyond interchangeable corporate middle managers.
Still, I feel the author should have explicitly acknowledged that there are underprivileged kids out there who would kill for the opportunities he was given.
this is how people like Sam Bankman-Fried and Caroline Ellison develop sociopathic patterns of behavior and dysfunctional relationships: they are constantly pushed to play a game of artificial metrics from the time they are children.
it's almost like taking childhood away from people, a form of child labor.
Once a critical threshold of people start playing these RL career games, these terrible metrics get elevated into weird group fairness metrics for hiring/admissions/compensation decisions, no matter how inequitable the games are or how disparate the outcomes. The metric moves beyond convenience to something hard to root out. The terrible metric becomes tyrannical, and complaining about it makes you sound like a bad player blaming the game. Even if it was a game you never wanted to play to begin with.
Be the change you want to see in the world. Hire illegible people. Someone is going to tell you that you can't and cite vague legal reasons. Unless that person is an actual lawyer giving formal legal advice (or your boss), just ignore them.
As a general matter you should always ignore non-lawyers citing vague legal reasons for why you can’t do something.
I still agree with you, because I want to believe it pays off in the long run. But it definitely comes at a real short term cost.
More importantly you are helping to fight an anti-human machine that sacrifices endless years of (young! healthy!) life to pointless grinding.
Teaching is great, so there's that. But literally every company will let you adjunct, and Professor of Practice usually pays more than 20% of a faculty salary. You can supervise PhD students as interns or by taking a courtesy affiliation (and often have even more impact on those students than their overworked and under-engaged advisors). And university classroom teaching in the US now looks a lot more like 90s/mid-aughts high school teaching.
Government contracting sucks, and the academic variety is not any better. I'd literally rather watch paint dry at a military base than contract for DARPA. NSF isn't actually that much better.
Who the fuck wants to be a combination high school teacher and federal government contractor? Saints or sociopaths, and there are a LOT more of the latter than the former in higher ed.
The problem is that AI is weird, but not because of academia. In fact, right now it has been captured by industry, which is why progress has severely slowed[0]. Most people in the space now work in industry labs. Frankly, you can do more, you get paid A LOT more (2-3x), and you have less bureaucratic bullshit. But I think you're keenly aware of this industry capture, since you're mentioning aspects of it.
I don't want there to be any confusion: I think it is good that industry and academia work together. There are lots of benefits. But we also need to recognize that these two typically have very different goals, work at different TRLs, and have very different expectations about when the work will be seen as impactful. Traditionally, academia has been the dominant player in the high-risk, high-reward / low-level research space (yes, much more goes on too, but of people who do this type of research, you think academia), while industry research typically focuses on higher TRLs because they're focused on selling things in the near future. There's just a danger when you work too closely with industry: you can't have any wizards if you don't have any noobs.
But I'm not sure it is just ML that's been going this way. There's a lot of sentiment on this website where people dismiss research papers (outside ML) that show up here for not being viable products. I mean... yeah... they're research. We can agree that the value is oversold, but often that's by the publisher (read: university) and not the paper (not sure I can say the same for ML). It's a kind of environmental problem: if everything has to be a product, you can't be honest about what you did, and if discussing the limits, and what still needs improving to actually get a product down the line, gets you rejected, well... you just don't talk about that.
This is all "RL hacking," better known as Goodhart's Law. I've been saying we're living in Goodhart's Hell because it seems, especially in the last 5-10 years, we've recognized that a lot of metric hacking is going on and decided that the best course of action is not to resolve the issues but to lean into it. We've seen the house of cards this has created. Crypto is a good example. The shame is if we kill AI, because there is a lot of real value there. But if you're a chocolate factory and promise people that eating your chocolate will give them superpowers, it doesn't matter how life-changingly delicious that chocolate is, people will be upset and feel cheated. Problem is, the whole chocolate industry is doing this right now and we're not Willy fucking Wonka.
[0] It looks like more progress is being made than actually is, and there is a lot of progress that should have been made but wasn't, but these kinds of nuances are a bit harder to discuss without intimate knowledge of the field. I'll say that diffusion should have happened much sooner, but industry capture had everyone looking at GANs. Anything that wasn't got extra scrutiny and became easy to reject for not having state-of-the-art results (are we doing research or are we building products?)
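The "RL hacking" / Goodhart's Law dynamic can be sketched as a toy optimization. This is purely illustrative (the payoff rates are made up, not from the comment): an agent splits a fixed effort budget between real work and metric gaming, the proxy score rewards both, and only real work produces true value.

```python
# Toy sketch of Goodhart's Law: optimize the proxy, watch true value collapse.

def proxy_score(real_work, gaming):
    # Leaderboards/citations/likes: gaming pays 3x per unit of effort (made-up rate).
    return real_work + 3 * gaming

def true_value(real_work, gaming):
    # Only real work produces actual value.
    return real_work

BUDGET = 10
# An agent optimizing the proxy picks the allocation with the highest proxy score.
_, gaming_effort = max((proxy_score(BUDGET - g, g), g) for g in range(BUDGET + 1))

print(gaming_effort)                                      # 10: entire budget goes to gaming
print(true_value(BUDGET - gaming_effort, gaming_effort))  # 0: true value collapses
```

As long as the proxy pays better per unit of effort than real work, the rational metric-optimizer allocates everything to gaming, which is the whole point of the comment above.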
It's funny how so many designers of utopian paradises ended up creating dystopian hellholes, historically speaking, isn't it?
> The false quantification and rank ordering of things using AI will bring real-world weirdness in how people function, which has nothing to do with the functions they carry out. I call this the “Great AI Weirding”.
If anything AI is the sort of power tool that should let everyone make up their own rankings more easily, and be less limited by what others have decided people should be judged by.
Yes, higher quality work means higher chance of getting in, but we'd be naive to assume there's a strong correlation between the two given substantial evidence to the contrary and no clear mechanism to make such a connection.
> Usually researchers doing real SOTA work haven’t even had time for their other work to be cited heavily yet
Weird, I'd say the opposite. How to get high citations: tweak a currently popular model/architecture so that it gets SOTA results, place on the paperswithcode leaderboard (maybe don't even release code), release the paper to arXiv. The more datasets you cover, the better. Frankly, SOTA doesn't mean meaningful work. I say this even as an author of SOTA works.
The choice is now between increasingly tenuous/meaningless tenure after 5-10 years and a $500K/year lower bound for 10-12 years. That choice is... not a hard choice for anyone who values intellectual freedom. And the right answer sure as shit isn't the faculty position.
A good 50% of those faculty chasing NeurIPS papers are doing so because, at least once before going up for tenure, they will apply for positions at big tech. They come in not just non-executive, but often outside of management and at the bottom of the (Top IC)-[1-2] total comp band. If they net an offer they'll usually leave. The major barrier to an offer is usually ego and "is this person actually humble enough to be useful to other people".
I don't care if you are talking about top talent here; that is an insane thing to say. As a lower bound? What percentage of software engineers / AI practitioners / data scientists are making $500k/year? 0.1%?
this reminds me a lot of the recent book The Fund, about Bridgewater Associates, where they tried to come up with hundreds of metrics to rate each employee on, and then made employees constantly rate each other on iPads with custom software they spent massive sums of money building. If you didn't rate other people you got fired. After years and years of this it was all abandoned, a complete and total waste.
It’s easy to forget that the drive to make highly-quantified decisions is largely a recent phenomenon, with in-person charisma having a much longer history. The recent widespread dominance of online video (compared to text) is really just more of a return to this kind of charisma after a long period of textual dominance.
I think the future is dominated by people that understand how to use video (and way down the line, 3D presence tools), not those that are good at optimizing AI tools.
One example of this, I think, is how video searches on TikTok/YouTube seem to be replacing Google searches for younger people. The searcher of 2030 isn’t going to read a perfectly individualized AI-created blog post, they’re going to watch a video by someone they trust.
TLDR: widespread video will herald a return to charismatic authority, displacing quantification systems of authority.
Hidden metrics are at work when you're getting a home loan, in your insurance premiums, your success on dating sites, college admissions, whether somebody would hire you to do a DJ gig at a nightclub, everything.
It's all recursive bullshit games, and you won't know which ones so you're just gonna run on as many treadmills as you can, all at once, while fully knowing some of them aren't even worth anything - just not which ones.
Maybe one of the treadmills is how pleasant and agreeable your opinions are. How they make people feel. Maybe you should shut the hell up before you get yourself into trouble and drive up your premiums.
We saw examples of how simple quantification of people and activities, such as counts of likes, stars, commits, and papers, and even more informed metrics like the h-index, can lead to strange outcomes. AI will make the world even more quantifiable and, in many cases, falsely quantifiable. Ever since the first ape held two sticks in the left hand and three in the right and wondered which was more, ranking things by quantity has been in our nature. The ape’s descendants have now discovered a ranking hammer, and everything will look like ordered lists. Ordered lists bring legibility, and what is not legible cannot be governed and subjected to value extraction. The false quantification and rank ordering of things using AI will bring real-world weirdness in how people function, which has nothing to do with the functions they carry out. I call this the “Great AI Weirding”.
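Since the h-index is held up as a more "informed" metric, here is a minimal sketch (my own illustration, not from the article) of what it actually computes: the largest h such that h papers each have at least h citations.

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank   # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Two very different records collapse to the same number:
print(h_index([9, 9, 9]))        # 3: three solidly cited papers
print(h_index([100, 3, 3, 3]))   # 3: one breakthrough plus filler
```

That collapse is exactly the kind of strange outcome being described: the single number cannot distinguish a breakthrough from steady filler.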
This reminds me of what Baudrillard terms "the precession of simulacra," in which successively more abstract representations of reality (from crude maps, to "hyperrealistic" GTA V-esque video game maps that are sometimes "more real" than reality itself) end up supplanting and taking the place of the real. We no longer have people pursuing interests for their own sake (as per the "mathematician vs. mathlete" distinction made in the OP), but merely to construct a digital simulacrum of themselves, one which is able to inflate all the right metrics (there is a digression to Goodhart's Law [1] here) and win the same mechanistic games that we use as a proxy to measure value or worth in the world. Ceci n'est pas une pipe. [...] All of these things have gone beyond what they point to.
That's it; we no longer have real pipes, but only abstract symbols and depictions of them. Having "precessed" past the era when symbols were meant to point to, refer to, an underlying referent, they have become objects, referents in and of themselves - objects partaking of a purely abstract, symbolic reality. Instead of taking the pointer as a clue to investigating the nature of the referent, we accept the reality of the indirection itself; anything underneath our numerical abstraction is simply an "implementation detail." In other words, they get huge information satisfaction from ads, far more than they do from the product itself. Where advertising is heading is quite simply into a world where the ad will become a substitute for the product, and all the satisfactions will be derived informationally from the ad, and the product will be merely a number in some file.
- Marshall McLuhan, 1966. https://www.youtube.com/watch?v=bNxo7fK-MJs
Consider this substitution: "all the satisfactions will be derived informationally from the [social media profile], and the [person] will be merely a number in some file." And yet of course, if you are "illegible," inscrutable, with little to no digital media presence nor statistics on your past "RL Career Game" history and performance, are you competent at all? Do you even _exist_? Does Harry, mathlete-turned-mathematician, even understand mathematics? Where is his Olympiad performance history? "[...] he became useless at competitions?" Oh.

I have recently been watching John Vervaeke, assistant prof at UofT in the fields of cognitive science and Buddhist psychology, and his lecture series "Awakening from the Meaning Crisis," where he describes the phenomenon of cognitive fluency:
When you increase the ease at which people can process information, regardless of what that information is, they come to believe it as more real, they have more confidence in it, etc.
- John Vervaeke, "Continuous Cosmos and Modern World Grammar," Awakening from the Meaning Crisis, 2019. https://www.youtube.com/watch?v=C1AaqD8t3pk
We have increased the ease at which people can process information _about other people;_ and regardless of any correlation between the "quantified self," the person's metrics, and the person themselves, we come to believe that simulacrum of the person is more real, develop more confidence in the constructed persona they project and in their capabilities, etc. Conversely, a dearth of information about an individual makes them "illegible," somehow fictional, less real.

For all this, it takes a great leap of faith to object to playing these kinds of meaningless abstract games, at great personal risk and cost to one's self; yet I am not sure how to meaningfully participate in these systems without upholding and lending implicit assent to the fictions they rely on. I am reminded of some meditations on Moloch regarding the matter.
One hope I take from Vervaeke's series is his exploration of the notion of shamanism and its role in society: developing new psychotechnologies and disrupting civilization's faculties for pattern recognition, altering our sense of what is important, our sense of self, and the very way we think in the world. I look forward to a revival of the shamanistic tradition, applied to "cyberspace" (heh), to help us navigate the ways in which digital technology has altered our senses of meaning, of what is actually important, and indeed of self and identity.