"The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation."[a]
The very smart folks at the Alan Turing Institute are learning firsthand how bitter the lesson can be.
---
Short-term improvements made by domain-specific AI result in better outputs than more general AIs, ceteris paribus. But these better outputs can later be fed back into more powerful general-purpose AIs, and consuming the product of the data and compute invested in the domain-specific models is a very effective way to train domain-specific behavior.
Today, we see this in reverse – people are training smaller models based on outputs from GPT-4. However, I expect that we'll start to see more and more training going the opposite direction in the future: Domain-specific generative models will be used to build scenarios for large general-purpose AIs to train against.
Here's a concrete example – image diffusion models are really bad at physics, so you can't tell one to draw a person upside down: the concept isn't well-represented in the dataset, and if you force it with something like ControlNet you typically get a disfigured and horrific image. So obviously diffusion models are not the best long-term solution for image generation. But how do you get this concept of "upside down" into an AI model? Well, maybe you add some neat segmentation technique that uses several diffusion models and rotates and stitches together their outputs. Great, you've made an upside-down generator.
Now, you generate 100,000 images of "upside down" people, and the next advance in image generation AI can come along and learn that concept with ease thanks to the larger data set that it has.
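As a toy illustration of that bootstrapping step (everything here is hypothetical – `generate_image` is a stand-in for a real diffusion-model call, not an actual API), the pipeline might be sketched as:

```python
import random

def generate_image(prompt, size=4):
    # Stand-in for a diffusion-model call: returns a size x size
    # grid of pixel intensities, deterministically keyed off the prompt.
    rng = random.Random(prompt)  # random.Random accepts a string seed
    return [[rng.randint(0, 255) for _ in range(size)] for _ in range(size)]

def flip_vertical(image):
    # Rotate 180 degrees: reverse the row order, then reverse each row.
    return [row[::-1] for row in image[::-1]]

def build_upside_down_dataset(n):
    # Generate n "upright person" images, flip each one, and pair it
    # with a caption the next-generation model can learn from.
    return [
        (flip_vertical(generate_image(f"a person, sample {i}")),
         "a person upside down")
        for i in range(n)
    ]

data = build_upside_down_dataset(3)
print(len(data), data[0][1])  # prints: 3 a person upside down
```

The point isn't the flipping trick itself – it's that the narrow, hand-engineered generator exists only long enough to mint labeled training pairs for the more general model that follows.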
So it's not just that "more compute wins"; it's more like: not only does more compute win, it wins even more because short-term improvements feed directly into the data pipeline that enables it to win.
The reason the US was able to pull off a leap forward via OpenAI or LLaMA is that it has Nvidia as basically a national treasure. But it's an integrated whole: the US has all the components necessary, and the ecosystem that produces all the components (including talent, thought process, money, start-ups, and the pay scale to lure talent).
The Europeans have certainly never lacked for the brains side of being able to do it. But until they fill out the rest of the ecosystem they won't be able to really compete in AI (they'll lag far behind, with lots of blaming and empty-promise big government projects). And China is its own worst enemy these days; all we need is for Xi to remain in power indefinitely and he'll throttle their potential as a global competitor.
The US is close to locking up another round in the tech wars, riding on the same approach that has served it so well since WW2. Hopefully our do-something legislators stay hands-off long enough (i.e., don't snatch defeat from the jaws of victory).
...and a culture of risk-taking and ambition to change the world.
The heart of the bitter lesson is "don't try to codify 'insight' into the process". It's basically the age-old "you don't know what you don't know".
The Transformer is kind of a perfect example. It boasts algorithmic improvements over RNNs, and LLMs are by far the best-performing take on language modelling ever. And yet the architecture itself contains basically no breakthrough in understanding language. It's an improvement over standard RNNs, but not because of any newfound insight into language itself.
Basically trying to cram human high level instincts/insights into the process of solving a problem doesn't work better than giving a general architecture tons of data and letting it figure that all out by itself.
If you look at what, e.g. Andrew Ng was talking about last year, there was a big emphasis on "small data" and getting good datasets.
Btw, Rich Sutton was born in the U.S. but renounced his American citizenship, becoming Canadian.
So the plot twist comes with a bit of irony!
It seems like the Netherlands completely missed the boat on LLMs, too… but I don’t blame them. I just hope they pivot quickly.
Let this be a harbinger for the rest of the industry -- you can "get away" with paying far below market for talent, but you make yourself vulnerable to getting generationally leapfrogged like this because the cutting edge work in your art is happening under someone else's roof.
That said, if the examples he gives of what they were working on are representative, it implies they were spending their time chasing trends instead of doing fundamental research, and chasing the wrong ones at that. I'd suspect they're doing more than what was implied, though.
Is it any surprise that when you hire people with PhDs and you pay them less than bartenders that you don't get the best people? Those people go to Google or Microsoft and make 10x as much.
I made $350k. I have a college degree in computer science, but no PhD.
Why would any top talent take a job at 50k?
SF wages would literally put them in the infamous greedy 1%.
This is part of a trend: the birthplace of the Industrial Revolution lost most of its manufacturing industry, the nation that once championed free trade is cloistering itself via Brexit, the inventors of football (a.k.a. soccer) are falling behind in the World Cup, the nations that once were vassals to the British Empire are drifting away from the Commonwealth, and Scotland ...
Well, it kinda looks like Great Britain is becoming less Great each passing decade.
In World War I, four empires collapsed: czarist Russia, the Ottoman Empire, Austria-Hungary, and the British Empire.
The Austrians already accepted it. The Russians and Turks are refusing desperately to accept it. The British didn't even notice it.
I'm a Brit and unfortunately I think you're broadly right. I wish my children were growing up in a more confident society that looks outward and forward, not inward and to the past.
The current government wants the UK to be a "powerhouse" (or whatever the latest slogan is) for AI, EVs, biotech, etc. But they're stuck in a view of the world that is somewhere between nostalgic nationalism and outright nativism, pumping out constant culture-war paranoia about "immigrants" and betrayal.
So we get Brexit - an economic catastrophe - and their desperate need to be seen to be competing rather than collaborating on science research. Non-UK students avoid our universities because they can't get visas or because they're actively harassed by the government once they're here. Scientists leave the UK for countries where they can get grants and decent equipment and actually do collaborative research.
Meanwhile we build useless aircraft carriers that we can't afford to operate, and stage bread-and-circuses pageants in London - while the food banks have never been busier, and huge numbers of people grind away in poverty. But the important thing is that we're good at tradition and we have history.
The country's industrial and scientific base is decaying, as is its infrastructure outside of the London bubble, and none of the grifters and fantasists running the country have a clue. I'll be advising my kids to emigrate.
"Much excitement in the family whatsapp group this morning at the prospect of an Indian and a Pakistani bringing about the partition of Britain" - https://twitter.com/anandMenon1/status/1640612440051163137
An Indian I work with also found this quite amusing.
Turkey did become irrelevant after WWI, but is far more of a power today than it was 30 years ago, and given the relative dysfunction of the neighborhood, it's only going to get stronger.
I think it is just… the idea of doing something “in a country” is just generally an unnecessary and vexing constraint. “American” companies are leading the pack here because America is a nice regulatory environment for a Global company to plop down a headquarters.
Also it is research stuff, there’s a random aspect to it. We’ve just got more dice to roll by virtue of having more researchers in the US.
Possibly also Malta, given the brief window of opportunity where they might have ended up with seats in Westminster, but I'm much less confident about that.
What if LLMs become stagnant and some other approach is needed, perhaps in tandem with it? Work that seems irrelevant today may become useful then.
I’m not saying that their approach is this, just that failure to produce a LLM doesn’t necessarily equal an embarrassing failure.
There might be some argument on other metrics - publications, students trained, lectures, recognition, whatever - that shows this institute is lagging. But not being part of the LLM wave implies nothing about their success or failure.
> I’m not saying that their approach is this, just that failure to produce a LLM doesn’t necessarily equal an embarrassing failure.
That's not really a rebuttal of the article, though. They specifically state that they aren't blaming the institute for not producing an LLM, nor even for not predicting them.
Substantively, this just seems ludicrously short-sighted. If all investment was focused on the most recent AI model to have success, DL wouldn't exist. I'm definitely saving this article for the day in the near future when people realize, hey, maybe the entire field of AI & cognitive science wasn't just wasting time for the last 70 years, and maybe those ideas will also be carried on the rising tide of Moore's Law.
> Their top piece of content was “Data as an instrument of coloniality: A panel discussion on digital and data colonialism”. Do any AI specialists think this work is going to push the bleeding edge of AI research in the UK?
Because I think that speaks directly to his point and has nothing to do with woke wars.
The state forced him to undergo chemical castration because of his homosexuality. Same state kept his achievements and contribution to the war effort a secret up until after his death, so they could persecute a war hero without the public knowing about it.
Crazy to think he was convicted in 1952, the same year Elizabeth became Queen and head of state. She could have simply overturned his conviction. The man saved women, men, and children of all races and orientations from a horrible end. Had he not cracked the Enigma's cryptography, there would most likely be nothing left today of the crown that persecuted him. Blown to dust by the Luftwaffe.
If only the British government had extended the same humanity to Turing himself.
It is such self-inflicted misery. The bright future for the UK would have been as a core member of the EU: helping shape a large economic space with massive amounts of talent, happily moving around the wonderful cities, tapping the endless cultural heritage, and building a digital society congruent with the European way of life and values (which in various important ways differ from the US). While the nationalistic reflex is not as strong elsewhere in Europe, it is still a hindrance that shows up in countless frictions.
In any case, LLMs are just another stop on the journey. If people stop digging while in a hole, there is always a way out.
Or anything else, for that matter? The current economic woes of Britain are no better or worse than those of its western EU peers, contrary to the repeated prattle of the UK having committed "economic suicide". Or, rather, if the UK has committed suicide, so has the rest of western Europe.
That people in Europe have concrete and non-trivial views about important matters is fairly evident if you simply check what kind of laws and regulations they are passing. So there is a distinct vision.
That there is plenty of research talent that is educated to cutting-edge level with citizen tax money is also pretty evident if you check who has been behind some important such AI "innovations". So there is also capability to execute on the vision.
The weakness of the EU is that it can't (not fast enough, anyway) build the structures (internal markets etc.) that will overcome nativist instincts and accumulate the resources and critical mass where they are needed. Brexit made everything just a little bit harder.
The UK leaving the EU means, for example, that it's cut off from academic research networks. People are by now even studying this effect quantitatively [1].
Brexit also means that the entire financial system in the EU (which was very London centric) is in shambles. London as a financial center is declining [2] with implications for the shape of Europe's capital markets, the developments around new forms of digital finance, fintech etc.
So, yeah, go on with your livid denial of how disastrous that fateful decision was.
[1] https://pubs.aip.org/aip/cha/article/30/6/063145/1030255/Ana...
From a (conservative-leaning) American perspective - the Alan Turing Institute is a perfect example of broken European thinking. Real innovation happens at highly motivated private companies: private companies who have huge profits to make if they are successful, and who pay huge salaries to talented employees, talented employees who desire to be rich. Government institutions more frequently pay mediocre salaries, reward politicking rather than excellence, and are undermotivated to solve problems, because they are unlikely to receive a commensurate share of the benefits.
Now that Britain has left the EU, we Americans hope they will stop pissing money down the drain on useless government programs like the Alan Turing Institute and will instead reorient to a more business-friendly environment, one which recognizes that innovation happens mostly in successful businesses, and that government's role is to create that business-friendly environment.
I think one can trace most of the inventions in human history back to a scenario where more than one of the reasons above is true.
So going by this, even if we had a determined individual at the Alan Turing Institute, if none of the remaining three scenarios were true, their chances of success would be slim compared to their counterparts in places like Canada or the US.
Look at China and you can easily find an environment where a determined individual working on state priorities gets funding and support for critical technologies.
Probably the state priorities will change now, and in the future you can expect some new breakthroughs happening in the institute as well.
However, after this it's a somewhat confusing post if you click the links, because after criticizing the government for not being willing to focus on LLMs, he links to a press release about 100 million pounds being devoted to training LLMs.
It's unclear to me what the objection is to this project, or what is meant by "open source" that is different from a normal publicly funded government research project.
Quite what that course is remains to be seen.
My personal experience is that the UK loves giving out funding to established/safe organisations that it knows won’t cause a (positive or negative) splash.
I’ve seen very few UK government initiatives funding genuinely small organisations, but of those that I’ve seen, they have been well executed.
https://en.wikipedia.org/wiki/Turing_Institute
Founded by Donald Michie who worked at Bletchley Park!
Donald Michie once overheard his secretary explaining to someone over the phone how to pronounce his name, in a thick Scottish accent:
"It's Donald, as in Duck, and Michie, as in Mouse."
He was so pissed off, he refused to speak to her for a month! ;)
The Turing Institute put on the First Robot Olympics in September 1990 (check out the cool retro robot photos!):
https://en.wikipedia.org/wiki/First_Robot_Olympics
My favorite photo is the walking pizza box fascinating the kids:
https://en.wikipedia.org/wiki/First_Robot_Olympics#/media/Fi...
Edit: I never did use it, I'd got quite happy writing PostScript!
But if academia is not the institution in charge of innovating, then which is?
I'm still a bit confused about this. It looks like a contradiction to me, but I don't know what to make of it.
PhD programs say "come innovate" at institutions whose real job is to preserve knowledge?
> ...the question has become "Can machines do what we (as thinking entities) can do?" In other words, Turing is no longer asking whether a machine can "think"; he is asking whether a machine can act indistinguishably from the way a thinker acts. This question avoids the difficult philosophical problem of pre-defining the verb "to think" and focuses instead on the performance capacities that being able to think makes possible, and how a causal system can generate them.
The whole point of the test is that we'll NEVER build a machine that we think is "intelligent" or "thinking", because those words mean about as much as "love" or "meaning" in a scientific or engineering context.
Maybe the fact that we've built computers that pass the test means that we're close to AGI? Worth reexamining some priors :)
More: https://skybrian.substack.com/p/done-right-a-turing-test-is-...
Maybe this is good for the UK taxpayers and parliament to know. I don't think the rest of us - or anyone - should spend any more energy on this. Yet another publicly funded institution engaged in an affinity grift. News at 11. Stop funding it, the end.