It wasn't that long ago that there were posts on here about statisticians giving conference keynotes arguing that data science was basically old wine in new bottles, and being ridiculed for being behind the times, etc.
Now we see that basically no one actually knows what's going on. My guess is when the dust settles a lot of things will be explained, but it won't be as different from established statistical and information theory as some would make it out to be. That is, some of this is new discovery and figuring out new territory, and some of it is neglecting basics that have been there all along.
My guess is the next phase of this is basically comp sci ML research rediscovering mathematical statistics and information theory.
That’s what students taking ML courses for their CS degree don’t get. The people leading ML research today are statisticians.
Most of them learned their chops before neural networks became cool again. It’s extremely hard to publish anything good without a very solid background in maths.
There is no large divide today between ML and statistics where people could be forgetting things. It’s very much the same field. The main issue is that statistics was already a somewhat immature part of mathematics even before the ML craziness.
Classical statistics did not predict double descent phenomena.
The whole idea that you should not have more parameters than data points was wrong.
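A minimal sketch of what double descent looks like in practice (assuming numpy; the toy problem, feature counts, and seed are all arbitrary choices): fit noisy sine data with random ReLU features and a minimum-norm least-squares solution, then watch test error as the parameter count crosses the number of training points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = sin(x) plus noise.
n_train, n_test = 40, 200
x_train = rng.uniform(-np.pi, np.pi, n_train)
x_test = np.linspace(-np.pi, np.pi, n_test)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(n_train)
y_test = np.sin(x_test)

def relu_features(x, w, b):
    # Random ReLU feature map: phi_j(x) = max(0, w_j * x + b_j)
    return np.maximum(0.0, np.outer(x, w) + b)

test_errors = {}
for n_feat in [5, 20, 40, 80, 400]:
    w = rng.standard_normal(n_feat)
    b = rng.standard_normal(n_feat)
    Phi = relu_features(x_train, w, b)
    # pinv gives the minimum-norm least-squares solution, which
    # interpolates the training data once n_feat exceeds n_train
    # (the "overparameterised" regime).
    theta = np.linalg.pinv(Phi) @ y_train
    pred = relu_features(x_test, w, b) @ theta
    test_errors[n_feat] = np.mean((pred - y_test) ** 2)

print(test_errors)
```

On typical runs the test error peaks near n_feat ≈ n_train (the interpolation threshold) and then falls again in the heavily overparameterised regime, i.e. the double-descent curve, though the exact shape depends on the random draw.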
One of the techniques we learned in the course was, by the Prof's account, working extremely well and well documented; it's just that nobody really understood why it worked, and people were trying to prove why it got optimal solutions.
People could still float boats before Navier-Stokes? Yes, so people had boats, i.e. stuff. Now we have Navier-Stokes, which is science, not stuff.
Btw, Yann LeCun knows this much better than me, but neural networks are already ancient. The first "artificial neuron", the McCulloch & Pitts neuron, was described in 1943. Frank Rosenblatt created his Perceptron in 1958. Kunihiko Fukushima described the Neocognitron, daddy of the Convolutional Neural Network, in 1979. Hochreiter and Schmidhuber described Long Short-Term Memory networks in 1995. Yann LeCun himself used CNNs to learn to recognise handwritten digits in zip codes in 1989.
That's at least 30 years of research on deep neural nets, almost a human generation. Many of today's postgraduate students studying deep neural nets weren't even born when all this was being done. If this is just the experimentation phase before we pass on to the theorising and understanding phase, when are we going to get to the understanding phase? In 100 years?
The main difference between alchemy and chemistry is that chemistry follows the scientific method.
When an alchemist learned something new, they kept the information to themselves and tried to profit from it. They wanted to turn lead into gold, and then keep the secret method to themselves.
A chemist profits by sharing the new information.
Could they be the modern equivalent of ancient alchemists? And would it be such a bad thing if they turned our lives into gold-plated jumble? Yann LeCun is co-recipient of the 2018 Turing Award for his work on neural networks. He argues that AI research is just a necessary adolescent phase characterized by trial and error, confusion, overconfidence and a lack of overall understanding. We have nothing to fear and much to gain from embracing this approach.
Alchemy is considered to have been a useful precursor to modern chemistry, more proto-science than hocus-pocus. We could enter a golden age of modern-day alchemy, in the best sense of the word, but we should never forget the cautionary lessons of history.
Like... 3 or 4? What's that, two average wallets?
(Leaving aside the question of whether scamming a few hundred dollars out of someone's debit account is practical in any sense compared with all the other ways you could legally make way more money with the same skillset, of course.)