There's so much demand around this, people are just super eager to get the information. I can understand why, because it was my favorite talk as well :)
Edit: the emoji at the end of the original sentence was not quoted. Funny how a smile makes the difference. Original tweet: https://x.com/karpathy/status/1935077692258558443
Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.
I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug - especially if the rewrite moves the code into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code more Claude-codeable or Cursor-codeable; ready to iterate even faster.
Software 3.0 isn't about using AI to write code. It's about using AI instead of code.
So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.
Recursive self-improvement is literally the endgame scenario - hard takeoff, singularity, the works. Are you really saying you're dissatisfied with the progress of those tools because they didn't manage to end the world as we know it just yet?
Yes and we've actually been able to witness in public the dubious contributions that Copilot has made on public Microsoft repositories.
3 to 5 companies instead of the hundreds of thousands who sell software now
https://leanpub.com/patterns-of-application-development-usin...
I kind of expect that from someone heading a company that appears to have sold the farm in an AI gamble. It’s interesting to see a similar viewpoint here (all biases considered).
What does this mean? An LLM is used via a software interface. I don’t understand how “take software out of the loop” makes any sense when we are using reprogrammable computers.
Started learning metal guitar seriously to forget about industry as a whole. Highly recommended!
> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration
> imagine inspecting them, and it's got an autonomy slider
> imagine works as like this binary array of a different situation, of like what works and doesn't work
--
Software 3.0 is imaginary. All in your head.
I'm kidding, of course. He's hyping because he needs to.
Let's imagine together:
Imagine it can be proven to be safe.
Imagine it being reliable.
Imagine I can pre-train on my own cheap commodity hardware.
Imagine no one using it for war.
The danger I see is related to the psychological effects caused by humans using LLMs on other humans. I don't think that's a scenario anyone is giving much attention to, and it's not that bad (bad, but not world-ending bad).
I totally think we should all build it. To be trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly be literate on it. The only true way of democratizing it. If it's not that way, it's a scam.
I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.
I stick by my general thesis that OSS will eventually catch up or the gap will be so small only frontier applications will benefit from using the most advanced models
It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.
Another way to put it: it usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.
I think it's a bit early to change your mind here. We love your 2.0; let's wait some more time till the dust settles so we can see clearly and up the revision number.
In fact I'm a bit confused about the number AK has in mind. Anyone else knows how he arrived at software 2.0?
I remember a talk by professor Sussman where he suggests we don't know how to compute, yet[1].
I was thinking he meant this,
Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs
If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?
Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.
If anything it seemed like the middle ground between AI boosters and doomers.
Software 2.0? 3.0? Why stop there? Why not software 1911.1337? We went through crypto, NFTs, web3.0, now LLMs are hyped as if they are frigging AGI (spoiler, LLMs are not designed to be AGI, and even if they were, you sure as hell won't be the one to use them to your advantage, so why are you so irrationally happy about it?).
Man, this industry is so tiring! What is most tiring is the dog-like enthusiasm of the people who buy it EVERY.DAMN.TIME, as if it's gonna change the life of most of them for the better. Sure, some of these are worse and much more useless than others (NFTs), but at the core of all of it is this cult-like awe we as a society have towards figures like the Karpathys, Musks and Altmans of this world.
How are LLMs gonna help society? How are they gonna help people work, create and connect with one another? They take away the joy of making art, the joy of writing, of learning how to play a musical instrument and sing, and now they are coming for software engineering. Sure, you might be 1-2% faster, but are you happier, are you smarter (probably not: https://www.mdpi.com/2076-3417/14/10/4115)?
"We need to rewrite a lot of software," ok... why?
"AI is the new electricity" Really now... so I should expect a bill every month that always increases and to have my access cut off intermittently when there's a rolling AI power outage?
Interesting times indeed.
Who wants to start a pool on when the first advertisement for "Software 3.0" goes up in an airport somewhere?
great name already
They want to onboard as many people as possible onto their stuff and make them as dependent on it as possible, so the switching costs are higher.
It's the classic scam. Look at what Meta is doing now that they've reached the end of the line and are trying to squeeze people for profitability:
- Bringing Ads to WhatsApp: https://apnews.com/article/whatsapp-meta-advertising-messagi...
- Desperately trying by any illegal means possible to steal your data: https://localmess.github.io/
- Firing all the people who built their empire: https://www.thestreet.com/employment/meta-rewards-executives...
- Enabled ethnic cleansing in multiple instances: https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...
If you can't see the total moral bankruptcy of Big Tech, you gotta be blind. Don't Be Evil my ass. To me, LLMs have only one purpose: dumb down the population, make people doubt what's real and what's not, and enrich the tech overlords while our societies drown in the garbage they create.
"Q: What does your name (badmephisto) mean?
A: I've had this name for a really long time. I used to be a big fan of Diablo2, so when I had to create my email address username on hotmail, I decided to use Mephisto as my username. But of course Mephisto was already taken, so I tried Mephisto1, Mephisto2, all the way up to about 9, and all were taken. So then I thought... "hmmm, what kind of characteristic does Mephisto possess?" Now keep in mind that this was about 10 years ago, and my English language dictionary consisted of about 20 words. One of them was the word 'bad'. Since Mephisto (the brother of Diablo) was certainly pretty bad, I punched in badmephisto and that worked. Had I known more words it probably would have ended up being evilmephisto or something :p"
Unbelievable. Perhaps some techies should read Goethe's Faust instead of Lord of the Rings.
> The more reliance we have on these models, which already is, like, really dramatic
Please point me to a single critical component anywhere that is built on LLMs. There's absolutely no reliance on these models, and ChatGPT being down has absolutely no impact on anything besides teenagers not being able to cheat on their homework and LLM wrappers not being able to wrap.
I love Andrej, but come on.
Writing essentially punch cards 70 years ago, writing C 40 years ago and writing Go or Typescript or Haskell 10 years ago, these are all very different activities.
The main thing that changed about programming is the social/political/bureaucratic side.
> LLMs make mistakes that basically no human will make, like, you know, it will insist that 9.11 is greater than 9.9, or that there are two r's in strawberry. These are some famous examples.
But you answered it: It’s a stupid mistake a human makes when trying to mock the stupid mistakes that LLMs make!
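To make the 9.11-vs-9.9 confusion concrete: the comparison flips depending on whether you read the strings as decimal numbers or as dotted version numbers, which is a plausible reason the mistake is so sticky. A minimal illustration (my own sketch, not from the talk):

```python
# "9.11" vs "9.9" under two different, equally common conventions.

def as_decimal(s: str) -> float:
    # Decimal reading: 9.11 and 9.9 are real numbers.
    return float(s)

def as_version(s: str) -> tuple[int, ...]:
    # Version reading: "9.11" means (major=9, minor=11).
    return tuple(int(part) for part in s.split("."))

print(as_decimal("9.11") > as_decimal("9.9"))  # False: 9.11 < 9.90
print(as_version("9.11") > as_version("9.9"))  # True: minor 11 > minor 9
```

So "9.11 > 9.9" is wrong arithmetic but correct version ordering; a model trained on both conventions can easily pick the wrong one.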
One bundles "AGI" with broken promises and bullshit claims of "benefits to humanity" and "abundance for all" when at the same time it takes jobs away with the goal of achieving 10% global unemployment in the next 5 years.
The other is an overpromised scam wrapped up in worthless minted "tokens" on a slow blockchain (Ethereum).
Terms like "Software 3.0", "Web 3.0" and even "AGI" are all bullshit.
It takes mouse clicks, sends them to the LLM, and asks it to render static HTML+CSS of the output frame. HTML+CSS is basically a JPEG here; the original implementation WAS JPEG, but diffusion models can't do accurate enough text yet.
My conclusions from doing this project and interacting with the result were: if LLMs keep scaling in performance and cost, programming languages are going to fade away. The long-term future won't be LLMs writing code, it'll be LLMs doing direct computation.