I think this is a very large overstatement. Many large problems still exist in robotics which cannot be papered over with LLMs. I'm familiar with problems in manipulation and affordable sensing, which will not be solved via LLMs, and which are fundamentally necessary for reliable and safe interaction with the real world. I'm confident there are many others. LLMs probably will help with high-level planning. But that's a subset of the many problems that keep robots from becoming mass-market products.
I think people are excited about robotics because of the many demo videos that have been coming out of various companies. However, almost all of these videos are smoke and mirrors. If you ask somebody who works on those demos, they will unashamedly tell you all the ways they worked around their technical limitations to get just the right footage. The PR departments are less upfront with that info.
This was very true of the dotcom bubble. The entire "web" was new, and the promise was everything you use it for today.
Pets.com was a laughing stock for years as an example of dotcom excess, and now we have chewy.com, successfully running the same model.
Webvan.com was a similar example of "excess," and now we have Instacart and others.
I looked up Webvan just now; the postmortem seems relevant:
"Webvan failed due to a combination of overspending on infrastructure, rapid and unproven expansion, and an unsustainable business model that prioritized growth over profitability."
The problem with the dotcom era was that we needed a cultural shift. I had my first internet date during the dotcom bubble, and I remember we would lie to people about how we met because the idea sounded so insane at the time to basically everyone. In 1999 it seemed kind of crazy to even use your real name online, let alone put your credit card into the web browser.
Put your credit card into the internet browser then a stranger brings you items in their van? Completely insane culturally in 1999. It would have sounded like the start of an Unsolved Mysteries episode to the average person in 1999. There was no market for that in 1999.
The lesson I take from dotcom is we had this massive bubble and burst over technology that already existed, worked flawlessly and largely just needed time for the culture to adapt to it.
The main difference this time is we are pricing in technology that doesn't actually exist.
I can't think of another bubble that was based on something that doesn't exist. The closest analogy I can think of is the railroad bubble but with the trains not actually existing outside of some vague theoretical idea that we don't actually know how to build. A bubble in laying down rail because of how big it will be when we figure out how to build the trains.
The only way you would get a bubble that stupid would be to have 50-100 years of art, stories and movies priming the entire population on the inevitability of the train.
I understand training is still costly, but it's not unimaginable for it to turn profitable as well if you believe they'll generate trillions in value by eliminating millions of jobs.
Do you have a test for this?
Or is it based on the presumption that reasoning skills cannot evolve, and can only be the result of "intelligent design"?
Is this the same crash that happened when tariffs were announced? What about the 2022 crash? What sort of crash are we talking about?
IMO the AI incumbents want to provoke smaller pullbacks on the way up to A) kill competitors who can't handle it and B) prevent a catastrophic crash that would actually hurt them.
That's why we see stuff like Thiel selling his NVDA holdings. He's just going to buy back in later.
https://www.stlouisfed.org/publications/regional-economist/a...
https://www.theguardian.com/business/1999/dec/20/nasdaq.efin...
This time, "AI" has been hyped up by the media more than tech was in 1999. The media has just reversed course in 2025 because they found out that most people hate "AI". In 2023-2024 it was mainly hype.
Yet the markets continued rapidly upward for another FOUR years. Shorting the high-flying stocks with negligible income in 1997 or 1998 would have been completely sensible. And it would have wiped you out, as you would have been years too early.
It just proves the adage: "The markets can remain irrational longer than you can remain solvent."
Today, the levels of (over-)investment relative to revenue are even more extreme. But when is the time to call it?
"The notion that Amazon.com will be allowed to corner the market in on-line book sales is wholly implausible."
It’s reasonably obvious that there are some very high expectations baked into certain equity valuations.
Leave it to the reader to take a view on whether it makes sense.
This isn't rocket science - does anybody sane believe that OpenAI will spend the $1.5 trillion they project? Quite a big chunk of Oracle's, Nvidia's, and others' projections are based on this $1.5 trillion. They are losing money, and their revenue is 1% of this figure.
And is it really overvalued if AGI is achieved? It sounds like the risk:reward profile of the gamble is already priced in, and the valuation is appropriate....But I guess if you take it to the logical conclusion: if AGI is achieved, everyone will be out of a job, so scarcity-based economics, based on scarce labor input, will itself have to be redone. Wild speculation there.
OpenAI, NVIDIA, Microsoft, Apple, Amazon, etc. obviously won’t collapse.
The money being thrown around is mind boggling. However, we’ve been throwing this type of money around for a handful of years now.
Tons of layoffs, homelessness, corruption, unemployment, difficulty for everyone to find a job, the incoming SNAP meltdown, the government shutdown and the mess it's going to cause for a while. None of it makes sense. It's pure crazy, because everything should have imploded by now. The tech layoffs and government layoffs alone should be causing a shitstorm of misery out there, but it's hidden somehow.
AI isn’t going away. It’s here to stay. It has already become embedded into so many core things we do every day. So many jobs are affected by it: marketing, graphic design, writing, so many jobs in Hollywood, like storyboarding and voiceover work, and the creative process of so many things. So many scenes in today's movies are CGI and it’s hard to tell, like 3-second scenes or CGI overlays. All of that will be created with a prompt in the next couple of years. Sure, some editing will be needed for the generated scenes, but with far less staff. The key takeaway here is that this equates to millions of jobs vanishing rather quickly. Core jobs that people of all ages based their careers on.
Don’t get lost in the details of AI generating garbage or not. The remaining companies that survive will continue to make it better. Don’t think for a second there will be a resurgence in these jobs coming back because everyone thinks a human can do it better.
All those data center GPU buildouts will not go to waste. We’re headed for a dystopia that’s even worse than the one we’re living in right now.
Wait until a few people are killed by police for stealing food from a grocery store because they are starving and need to feed their families. It will be the first time we get close to a civil war becoming a reality.
I'm amazed at the sheer volume of resources allocated to the above so quickly, for example; more than in any other boom I've seen. Society can raise trillions quickly if it means not employing people, I guess? Knowing basic economics, I don't buy the utopia case with AI for the majority of people, particularly the middle class. To be clear, most people I meet anecdotally (especially outside the tech space) are net negative on the changes AI is making to their lives, even if they are in jobs like trades.
It has had a profound impact; probably much more (no matter which camp wins the argument, bulls or bears on AI) than its inventors ever thought it would. It makes me wonder whether the people who invented this, once they see the end result, will be happy with their invention and the changes it will create in the world. If they aren't happy in hindsight, it says something about the unfortunately all-too-common naivete of techies in general about the impact of their work on society, economics, etc., until that impact eventually occurs.