Then there's a tendency to be so 'anti' that there's an assumption that anyone reporting that the tools are accomplishing truly impressive and useful things must be an 'AI booster' or shill. Or they assume that person must not have been a very good engineer in the first place, etc.
Really is one of those examples of the quote, "In the beginner's mind there are many possibilities, but in the expert's mind there are few."
It's a rapidly evolving field, and unless you actually spend some time kicking the tires on the models every so often, you're just basing your opinions on outdated experiences or what everyone else is saying about it.
And often when the two sides are arguing it's tricky to tell which is which, because whether or not it can do something isn't totally black and white. There are things it can sometimes do, and you can argue either way about whether those count as within its capabilities.
Are the models that exist today a "true Scotsman" for you?
There are also those of us who have used them substantially, and seen the damage that causes to a codebase in the long run (in part because you lose the gains of having someone who understands the codebase).
There are also those of us who just don’t like the interface of chatting with a robot instead of just solving the problem ourselves.
There are also those of us who find each generation of model substantially worse than the previous generation, and find the utility trending downwards.
There are also those of us who are concerned about the research coming out about the effects of using LLMs on your brain and cognitive load.
There are also those of us who appreciate craft, and take pride in what we do, and don’t find that same enjoyment/pride in asking LLMs to do it.
There are also those of us who worry about offloading our critical thinking to big corporations, and becoming dependent on a pay-to-play system that is currently being propped up by artificially lowered prices, with “RUG PULL” written all over it.
There are also those of us who are really concerned about the privacy issues, and don’t trust companies that are hundreds of billions of dollars in debt to some of the least trustworthy individuals with that data.
Most of these issues don’t require much experience with the latest generation.
I don’t think the intention of your comment was to stir up FUD, but I feel like it’s really easy for people to walk away with that from this sort of comment, so I just wanted to add my two cents and tell people they really don’t need to be wasting their time every 6 weeks. They’re really not missing anything.
Can you do more than a few weeks ago? Sure? Maybe? But I can also do a lot more than I was able to a few weeks ago as well not using an LLM. I’ve learned and improved myself.
Chances are if you’re not already using an LLM it’s because you don’t like it, or don’t want to, and that’s really OK. If AGI comes out in a few months, all the time you would have invested now would be out of date anyway.
There’s really no rush or need to be tapped in.
I understand all of what you said, but I can't get over the fact that the term AI is being used for these architectures. It seems like the industry is just trying to do a cool parlor trick in convincing the masses this is somehow AI from science fiction.
Maybe I'm being overly cynical, but a lot of this stinks.
The largest and most successful models all appear to have been built unethically. I want nothing to do with these companies or the slime who run them, and I will leave my professional field before I'll become their unwilling user.
I don't want machines to write my code. I want to write it. I want to solve the problems and find the bugs myself. The engineers I work with who rely most heavily on these tools all seem to be losing their sharpness and problem-solving ability. Many of them praise the models for making it easy to write tests. (Software engineers who treat testing with this kind of carelessness and dismissiveness should lose their jobs.)
I like what I do. I like the way I do it. The day I have no choice but to do my work, in essence, by scheduling a fucking meeting with a chipper chat bot and telling it what to do will be the day I retire and start a new career. I can't imagine a drearier way to work with technology.
It’s incredibly powerful and will just clearly be useful. I don’t believe it’s going to replace intelligence or people but it’s just obviously a remarkable tool.
But I think at least part of the dynamic is that the SV tech hype booster train has been so profoundly full of shit for so long that you really can’t blame people for skepticism. Crypto was and is just a giant and elaborate grift, to name one example. Also guys like Altman are clearly overstating the current trajectory.
The dismissive response does come with some context attached.
I feel like I see more dismissive comments now than previously. As if people, initially confused, have since formed a firm belief. And now new facts don't really change it; they just entrench them in their chosen belief.
I found I had better luck with ChatGPT 3.5's coding abilities. What the newer models are really good at, though, is doing the high level "thinking" work and explaining it in plain English, leaving me to simply do the coding.
And that all wouldn't be a problem if it wasn't for the wave of bots that makes the crypto wave seem like child's play.
I've got a modest tech following, and you wouldn’t believe the amounts I’m offered to promote the most garbage AI companies.
They're still pretty dumb if you want them to do anything (i.e. with MCPs), but they're not bad at writing and code.
But there are plenty of us who try and walk a middle course. A lot of us have changed our opinions over time. ("When the facts change, I change my mind.") I didn't think AI models were much use for coding a year ago. The facts changed. (Claude Code came out.) Now I do. Frankly, I'd be suspicious of anyone who hasn't changed their opinions about AI in the last year.
You can believe all these things at once, and many of us do:
* LLMs are extremely impressive in what they can do. (I didn't believe I'd see something like this in my lifetime.)
* Used judiciously, they are a big productivity boost for software engineers and many other professions.
* They are imperfect and make mistakes, often in weird ways. They hallucinate. There are some trivial problems that they mess up.
* But they're not just "stochastic parrots." They can model the world and reason about it, albeit imperfectly and not like humans do.
* AI will change the world in the next 20 years.
* But AI companies are overvalued at the present time and we're most likely in a bubble which will burst.
* Being in a bubble doesn't mean the technology is useless. (c.f. the dotcom bubble or the railroad bubble in the 19th century.)
* AGI isn't just around the corner. (There's still no way models can learn from experience.)
* A lot of people making optimistic claims about AI are doing it for self-serving, boosterish reasons, because they want to pump up their stock price or sell you something.
* AI has many potential negative consequences for society and mental health, and may be at least as nasty as social media in that respect.
* AI has the potential to accelerate human progress in ways that really matter, such as medical research.
* But anyone who claims to know the future is just guessing.
1. When (if) AGI will arrive. It's likely going to be smeared out over a couple of months to years, but relative to everything else, it's a historical blip. This really is the most contentious belief, with the most variability. It is currently predicted to be 8 years away [1].
2. What percentage of jobs will be replaceable with AGI? Current estimates are between 80-95% of professions. The remaining professions "culturally require" humans. Think live performance, artisanal goods, in-person care.
3. How quickly will AGI supplant human labor? What is the duration of replacement, from inception to saturation? Replacement won't happen evenly; some professions are much easier to replace with AGI, some much more difficult. Let's estimate a 20-30 year horizon for the most stubborn professions.
What we have is a ticking time bomb of labor change at least an order of magnitude greater than the transition from an agricultural economy to an industrial economy or from an industrial economy to a service economy.
Those happened over the course of several generations. Society: culture, education, the legal system, the economy, were able to absorb the changes over 100-200 years. Yet we're talking about a change on the same scale happening 10 times faster, within the timeline of one's professional career. And still, with previous revolutions we had incredible unrest and social change. Taken as a whole, we'll have possibly the majority of the economy operating outside the territory of society, the legal system, and the existing economy. A kid born on the "day" AGI arrives will become an adult in a world as profoundly different as if they had been born on a farm in 1850 and reached adulthood in a city in 2000.
1. https://www.metaculus.com/questions/5121/date-of-artificial-...
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go on, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.
Unemployment is still near all-time lows, and this will persist for some time, as we have a structural demographic problem with massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
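For scale, a new REST endpoint exposing five CRUD methods is roughly the following amount of code (a minimal Flask sketch; the resource name and the in-memory store are invented for illustration):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    items = {}     # in-memory store, stands in for a real database
    next_id = 1

    @app.post("/items")                  # Create
    def create_item():
        global next_id
        item = {"id": next_id, **request.get_json()}
        items[next_id] = item
        next_id += 1
        return jsonify(item), 201

    @app.get("/items")                   # Read (list)
    def list_items():
        return jsonify(list(items.values()))

    @app.get("/items/<int:item_id>")     # Read (one)
    def get_item(item_id):
        item = items.get(item_id)
        return (jsonify(item), 200) if item else ("not found", 404)

    @app.put("/items/<int:item_id>")     # Update
    def update_item(item_id):
        if item_id not in items:
            return "not found", 404
        items[item_id] = {"id": item_id, **request.get_json()}
        return jsonify(items[item_id])

    @app.delete("/items/<int:item_id>")  # Delete
    def delete_item(item_id):
        return ("", 204) if items.pop(item_id, None) else ("not found", 404)

That's the scale of task being discussed: small, well-trodden, and fully checkable, which is what makes the failure rate notable.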
Those jobs are probably still a couple of decades plus away from displacement, some possibly never, and we will need them in higher numbers. And perhaps it's ironic, because these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time, and I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high-quality landscaping exists for the average person, made possible because the equivalent of today's Uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
Ramble ramble ramble
Won't companies always want to compete with one another? So simply using AI won't be enough. We will always want better and better software, more features, etc., so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to AutoCAD, there have been a lot of tools that were expected to decrease the amount of work but ended up actually increasing it, due to new capabilities and the constant demand for innovation. I suppose the difference hinges on whether we think AI will continue to get really good, or whether it'll become SO good that it is plug-and-play and completely replaces people.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to cities looking for industrial jobs; US agriculture employed 50% of the workforce in 1880 and only 10% in 1930.
The advent of the internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in, say, 1970 vs 2020: the companies that dominated in 1970 were noted for having hard blue-collar labor jobs that paid enough to keep a single-earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers, with some of the lowest pay, fairly close to minimum wage, and absolutely the worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
> This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs. AI is a lot like this: in coding, for instance, you still need to have some sense of good systems design, etc., and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
The conclusion, sadly, is that CEOs will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
Go to any war-torn country or collapsed empire (the Soviet Union, say). I have seen both and grew up in them myself: you get desperation, people giving up, alcohol (the famous "X"-cross of the birth rate dropping and deaths rising), drugs, crime, corruption and warlordism. Rural communities are hit first and totally vanish, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever last shelters they had remain, not even their prime-time architecture. You can drive hundreds or thousands of kms across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there; these days not a single human is left. This is what is coming.
IF there is intellectual/office work that remains complex enough to not be tackled by AI, we compete for those. Manual labor takes the rest.
Perhaps that’s the shift we’ll see: nowadays the guy piling up bricks makes a tenth of the architect’s salary; that relation might invert.
And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.
Could you a priori in 1800 have predicted the existence of graphics artists? Street sweepers? People who drive school buses? The whole infrastructure around trains? Sewage maintainers? Librarians? Movie stuntmen? Sound engineers? Truck drivers?
Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom physiological ones. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI will be less needed.
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all-powerful AI in their pockets... what could they do if they lose their jobs?
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
We assume there must be something to transition to. Very well, there can be nothing.
We assume people will transition. Very well, they may not transition at all and may "disappear" en masse (the same effect as a war or an empire collapse).
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild, and real people can get hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do when tax income consequently goes down, while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AIs do most of the work. Working communism for the lower classes, while the super rich stay super rich (as in actually existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
Just because X can be replaced by Y today doesn't imply that it can be in a future where we are aware of Y and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because "made by AI" is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
I'm not saying it will happen, but it's possible to imagine a future in which AI videos are generally better, and if that happens, almost by definition, people will favor them (otherwise they aren't "better").
The negative label is the old world pulling the new one back, it rarely sticks.
I'm old enough to remember the folks saying "We used to have to paint the background blue" and "All music composers need to play an instrument" (or turn into a symbol).
If you seriously think this, you don’t understand the YouTube landscape. Shorts - which have incredible view times - are flooded with AI videos. Most thumbnails these days are made with AI image generators. There’s an entire industry of AI “faceless” YouTubers who do big numbers with nobody in the comments noticing. The YouTuber Jarvis Johnson made a video about how his feed has fully AI generated and edited videos with great view counts: https://www.youtube.com/watch?v=DDRH4UBQesI
What you’re missing is that most of these people aren’t going onto Veo 3, writing “make me a video” and publishing that; these videos are a little more complex in that they have separate models writing scripts, generating voiceover, and doing basic editing.
Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.
Even though for most comment chains on HN to make sense, readers certainly have to pretend some meaningful text was produced beyond happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
Whether it's poor videos made by a human directly, or poor videos made by a human using AI.
The use of software like AI to create videos with sloppy quality and results reflects on the creator's skill.
Currently the use of AI leans towards sloppy because of content creators' lower digital literacy with AI; that will change once they get into it and realize how much goes into videos.
It's the same issue with propaganda. If people say a movie is propaganda, that means the movie failed. If a propaganda movie is good propaganda, people don't talk about that. They don't even realize. They just talk about what a great movie it is.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
That is maybe a bubble around the internet. In my experience, most programmers in my environment rarely use it and certainly aren't dependent on it. They also don't just do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute this point.
With many engineers using copilots and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, the em dash thing requires additional prompts and instructions to override. Doing anything unusual would require more effort.
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
What else is needed then?
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking and I hold you to a higher standard, antirez.
> What a silly premise. Markets don't care.
You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.
I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.
Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.
> “not even wrong” - nice, one of my favorites from Pauli.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
It's amazing how widespread this belief is among the HN crowd, despite being a shameless ad hominem with zero evidence. I think there are a lot of us who assume the reasonable hypothesis is "LLMs are a compelling new computing paradigm, but researchers and Big Tech are overselling generative AI due to a combination of bad incentives and sincere ideological/scientific blindness. 2025 artificial neural networks are not meaningfully intelligent." There has not been sufficient evidence to overturn this hypothesis and an enormous pile of evidence supporting it.
I do not necessarily believe humans are smarter than orcas, it is too difficult to say. But orcas are undoubtedly smarter than any AI system. There are billions of non-human "intelligent agents" on planet Earth to compare AI against, and instead we are comparing AI to humans based on trivia and trickery. This is the basic problem with AI, and it always has had this problem: https://dl.acm.org/doi/10.1145/1045339.1045340 The field has always been flagrantly unscientific, and it might get us nifty computers, but we are no closer to "intelligent" computing than we were when Drew McDermott wrote that article. E.g. MuZero has zero intelligence compared to a cockroach; instead of seriously considering this claim AI folks will just sneer "are you even dan in Go?" Spiders are not smarter than beavers even if their webs seem more careful and intricate than beavers' dams... that said it is not even clear to me that our neural networks are capable of spider intelligence! "Your system was trained on 10,000,000 outdoor spiderwebs between branches and bushes and rocks and has super-spider performance in those domains... now let's bring it into my messy attic."
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that contain hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking it anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
No, I'm not worried about losing "control or relative status in the world". (I'm not worried about losing anything, frankly - personally I'm in a position where I would benefit financially if it became possible to hire AGIs instead of humans.)
You don't get to just assert things without proof (LLMs are going to become AGI) and then state that anyone who is skeptical of your lack of proof must have something wrong with them.
So many engineers are so excited to work on and with these systems, opening 20 PRs per day to make their employers happy, going “yes boss!”
They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.
I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.
Unless you have your own fully stocked private bunker with security detail, you will be affected.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a halfway house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things if they believe in a "jobless utopia"?
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
There is incredible pressure to release new models which means there is incredible pressure to game benchmarks.
Tbh a plateau is probably the best scenario - I don't think society will tolerate even more inequality plus massive job displacement.
We are not there, yet, but if AI could replace a sizable amount of workers, the economic system will be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of human size. There will be many more companies that are much smaller, because you don't need as many workers to do the same job. Instead of needing 1000 engineers to build a new product, you'll need 100. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big. I.e., those 9 new companies could never be profitable if each required 1000 engineers, but each can totally sustain itself with 100.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
Technological advances have consistently unlocked new, more specialized and economically productive roles for humans. You're absolutely right about lowering costs, but headcounts might shift to new roles rather than reducing overall.
When the radio came people almost instantly stopped singing and playing instruments. Many might not be aware of it but for thousands of years singing was a normal expression of a good mood and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing it by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
Back in the day singing was what everybody did to pass the time. (Especially in boring and monotonous situations.)
I think the scenario where companies that own AI systems don't get benefits from employing people, so people are poor and can't afford anything, is paradoxical, and as such, it can't happen.
Let's assume the worst case: Some small percentage of people own AIs, and the others have no ownership at all of AI systems.
Now, given that human work has no value to those owning AIs, those humans not owning AIs won't have anything to trade in exchange for AI services. Trade between these two groups would eventually stop.
You'll have some sort of two-tier economy where the people owning AIs will self-produce (or trade between them) goods and services. However, nothing prevents the group of people without AIs from producing and trading goods and services between them without the use of AIs. The second group wouldn't be poorer than it is today; just the ones with AI systems will be much richer.
This worst-case scenario is also unlikely to happen or last long (the second group will eventually develop its own AIs or already have access to some AIs, like open models).
If models got exponentially better with time, then that could be a problem, because at some point, someone would control the smartest model (by a large factor) and could use it with malicious intent or maybe lose control of it.
But it seems to me that what I thought time ago would happen has actually started happening. In the long term, models won't improve exponentially with time, but sublinearly (due to physical constraints). In which case, the relative difference between them would reduce over time.
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
The initial investment? Likely. But there have to be more efficient ways to build intelligence, and ASI will figure it out.
It did not take trillions of dollars to produce you and me.
When we discuss how LLMs failed or succeeded, as a norm, we should start including:

- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)
Right now, we hear both that Claude is magic and that LLMs are useless, but never how we move between these two states.
This level of uncertainty, when economy-making quantities of wealth are being moved, is “unhelpful”.
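Concretely, a disclosure along those lines might look like this (all fields and values invented for illustration):

    Model/tool: Claude Sonnet via Claude Code
    Language/framework: Python / Django
    Task: add pagination to an existing admin endpoint
    Experience level: highly familiar
    Outcome: worked on the second attempt; one hallucinated helper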
If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.
They could of course be right. But they don't have any more insight than any other average smart person does.
What economists have taken seriously the premise that AI will be able to do any job a human can, more efficiently, and fully thought through its implications? I.e., a society where (human) labor is unnecessary to create goods and provide services, and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply exactly that. The only ones I know who have seriously considered it are Hanson and Cowen; it definitely feels understudied.
The list of advantages human labor hold over machines is both finite and rapidly diminishing.
""" Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. """
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand your comment. It would be adding extra information to the conversation. But this is not that. The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counterargument.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out – whether that's Universal Basic Income (UBI) or something along those lines, otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen: a new industry requiring new skills might emerge in the fallout of white-collar automation. Not to mention, LLMs only work in the digital realm. Handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.
edit: ability without accountability is the catchier motto :)
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
Why would that be any different with AI?
Would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? The answer today is "obviously not". I don't know if this will ever change, tbh.
“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of “if this, then that”; the summary of it would be: if AI continues to grow, it might become all-encompassing.
To me it reads like a baseless article written by someone too blinded by their love for AI to see what makes a good blog post, but not yet blinded enough to claim ‘AGI is right around the corner’. Pretty baseless, but safe enough to rest on conditionals.
- Today, AI is not incredibly useful, and we are not 100% sure that it will improve forever, especially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not doing their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human dissatisfaction: once we get these things we’ll want whatever it is we don’t have.
Maybe that’s the problem we should focus on solving…
What makes you think the machines will be both smarter and better than us, but also our slaves working to make human society better?
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
But the hyper-specialized geek who has 4 kids and has to pay off the loan on his house (which he bought according to his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non-geeks!).
It's like the cards have been switched: those in the upper socioeconomic class will get thrown to the bottom. And that looks like a lost generation.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
It's not reliable because it's not intelligent.
Language is a very powerful tool for transformation, we already knew this.
Letting it loose on this scale without someone behind the wheel is begging for trouble imo.
These AI "productivity" tools are straight up eliminating jobs, and in turn the wealth that otherwise supported families and humans and powered the economy. They are directly "removing" humans from the workforce and from whatever that work was supporting.
Not even a hard takeoff is necessary for collapse.
A more interesting piece would be built around: “AI is disruptive. Here’s what I’m personally doing about it.”
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the market's reaction; markets are inherently good integrators and bad predictors, and we should not expect to learn anything about the future by looking at stock movements.
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [Gpt-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
I think we might see AI being much, much more effective with embodiment.
As a large language model developed by OpenAI I am unable to fulfill that request.
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
Nonetheless I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.
It doesn't seem like they ever really wanted to be a consumer company. Even in the GPT-5 launch they kept going on about how surprised they are that ChatGPT got any users.
If we factor in that LLMs only exist because of Google search, which indexed and collected all the data on the WWW, then LLMs are not surprising. They only replicate what has been published on the web; even the coding agents are only possible because of free software and open source, code like Redis that has been published on the WWW.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak: if, say, MS says they can save money because they don't need XYZ anymore because AI can do it, then XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Software is now free, and all people care about is the hardware and the electricity bills.
> Assuming LLMs reach this peak...
Generative AI != Artificial General Intelligence
> Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.

I would posit that understanding is "the current moat."
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work in creating material things by people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.
I'd guess, within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skills left, and they are incapable of learning anything that cannot be done by AI.
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
I'm not sure how someone can seriously write this after the release of GPT-5.
Models have started to plateau since ChatGPT came out (3 years ago) and GPT-5 has been the final nail in this coffin.
But in terms of wow factor, it was a step change on the order of GPT-3 -> GPT-4.
So now they're stuck slapping the GPT-5 label on marginal improvements because it's too awkward to wait for the next breakthrough now.
On that note, o4-mini was much better for general usage (speed and cost). It was my go-to for web search too, significantly better than 4o and only took a few seconds longer. (Like a mini Deep Research.)
Boggles the mind that they removed it from the UI. I'm adding it back to mine right now.
But realistically, you're not going to have a personal foundry anytime soon.
Pretty sure the top 1% of, say, the USA already owns much more than that.
In which science fiction were the dreamt-up robots as bad?
This is demonstrably wrong. An easy refutation to cite is:
https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...
As to the rest of this pontification, well... It has almost triple the number of qualifiers (5 if's, 4 could's, and 5 will's) as paragraphs (5).
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words, deterministic systems can use LLMs, and LLMs can use deterministic systems, all via natural language.
This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.
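A minimal sketch of that loop in Python (call_llm is a stand-in for whatever model API you use; the tool name and the reply format are invented for illustration):

    import json, shutil

    def disk_usage(path: str) -> str:
        # Deterministic system: ordinary code with fixed, testable behavior.
        total, used, free = shutil.disk_usage(path)
        return json.dumps({"total": total, "used": used, "free": free})

    TOOLS = {"disk_usage": disk_usage}  # deterministic tools exposed to the LLM

    def call_llm(prompt: str) -> dict:
        # Non-deterministic system: placeholder for a real model API call.
        # Assumed to reply {"tool": ..., "args": {...}} or {"answer": ...}.
        raise NotImplementedError("wire up a model provider here")

    def handle(user_request: str) -> str:
        reply = call_llm(user_request)   # natural language in
        while "tool" in reply:           # the LLM asks deterministic code for help
            result = TOOLS[reply["tool"]](**reply["args"])
            reply = call_llm(user_request + "\nTool result: " + result)
        return reply["answer"]           # natural language out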
LLMs, however, will always be limited to exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different, because it's limited only by what we can train the AI to do, and that is limited by what new knowledge we can create.
Anyone who works with AI every day knows that any idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment from AI is absurd.
It's a very constrained task: you can do lots of reliable checking on the output at low cost (linters, formatters, the compiler), the code is mostly reviewed by a human before being committed, and there's insulation between the code and the real world, because ultimately some company or open source project releases the code that's then run, and they mostly have an incentive to not murder people (Tesla excepted, obviously).
It seems like lots of programmers are taking that information and deeply overestimating how useful it is at anything else, and these programmers - and the marketing people who employ them - are doing enormous harm by convincing e.g. HR departments that it is of any value for dealing with complaints, or, much more dangerously, convincing governments that it's useful for how they deal with humans asking for help.
This misconception (and the deliberate lying by people like OpenAI) is doing enormous damage to society, and it is going to do much, much more.
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
For example, one of the tasks we could put ASI to work on is designing implants that would go into the legs, powered by light or electric induction, using ASI-designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP, so as to power humans with pure electricity. We are very energy efficient. We use about 3 kilowatt-hours of power a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere, and the whole thing could be powered by solar panels or a small modular nuke reactor. I call this "The Electrobiological Age", and it will unlock whole new worlds for humanity.
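The 3 kWh/day figure is at least the right order of magnitude; a quick sanity check from the usual ~2,000 kcal/day dietary intake:

```latex
2000\,\mathrm{kcal} \times 4184\,\mathrm{J/kcal} \approx 8.4\,\mathrm{MJ}
\quad\Rightarrow\quad
\frac{8.4\times 10^{6}\,\mathrm{J}}{3.6\times 10^{6}\,\mathrm{J/kWh}}
\approx 2.3\,\mathrm{kWh/day}
\;\;(\approx 100\,\mathrm{W}\ \text{average power})
```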
If AI technology continues to improve and becomes capable of learning and executing more tasks on its own, this revolution is going to be very unlike the past ones.
We don't know if or how our current institutions and systems will be able to handle that.
I'm flying, of course; this is just a weird theory I've had in the back of my head for the past 20 years, and it seems like we're getting there.
Antirez you are the best
Maybe a "loss of jobs" is what we need so we can go back to working for ourselves, cooking our own food, maintaining our own houses, etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not so different from the challenges of today.
I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate us in whichever direction we decide (or have decided for us) to go.
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
LLMs don't _understand_ "the human language". They don't _understand_ anything. It would be really great if everyone would keep their heads and not lose sight of this fundamental truth.
It's not really clear to me to what extent LLMs even do *understand* human language. They are very good at saying things that sound like a responsive answer, but the head-scratching, hard-to-mentally-visualise aspect of all of this is that this isn't the same thing at all.
Or am I just too idealistic ?
Sidenote, I never quite understand why the rich think their bunkers are going to save them from the crisis they caused. Do they fail to realize that there's more of us than them, or do they really believe they can fashion themselves as warlords?
But seeing it in action now makes me seriously question “human intelligence”.
Maybe most of us just aren’t as smart as we think…
They are tremendous tools, but it seems like they generate a near-equal amount of work from the stuff they save time on.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography, and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
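If anyone wants to reproduce that kind of with/without comparison, the arithmetic is just cap-weighted returns with the big names dropped and the remaining weights renormalized. A sketch with purely hypothetical weights and returns (not real S&P data):

```typescript
interface Constituent { name: string; weight: number; ret: number; } // weights sum to 1

// Cap-weighted return of an index, optionally excluding a set of names
// and renormalizing the remaining weights.
function indexReturn(cs: Constituent[], exclude: Set<string> = new Set()): number {
  const kept = cs.filter(c => !exclude.has(c.name));
  const totalWeight = kept.reduce((sum, c) => sum + c.weight, 0);
  return kept.reduce((sum, c) => sum + (c.weight / totalWeight) * c.ret, 0);
}

// Illustrative numbers only: two mega-caps with big gains, a flat remainder.
const index: Constituent[] = [
  { name: "MegaTech1",   weight: 0.07, ret: 0.40 },
  { name: "MegaTech2",   weight: 0.06, ret: 0.35 },
  { name: "RestOfIndex", weight: 0.87, ret: 0.05 },
];

console.log(indexReturn(index));                                      // headline index
console.log(indexReturn(index, new Set(["MegaTech1", "MegaTech2"]))); // ex-top names
```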
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI beyond its capabilities. The AI will get better at these new things, and the cycle repeats. There's so much humans know, and so much more that we don't.
I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we do great things or end humanity. Over the last 100 years, we've started thinking more about "how" to do something rather than "why", because the "how" keeps getting easier. Today it's much easier, and tomorrow it will be easier still. So nobody has the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong, or have irreversible consequences, we are screwed.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
The only question is how much fat there is to trim as middle management is wiped out, because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think they'll be deriving revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they don't, the corporation will go bankrupt competing with companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.
At some point AI agents will cease to be sycophantic; when fed the priors for a company's current situation, they will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated, instead of simply puffing up their internal political position. That might involve a rather surprising set of actions, possibly even leading to the executive being fired, if the AI determines they are getting in the way of the goal [1].
Fun times ahead.
0. https://web.archive.org/web/20180705215319/https://www.econo...
1. https://en.wikipedia.org/wiki/The_Evitable_Conflict
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances.
But for the love of god, do not tightly bind them to your products (Kagi does it alright; they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with zero human oversight (we know you want to; the economics work out nicely for you, with no accountability). As is, people already get banned far too easily by your automated systems.
This really misunderstands what the stock market tracks
That's because they are. The stock market is all about narrative.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence.
Yes it is: the mega companies that will be providing the intelligence are Nvidia, AMD, TSMC, ASML; add your favourite foundry.
Uh, last time I checked, "markets" around the world are a few percent from all-time highs.
Most recently down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some less capable ones at times as well), trying the exact same queries for code changes across all three main models for a majority of the queries. I found myself using Claude most, but it still wasn't drastically better than the others, and it still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working. Over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up, because fixing the bugs was getting way too annoying. Most "fixes", as I later found when I got into the weeds, were wrong, rested on wrong assumptions, or made changes that seemed to fix the problem at the surface while introducing more bugs and random garbage, despite my giving a ton of context and instructions on why things are supposed to be a certain way. I was constantly fighting with the model. It would've been much easier to do much more on my own and use it a little.
Another project was in TypeScript, where I did actually use my brain, not just vibe-code. Here, AI models were helpful because I mostly used them to explain stuff, and I did not let them make more than a few lines of code changes at a time. There was a portion of the project that I kinda "isolated" and completely vibe-coded; I don't mind if it breaks, as it is not critical. It did save me some time, but I certainly could've done it on my own with a little more time, while having code that I fully understand and can edit.
So the way I see it, these models are for research/prototyping/throwaway kind of stuff right now. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. When I asked a follow-up question about why it was deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow-up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is, but apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc.; I just ask the LLM to suggest what to do, or to rubber duck, or whatever. Do all these gains add up to massive job displacement? I don't know. Maybe. If it's saving 10% of the time for me and everyone else, I guess we need 10% fewer people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before, depending on our jobs and how they're affected, but is it a fundamental shift? Are these models, or their future capabilities, human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
Regardless of their flaws, AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were science fiction just a few years ago. It was not even clear that we were so near to creating machines that could understand the human language, write programs, and find bugs in a complex code base: bugs that escaped the code review of a competent programmer.
Since LLMs, and deep models in general, are poorly understood, and even the most prominent experts in the field have failed miserably, again and again, to calibrate expectations (with incredible errors on both sides: understating or magnifying what was near to come), it is hard to tell what will come next. But even before the Transformer architecture, we were seeing incredible progress for many years, and so far there is no clear sign that the future will not hold more. After all, a plateau of the current systems is possible and very credible, but at this point it would likely stimulate massive research efforts into the next step in architectures.
However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots: their pattern matching says that previous technology booms created more business opportunities, so investors are primed to think the same will happen with AI. But this is not the only possible outcome.
We are not there yet, but if AI could replace a sizable share of workers, the economic system would be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch. Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence: either AI will eventually be a commodity, or governments will have to do something about such an odd economic setup (a setup where a single industry completely dominates all the others).
The future may reduce economic prosperity and push humanity to switch to some different economic system (maybe a better one). Markets don't want to accept that so far: even though economic forecasts are cloudy, wars are destabilizing the world, and AI timelines are hard to guess, stocks regardless continue to go up. But stocks are insignificant in the vast perspective of human history, and even systems that lasted far longer than our current institutions were eventually eradicated by fundamental changes in society and in human knowledge. AI could be such a change.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."