That's because we prefer improved living standards over less work. If we only had to live by the standards of one century ago or more, we could likely accomplish that by working very little.
That's more because we are never given the chance. We only get to keep working or fall out of the rat race and, at best, be relegated to a Big Lebowski-style pariah existence.
Is that trend still true? I can look from the '50s to the 2000s and buy into it. I'm not convinced it holds by all metrics beyond the 2000s, and especially beyond the 2020s. Yes, we have better tech, but is life actually better right now? I think you could make the argument that we were a healthier and happier society in that sweet spot from about 1995 to 2005. At least in NA.
We've seen so much technological innovation, but cost of living has outpaced wages, division is rampant, and the technological innovations we do have have mostly been turned against us to enshittify our lives and entrap us in SaaS hell. I'd argue medical science has progressed, but it has also become more inaccessible, and, somehow, people believe in western medicine LESS. It doesn't help that we've also seen a decline in education.
So do we still prefer improving our standards of living in the current societal framework?
Given actual alternatives, workers have made their preferences clear.
Culture also plays a part - America is uniquely mercantile and business first. Workers and citizens in other countries have made different choices.
Yeah, I know many people who do in the small town I live in. Mostly elderly who are still used to it, but also some young people who want to work just enough to buy what they need and not one minute more. I could've retired before 20 if I'd enjoyed that. Now I enjoy it more; that kind of lifestyle is kind of relaxing, not because of not working but because of needing nothing outside your humble possessions.
Living quarters, transportation, healthcare, food. What were these figures in 1926, and how much work is needed to achieve them?
Last year the US voted to hand over the reins, in all branches of government, to a party whose philosophy is to slash government spending and reduce people’s dependence on the government.
To all the US futurists who are fantasizing about a post-scarcity world where we no longer work, I’d like to understand how that fits in with the current political climate.
There is zero actual intentional reduction of dependence, just elimination of government support.
But that is a good thing.
A lot of people would not choose to work for half the time as they do now because they do actually like to buy things.
The issue is that virtually no company offers that deal unless you already have notoriety or money at the level of retiring anyway.
It's not our production capabilities that keep people hungry; it's either greed or the problem of distribution.
Automation will definitely amplify production but it'll certainly continue to make rich richer and poor, well, the same. As inequality grows, so too does the authoritarian need to control the differential.
I'm 0.6 centuries old and have never heard that said of existing tech. Human-level AI could presumably do human work by definition, but that isn't the case before we actually get there, including now.
https://www.npr.org/2015/08/13/432122637/keynes-predicted-we...
However, most people want fruits and vegetables instead of getting rickets, goiter, and cholera from an 1800s diet. Many are even willing to work 80+ hours a week to do so.
You are in danger. Unless you estimate the odds of a breakthrough at <5%, or you already have enough money to retire, or you expect that AI will usher in enough prosperity that your job will be irrelevant, it is straight-up irresponsible to forgo making a contingency plan.
I'm baffled that so many people think that only developers are going to be hit and that we especially deserve it. If AI gets so good that you don't need people to understand code anymore, I don't know why you'd need a project manager anymore either, or a CFO, or a graphic designer, etc etc. Even the people that seem to think they're irreplaceable because they have some soft power probably aren't. Like, do VC funds really need humans making decisions in that context..?
Anyway, the practical reason why I'm not screaming in terror right now is because I think the hype machine is entirely off the rails and these things can't be trusted with real jobs. And honestly, I'm starting to wonder how much of tech and social media is just being spammed by bots and sock puppets at this point, because otherwise I don't understand why people are so excited about this hypothetical future. Yay, bots are going to do your job for you while a small handful of business owners profit. And I guess you can use moltbot to manage your not-particularly-busy life of unemployment. Well, until you stop being able to afford the frontier models anyway, which is probably going to dash your dream of vibe coding a startup. Maybe there's a handful of winners, until there's not, because nobody can afford to buy services on a wage of zero dollars. And anyone claiming that the abundance will go to everyone needs to get their head checked.
It's not the odds of the breakthrough, but the timeline. A factory worker could have correctly seen that one day automation would replace him, and yet worked his entire career in that role.
There have been a ton of predictions about software engineers, radiologists, and some other roles getting replaced within months. Those predictions have clearly not panned out.
At this point the greater risk to my career seems to be the economy tanking, as that seems to be happening and ongoing. Unfortunately, switching careers can't save you from that.
I do. Show me any evidence that it is imminent.
> or you expect that AI will usher in enough prosperity that your job will be irrelevant
Not in my lifetime.
> it is straight-up irresponsible to forgo making a contingency plan.
No, I'm actually measuring the risk, you're acting as if the sky is falling. What's your contingency plan? Buy a subscription to the revolution?
What contingencies can you really make?
Start training a physical trade, maybe.
If this is the end of SWE jobs, you'd better ride the wave. Odds are your estimate of when AI takes over is off by half a career anyway.
It seems kind of like saying “I’m smarter than all the AIs in this one particular way.” If someone posted that, you would probably jump in to say they’re fooling themselves.
More likely they get fired for no reason, never rehired, and the people left get burned out trying to hold it all together.
If you fail as a "higher up", you're no longer higher up, and someone else can take your place. To the extent this does not naturally happen, that is evidence of petty or major corruption within the system.
The large, overwhelming majority of my team's time is spent on combing through these tickets and making sense of them. Once we know what the ticket is even trying to say, we're usually out with the solution in a few days at most, so implementation isn't the bottleneck, nowhere near.
This scenario has been the same everywhere I've ever worked, at large, old institutions as well as fresh startups.
The day I'll start worrying is when the AI is capable of following the web of people involved to work out what a vaguely phrased ticket that's been backlogged for God knows how long actually means.
However as you point out we have no program-accessible source of data on who stakeholders, contributors, managers, etc. are and have to write a lot of that ourselves. For a smaller business perhaps one could write all of that down in an accessible way to improve this but for a large dynamic business it seems very difficult.
I've been doing stuff with recent models (gemini 3, claude 4.5/6, even smaller, open models like GLM5 and Qwen3-coder-next) that was just unthinkable a few months back. Compiler stuff, including implementing optimizations, generating code to target a new, custom processor, etc. I can ask for a significant new optimization feature in our compiler before going to lunch and come back to find it implemented and tested. This is a compiler that targets a custom processor so there is also verilog code involved. We're having the AI make improvements on both the hardware and software sides - this is deep-in-the-weeds complex stuff and AI is starting to handle it with ease. There are getting to be fewer and fewer things in the ticket tracker that AI can't implement.
A few months ago I would've completely agreed with you, but the game is changing very rapidly now.
I don't agree they have solved this problem, at all, or really in any way that's actually usable.
It's hard to predict how quickly it will be solved and by whom first, but this appears to be a software engineering problem solvable through effort and resources and time, not a fundamental physical law that must be circumvented like a physical sciences problem. Betting it won't be solved enough to have an impact on the work of today relatively quickly is betting against substantial resources and investment.
Plenty of things get substantial resources and investment and go nowhere.
Of course I could be totally wrong and it's solved in the next couple years, it's almost impossible to make these predictions either way. But I get the feeling people are underestimating what it takes to be truly intelligent, especially when efficiency is important.
Wonder what that means for meatspace.
Edit: Would also disagree this isn’t a physics problem. Pretty sure power required scales according to problem complexity. At a certain level of problem complexity we’re pretty much required to put enough carbon in the atmosphere to cook everyone to a crisp.
Edit 2: illustrative example, an Epic in Jira: “Design fusion reactor”
This is no different from onboarding a new member of the team, and I think OpenAI was working on that "frontier".
>We started by looking at how enterprises already scale people. They create onboarding processes. They teach institutional knowledge and internal language. They allow learning through experience and improve performance through feedback. They grant access to the right systems and set boundaries. AI coworkers need the same things.
And tribal knowledge will not be a moat once execs realize that all they need to do is prioritize documentation instead of "code velocity" as a metric (sure, any metric gets Goodharted, but LLMs are great at sifting through garbage to find the high-perplexity tokens).
>But context limitation is fundamental to the technology in its current form
This may not be the case: large enough context windows plus external scratchpads would mostly obviate the need for true in-context learning. The main issue today is that "agent harnesses" suck. The fact that Claude Code is considered good is more an indication of how bad everything else is. Tool traces read like a drunken newb brute-forcing his way through tasks. LLMs can mostly "one-shot" individual functions, but orchestrating everything is the blocker. (Yes, there's progress on METR or whatever, but I don't trust any of that, else we'd actually see the results in real-world open-source projects.)
LLMs don't really know how to interact with subagents. They're generally sort of myopic even with tool calls. They'll spend 20 minutes trying to fix build issues going down a rabbit hole without stepping back to think. I think some sort of self-play might end up solving all of these things, they need to develop a "theory of mind" in the same way that humans do, to understand how to delegate and interact with the subagents they spawn. (Today a failure case is agents often don't realize subagents don't share the same context.)
Some of this is certainly in the base model and pretraining, but it needs to be brought out in the same way RL was needed for tool use.
I look at my ticket tracker and I see basically 100% of it that could be done by AI. Some of it with assistance, because the business logic is more complex and less well factored than it should be, but most of the work is something AI is perfectly capable of doing with a well-defined prompt.
Live stream validation results as they come in
The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:
- What is the validation system and how does it work today?
- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?
- What prior art exists on the backend and frontend, and how much of that can/should be reused?
- Are there any scaling or load considerations that need to be accounted for?
I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.
Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.
That's a sign that you have spurious problems under those tickets or you have a PM problem.
Also, a job is not a task. If your company has jobs that consist of a single task, then those jobs would definitely be gone.
I think it's more nuanced than that. I'd say that:
- 0% of them can't be implemented by AI
- but a lot of them can be implemented much faster thanks to AI
- and a lot of them end up implemented slower when using AI (because the author has to fix hallucinations and revert changes that caused bugs)
As we learn to use these tools, even in their current state, they will increase productivity by some factor and reduce needs for programmers.
I have seen numerous 25-50% productivity boosts over my career. Not a single one of them reduced the overall need for programmers.
I can’t even think of one that reduced the absolute number of programmers in a specific field.
It's a coding agent that takes a ticket from your tracker, does the work asynchronously, and replies with a pull request. It does progressively understand the codebase. There's a pre-warming step so it's already useful on the first ticket, but it gets better with each one it completes.
The agent itself is done and working well. Right now I'm building out the infrastructure to offer it as a SaaS.
If anyone wants to try it, hit me up. Email is in my profile. Website isn't live yet, but I'm putting together a waitlist.
Um, you do realize that "the memory" is just a text file (or a bunch of interlinked text files) written in plain English. You can write these things out yourself. This is how you use AI effectively, by playing to its strengths and not expecting it to have a crystal ball.
Take even the most unskilled labor that people can think about such as flipping a burger at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one that are constantly changing. Multiple companies have experimented with machines and robots to perform this task all with very limited success and none with any proper economics.
Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced it needs to be performed by a robot that can be acquired for a reasonable capital expenditure such as $200,000 and requires no maintenance, upkeep, or subscription fees.
This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them significant amounts upfront and in ongoing maintenance, even if it's the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
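The arithmetic behind this can be sanity-checked with a quick amortization sketch. The wage, capex, lifetime, and maintenance figures below are the thread's hypotheticals (the lifetime and upkeep numbers are my assumptions), not real industry data:

```python
def annual_robot_cost(capex, lifetime_years, annual_maintenance):
    """Straight-line amortization of the purchase price plus yearly upkeep."""
    return capex / lifetime_years + annual_maintenance

worker_cost = 50_000  # assumed fully loaded annual cost of the worker
robot_cost = annual_robot_cost(capex=200_000, lifetime_years=7,
                               annual_maintenance=30_000)
print(round(robot_cost))  # 58571: pricier than the worker, before any downtime
```

Even granting a generous seven-year lifetime, plausible maintenance alone keeps the robot above the wage line, which is exactly the point about paying service technicians to keep equipment barely working.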
It will happen to you.
This is correct. This also is a lot more complex than it sounds and creates a lot of work. Cooking those products creates byproducts that must be handled.
> and the cashiers have largely been replaced by self-order terminals so that employees no longer even need to speak rudimentary English
Yet most customers still have to interact with an employee because "the kiosk won't let me". Want to add Mac sauce? Got the wrong order in the bag? Machine took payment but is out of receipt paper? Add up all these "edge cases" and a significant share of these "contactless" transactions involve plenty of contact!
> It will happen to you.
Any labor that can be automated should be. Humans are not supposed to spend their time doing meaningless tasks without a purpose beyond making an imaginary number go up or down.
Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:
It takes jobs faster than new ones are created, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.
Forget becoming a manager at McDonald's, or even being good at flipping burgers, at the age of 40: you're competing with 20-year-olds who do sports and have amazing coordination.
I have no idea what in the world you are talking about. Most 20 year olds working at McDonald’s are stoned and move at half a mile an hour whether it’s a lunch rush or it’s 2am. I worked retail for years before I finally switched full time to programming. It’s certainly not full of amazing motivated athletes with excellent coordination. You’re lucky if most of them can show up to work on time more than half the time.
As a white-collar computer guy, I can waste some time on Reddit. Or go for a walk and grab coffee. Or let people know that I'm heading out for a couple of hours to go to the doctor. There are a LOT of little freedoms that you take for granted if you haven't worked a shitty minimum-wage job: getting in trouble for punching in one minute late, not being allowed to sit down, socializing too much when you're not on a break.
I’m pretty sure that most tech employees would just quit when encountering a manager like that.
Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.
Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.
So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.
And that's probably something most people are okay with. Work that can be automated should be and humans should be spending their time on novel things instead of labor if possible.
People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.
I agree, but people are conflating the two. We have seen a lot of advancements in robotics, but as of current that only makes the economics worse. We're not seeing the complexity of robots going down and we're seeing the R&D costs going up, etc.
If it didn't make sense a few years ago to buy a crappy robot that can barely do the task because your business will never make money doing it, it probably doesn't make sense this year to buy a robot that still can't accomplish the tasks and is more expensive.
Being the hype-man that he is I assume he meant humanoid robots - I think he's being silly here, and the sentence made me roll my eyes.
It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that South Korea doesn't, and they have far higher trust in their society.
> While a highly automated McDonald’s in South Korea (or the experimental "small format" store in Texas) might look empty, the total headcount remains surprisingly similar to a standard restaurant
Eventually that will change and the role of a customer service agent will be redefined.
In actual reality, McDonald's has already automated to a vast degree. People were talking about burger-flipping robots as a trope 30+ years ago. Their future has come, just not in the way imagined.
If the McDonald's franchises near me are anything to go by, we went from a busy lunch rush needing a staff of 20 or so individuals to properly handle, to around half a dozen. That's at least a halving of peak staffing needs, nearly entirely due to various forms of automation and supply-chain optimization. The latter is just another name for automation further upstream, abstracted from the point of sale.
> This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them significant amounts upfront and in ongoing maintenance, even if it's the most basic equipment such as a grill or a fryer.
Perhaps grills are the hardest bit to automate, so they may always be staffed by humans. Still, I'd argue some places have done a fairly good job "automating" this aspect too, if you squint a little: stuff like double-sided grills where the top comes down and cooks a burger from both sides at once, doubling line throughput. Call this mechanization if you want, but it's in the same bucket to me.
But look at soft drink machines. They are now fully automated with some locations able to go from 3-4 people staffing two machines during a busy lunch rush, down to a single person who simply puts caps on stuff coming off the tiny conveyor belt. Mistakes are also cut down to close to zero, including stuff like "less ice" or "more ice" customizations.
The locations I'm aware of now operate fryers on a rotation so the "wait for fresh fries" experience is a thing of the past. This probably wasn't a major capital investment - just an improvement in the automation of data collection, modeling, and demand prediction. Still an automation though, as it replaces some manager making those decisions.
Ordering kiosks are the obvious one everyone knows about, so not worth discussing. They are universal in large cities these days, and I'm starting to see them more and more even in small towns during road trips. App-based ordering is also not something anyone predicted 20 years ago. Locations went from 6-8 cashiers on duty down to 1 or 2.
It already happened. Fast food is getting more out of less workers, just as predicted. It just happened incrementally over decades. Sure, a typical fast food franchise will never be operated in a "lights out" style manner with a roving team of highly paid technicians simply responding to alerts. But the labor force has been reduced and optimized for efficiency, and will continue to be chipped away little by little as technology gets better.
If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.
> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.
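The quoted point about comparative advantage is easy to make concrete with a toy Ricardian calculation. All productivity rates below are invented for illustration, not drawn from the quote:

```python
# AI out-produces the human at BOTH tasks (absolute advantage everywhere),
# but its hours are finite, so specialization can still pay.
ai_rate = {"code": 10, "docs": 8}     # units of output per hour
human_rate = {"code": 2, "docs": 6}   # human is least bad at docs
ai_hours = human_hours = 10

# AI working alone, splitting its hours across both tasks:
ai_alone = 5 * ai_rate["code"] + 5 * ai_rate["docs"]  # 90 units

# AI specializes in code while the human covers docs:
combined = ai_hours * ai_rate["code"] + human_hours * human_rate["docs"]

print(ai_alone, combined)  # 90 160: adding the "inferior" human raises output
```

The AI is strictly better at both tasks, yet aggregate output rises when the human takes over the task where their relative disadvantage is smallest, which is the comparative-advantage argument in miniature.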
There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.
Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.
Or would you just do more stuff?
I feel like most software projects have an endless backlog.
Better IDEs, programming languages, packages, frameworks, etc have increased our productivity, reduced bugs -- but rarely reduced headcount.
Ever heard of anyone migrating from php+jQuery to react+node and reducing head count due to increased productivity?
I sometimes reminisce about the LAMP stack being super productive. But at the time I didn't write tests :)
The remaining "surplus" 20% roles retained will then be devoted to developing features and implementing fixes using AI where those features and fixes would previously not have been high enough priority to implement or fix.
When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.
The 40% of developer roles that remain will, in theory, be more efficient also because they won't be spending as much time babysitting the "lesser" developers in the 60% of the roles that were eliminated. As well, "N" in the Mythical Man Month is reduced leading to increased efficiency.
(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)
"Work" isn't a finite thing. It's not like all the people in your office today had to complete 100% of their tasks, and all of them did.
"Work" is not a static thing. At least not in positions of many knowledge-worker careers.
The idea of a single day's unit of "work" being 100% is really sophomoric.
Also, if 100% of a labor force now has 80% more time, wouldn't it behoove the company to employ the existing workforce in more revenue-generating activities? Or find a way to retain as much of the institutional knowledge as possible?
Doom, fear-mongering and hopelessness is not a sustainable approach.
Amazon fulfillment centers are a good example of automation shrinking the role of humans. We haven't seen total headcounts go down because Amazon itself has been growing. While the human role shrinks, the total business grows and you tread water. But at some point, Amazon will not be able to grow fast enough to counterbalance the shrinking human role in the FC and total headcount will decrease until one day it disappears entirely.
Lest we forget, software engineers aren't exactly ordinary people: they make quite a bit above the median wage.
AI taking our jobs is scary because it will turn us into "ordinary people". And ordinary people are not ok. They're barely surviving.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
E.g. once I was tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be interpreted as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
LLMs don't create anything new; they simply replace human computer I/O with tokens. That's it, leaving the humans who are replaced to fight for a limited number of jobs. LLMs are not creating new jobs, they only create "AI automates {insert business process}" SaaS that is itself heavily automated. I suppose there are more datacenter jobs (for now), and maybe some new ML researcher positions, but I don't really see job growth. Are we supposed to just all go work at a datacenter or in the semiconductor industry (until they automate that too)?
You are thinking too linearly. When the price of goods and services goes down because the cost to produce those goods and services decreases, that means things are cheaper. Now that things are cheaper, we have more money to spend on other goods or services.
Who knows what industries will be created because of this alleged release of human labor.
When the refrigerator was invented we didn’t just replace an industry of shipping ice, we created new industries that relied on refrigeration. That’s creative destruction. That’s economic growth.
This is not to mention that I find the scope and scale of AI displacement to be highly dubious and built on hype.
Do you walk around with a blindfold on? Are you extremely privileged? Sounds like it. Tell this to the 25% of new college grads that have been unemployed for 12 months, or working as a barista with 100k in debt. Eventually they'll be knocking on your penthouse/mansion door.
- there's no thought given to what happens in the interim. Forget the welfare of those displaced, consider what acts the desperation will lead them to.
- these replacement roles may very well never exist or will pay much, much lower than they do now.
- this disruption happens entirely in services, LLMs are not improving agricultural yield, most industries steeped in physical reality will mostly cut overhead for generating text.
- the gains from automation do not necessarily have to diffuse over us all, the capital can simply accumulate in the hands of the firms.
You cannot keep pointing to the past when you are suggesting an entirely new never before seen moment is upon us.
What DOES go up with automation is demand. Fewer farmers today than 100 years ago, but significantly more mouths to feed.
What also increases is new kinds of jobs; entirely new fields. The automobile shrank the number of buggy whip makers, but taxi drivers increased. Then the internet increased Uber drivers on top of taxi drivers.
Get ready for French Revolution v2, but global; the ruling class only exists because the working class tolerates them. This just won't work.
The jobs of the future may be that you're a court jester for Larry Ellison, or that you do something else that's fundamentally pointless but happens to be something that a person with money wants. Companion, entertainment, errands. Now, that may sound dystopian, but on some level, so are most white collar jobs today. Microsoft employs 200k people. How many of these are directly involved in shipping money-making products - five percent? Ten? The rest is there essentially for the self-sustaining bureaucracy itself. And there's no reason for that bureaucracy to exist except the whims of people with money and power - delegation, empire-building, pet projects, etc.
And I know datacenters and semiconductor manufacturing don't employ a lot of people; that's my point, the advent of LLMs replaces more jobs than it creates.
Datacenters are very automated. They already don't require many people, and they're going to need fewer and fewer humans going forward.
Semiconductor manufacturing is also very heavily automated.
Software engineers work on Jira tickets, created by product managers and several layers of middle managers.
But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.
When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.
A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.
Latest models got really good at working on the entire puzzle - big picture and pieces.
This makes human hierarchy obsolete and a bottleneck.
The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.
Of course, it's not just about the software, but about the streams of information around it: customer support, bug tickets, testing, changing customer requirements... but all of these can be handled by AI even today. And it will only get better.
This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.
I’m a pretty big generalist professionally. I’ve done software engineering in a broad category of fields (Game engines, SaaS, OSS, distributed systems, highly polished UX and consumer products), while also having the experience of growing and managing Product and Design teams. I’ve worn a lot of hats over the years.
In my most recent role I'm working on a net-new product for the company and have basically been given full agency over it: technical direction, budget, team, process, marketing, branding, and positioning.
Give someone experienced like me capital, AI, and freedom, and you absolutely can build high-quality software at a pretty blinding pace.
I'm starting to get the feeling that many folks' struggles with adopting or embracing AI for their jobs have more to do with the job/company than with AI.
Given the rest of your argument that makes no sense. Why should that one operator exist? If AI is good at big picture and the entire puzzle, I don’t see why that operator shouldn’t be automated away by the AI [company] itself?
I’m more worried that even if these tools do a bad job people will be too addicted to the convenience to give them up.
Example: recruiters locked into an AI arms race with applicants. The application summaries might be biased and contain hallucinations. The resumes are often copied wholesale from some chat bot or other. Nobody wins, the market continues to get worse, but nobody can stop either.
I don't know if you can tell what's "better" with these tools.
At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.
After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.
Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command-and-control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4
That's just an American thing, I've never owned a car and most people of my age I know haven't either.
Choose to die
One of the things that drove the tech boom in the 2010s was cloud computing driving the cost of starting an internet company into the ground.
What happens when there’s software you think should exist, and you no longer need to hire a bunch of people at $150k-$250k per year to build it?
What happens when 200 out-of-work former software engineers take a look at your software and use LLMs to quickly build their own version each undercutting everyone else's prices in a race to the bottom?
AI might not replace current work, but it's already replacing future hypothetical work. Whether it can actually do the job is beside the point in the short term. The way business models work is that if there's an option to reduce your biggest cost (labour), you'd very much give it a go first. We might see a resurgence of labour if it all turns out to be hype, but for the short to medium term there'll be a lot of disruption.
Think we’re already seeing that in employment data in the US, as new hiring and job creation slows. A lot of that will for sure be the current economic environment but I suspect (more so in tech focused industries) that will also be due to tech capex in place of headcount growth
I am worried about when they start wanting to make a profit on AI. I'm assuming we either have to pay the actual price for these things (I have no idea what that looks like, but I'm pretty sure it isn't $20 or $200 per month), or we have to put up with the full force advertising. Or most likely, we have to do both.
It'll be another one of those "I remember when..." stories we get to tell our kids. Like "I remember when emails were useful and exciting" or "I remember when I could order a taxi and it was clean, reliable and even came with a bottle of water..." or "I remember when I could have conversations with strangers on the internet that didn't instantly descend into arguments and hate".
This is exactly what chess experts like Kasparov thought in the late 90s: “a grandmaster plus a computer will always beat just a computer”. This became false in less than a decade.
Therefore, the best way to increase profit is to lower cost.
It also argues that models have existed for years and we're yet to see significant job loss. That's true, but AI is only now crossing the threshold of being both capable and reliable enough to automate common tasks.
It's better to prepare for the disruption than the sink or swim approach we're taking now in hopes that things will sort themselves out.
for me the 2 main factors are:
1. whether your company's priority is growing or saving
- growing companies especially in steep competition fight for talent and ai productivity results in more hiring to outcompete
- saving companies are happy to cut jobs to save on margin due to their monopoly or pressure from investors
2. how 'sequence of tasks-like' your job is
- SOTA models can easily automate long running sequences of tasks with minimal oversight
- the more your job resembles this, the more in danger you are (customer service diffusion is just starting, but i predict this will be one of the first to be heavily disrupted)
- i'm less worried about jobs where your job is a 'role' that comes with accountability and requires you to think big picture on what tasks to do in the first place
There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.
1 You are not affected somehow (you got savings, connections, not living paycheck to paycheck, and have food on the table).
2 You prefer to pursue no troubles in matters of complexity.
Time will tell; in fact, it's showing already.
I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.
I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.
Those who downplay it are either business owners themselves or have been employed for 2+ years.
I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.
Does everyone really think that the world's governments would allow any level of job loss that would create panic before shutting this whole thing down within their areas of control?
It’s probably the western culture bias - people in UK or US have not seen or experienced big enough government intervention. US citizens are probably feeling a bit of the change now.
Secondly, David Oks attended the Masters School for high school, an elite private boarding school with tuition currently running 72kUSD/year if you board the whole time, and 49kUSD/year if you go there just for schooling (https://en.wikipedia.org/wiki/Masters_School). I am going to generally say that people who had 150k+ spent on their high school education (to say nothing of attending Oxford at 30kGBP/year in international student tuition) might just possibly have enough generational family wealth that concerns like job losses seem pretty abstract, or not something to really worry about.
It's just another in a long series of articles downplaying the risks of AI job losses, which, when I dig into the authors' backgrounds, are written by people who have never known any sort of financial precarity in their lives and are frequently involved in AI investment in some manner.
That doesn't exactly bolster the author's position. Sure, there are already companies 30 years behind the curve.
But in an increasingly competitive and fast moving economy, "the human is slowing it down by orders of magnitude" doesn't exactly sound like a vote in favor of the human.
The self-setup here is too obvious.
This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can offer that machines improving in ability and efficiency month over month and year over year either cannot do well themselves or cannot do without us.
It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague's. And sensible people are concerned it could get much worse.
I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.
Here's an article:
https://history.wustl.edu/news/how-black-death-made-life-bet...
Not sure if there's an analogy to make somewhere though
Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.
On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).
There are humans who can't do any mental work that AI can't also do. Those humans are no longer useful for mental work, and that's what can cause real AI job loss. The bar for being useful for mental work is rising rapidly.
Jobs that are easy disappear and are replaced with jobs that are no longer as easy, either requiring more mental skills (that many people don't have) or being soul-crushing manual jobs that are also constantly getting harder.
So yes, YOU are not worried, because you are privileged here.
AI will buy us some time before economic collapse; though on the bright side, the environment can recover a bit, since human growth was the worst stressor.
That's a weird way of saying 80 million times.
And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403
It appears to have really caught the zeitgeist.
I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slop-ish, and, as I'll add, breathless, because there are so many practical challenges standing between what is being said there and where we are now.
Capability is not evenly distributed and it's getting people into loopy ideas of just how close we are to certain milestones, not that it's wrong to think about those potential milestones but I'm wary of timelines.
https://arxiv.org/abs/2510.15061
I thought normies would have caught onto the EM dash, overuse of semicolons, overuse of fancy quotes, lack of exclamation marks, "It's not X, it's Y", etc. Clearly I was wrong.
Did the 80 million people believe what they were reading?
Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?
That is quite an optimistic view, and I do not share it. The US shitshow with the Epstein files shows what those with power are actually capable of. The Star Trek utopia is not the world we are collectively building right now. I would expect instead that, with robotics and AI combined, there will be a lot more technical jobs maintaining and building automated systems that serve rich people but not common folks. But you still need knowledge and skill to do that, which means you still need to learn and teach those skills. Which means you still need education and people working in it. You still need people who support the education sector and the technical and maintenance sector for AI and robotics. All of them need to eat and have their basic needs fulfilled. You need agriculture and services and housing and entertainment and dozens of other sectors for that too.

So in essence the author is right, but even with AI-capable robots I would not expect utopia; rather, some kind of world between Blade Runner and Alien: you won't be scrolling mindlessly while all your needs are met, but instead trying to save money for the things you dream of while working a stupid mindless job you do not like. Which is basically what most of us are doing right now.
So yes, nothing will change for most of us, but humanity will find a way somehow to make the world suck in so many ways: by exploiting each other, by stealing from each other, by lying, and generally by making the world a living hell for everyone. Because we do not know any better.
AI won't change that. So, as the old saying goes: a lot has to change for everything to stay the same.
... for the 3rd year in a row. Feels like the new 'year of the Linux desktop'
Ordinary people are ALREADY not doing okay.
Maybe I am wrong, but the history of business on the web says I am right. If you go back and look at why those businesses think they are successful, and if that analysis is correct, then I am.
i'm not sure why it would be more amazing in 2016 than in 2023 where it... wasn't very amazing lol
they don't care about the majority losing jobs, or even starving to death so long as they ensure a great future for themselves and the people they, supposedly, care about.
Dear software programmers: 90% of your jobs are going away soon. Most of you are on the first step. Those of you who progress through these steps the fastest will be most prepared for what is about to come.
I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.
You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them. Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason and the dev spent a week trying to resolve it. Another was tasked with doing a simple database maintenance script, and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer just thought they would need another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.
I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.
As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly committing in the middle of business logic is a recipe for disaster. The real issue, of course, was not having any consistent session-management pattern. But a non-developer isn't going to recognize that that's an issue in the first place.
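To make the session-management point concrete, here's a minimal sketch (hypothetical `users` table and helper names; stdlib `sqlite3` standing in for an ORM session) of the pattern experienced developers reach for: one transaction boundary owned by the caller, with business logic kept commit-free.

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def unit_of_work(conn):
    """One transaction per request: commit or roll back at the boundary."""
    try:
        yield conn
        conn.commit()  # the only commit in the codebase
    except Exception:
        conn.rollback()
        raise

def update_user_email(conn, user_id, email):
    # Business logic: mutate state, but never commit here.
    conn.execute("UPDATE users SET email = ? WHERE id = ?", (email, user_id))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'old@example.com')")
conn.commit()

with unit_of_work(conn) as tx:
    update_user_email(tx, 1, "new@example.com")

print(conn.execute("SELECT email FROM users WHERE id = 1").fetchone()[0])
# prints: new@example.com
```

A stray `session.commit()` inside `update_user_email` would break exactly this property: a failure later in the same request could no longer roll everything back.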
Or a more silly example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
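The failure mode of that keyless update is easy to demonstrate. A minimal sketch (hypothetical `pets` table, `sqlite3` for brevity): when rows are matched by their current field values instead of a primary key, duplicate rows mean you silently update more than you intended.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pets (id INTEGER PRIMARY KEY, name TEXT, kind TEXT)")
conn.executemany("INSERT INTO pets (name, kind) VALUES (?, ?)",
                 [("Rex", "dog"), ("Rex", "dog")])  # duplicates are perfectly legal
conn.commit()

def rename_by_match(old_name, kind, new_name):
    # Keyless "update" in the style Claude produced: match rows by
    # their non-edited field values rather than by primary key.
    conn.execute("UPDATE pets SET name = ? WHERE name = ? AND kind = ?",
                 (new_name, old_name, kind))

rename_by_match("Rex", "dog", "Fido")
print(conn.execute("SELECT COUNT(*) FROM pets WHERE name = 'Fido'").fetchone()[0])
# prints: 2, though the caller only meant to rename one row
```

The keyed version (`UPDATE pets SET name = ? WHERE id = ?`) has no such ambiguity, which is why the missing key in the gRPC API mattered.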
The type of AI fears are coming from things like this in the original article:
> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.
Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.
There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.
That being said, the real danger today is not coming from the AI itself; it's C-suites believing AI can just zero-shot any problem you throw at it.
I’m definitely worried about job loss as a result of the AI bubble bursting, though.
Once techbros take it too far where an actual significant amount of people face job loss and thus face hardships in housing and feeding themselves, society as a whole is going to wish it nipped AI in the bud when it still could. Knowing techbros though, their moment of introspection, if it ever comes, will come far too late.
To me, actively trying to cause mass job loss in a country with essentially zero social security (that is, actively trying to get as many people into the "nothing to lose" state as possible) sounds genuinely suicidal.
The real world is much more resilient and stubborn. The industrial revolution indeed wiped out a lot of jobs. But it created a lot more new ones. Agriculture and food production are no longer >90% of the economy. The utopian version of that (we all get free food) never happened. The dystopian version (we'll all starve) didn't happen either. And the Luddite version (we'll all go back to artisanal farming) didn't happen either. What happened is that well-fed laborers went to work doing completely different stuff. Subsistence farming now only exists in undeveloped countries and regions, e.g. in rural Africa.
The simple reality is that we have 8 billion people, probably growing towards 10 billion. These people are going to spend their income on stuff. Whatever that is, is what the economy is and what we collectively value. If AI puts us all out of work, people aren't going to sit on their hands and go back to subsistence farming. They'll fill the time with whatever it is that they can create income with, so they can spend it on things that are valuable to them.
This notion of value is what is key. Because if AI lowers the cost of something, it simply becomes cheaper. We need a lot of valuable and scarce resources to power AI. That isn't cheap. So, there's an equilibrium of stuff that is valuable enough to automate with it that people still want to pay for by committing their valuable resources to it. Which as they become scarcer become more valuable and more interesting from an economic point of view. The economy adapts towards activity that facilitates value creation. We're opportunists. It all boils down to what we can do for each other that is valuable and interesting to us. Whatever that is, is where there will be a lot of growth.
I'm in software, I'm not worried about less work. I'm worried about handling the barrage of stuff I don't have time to do that I now need to start worrying about doing. There's no way I'm going to do any of that without AI. It's already generating more work than I can handle. This isn't frivolous stuff that I don't need, it's stuff that's valuable to my company because we can sell it to other companies who need that stuff.
At no point have worker rights and conditions advanced without being demanded, sometimes violently. The history of maritime safety is written in blood. The robber baron era was peppered with deadly clashes such as the Homestead Strike. As a reminder, we had a private paramilitary force for the wealthy called the Pinkerton Detective Agency (despite the name, they were hired thugs) that at its peak outnumbered the US Army.
Heck, you can go back to the Black Death when there was a labor shortage to work farms and the English Crown tried to pass laws to cap wages to avoid "gouging" by peasants for their labor.
Automation could be very good for society. It could take away menial jobs so we all benefit. But this won't happen naturally because that's essentially a wealth transfer to the poor and the wealthy just won't stand for that.
No, what's going to happen is that AI specifically, and automation in general, will be used to suppress labor wages and further transfer wealth to the already wealthy. We don't need to replace everyone for this to happen. Displacing just 5% of the workforce has a massive effect on wages. The remaining 95% aren't asking for raises, and they're doing more work for the same wages as they pick up whatever the 5% was doing.
We see this exact pattern in the permanent layoff culture in tech right now. At the top you have a handful of AI researchers who command $100M+ pay packages. The vast majority are either happy to still have a job or have been laid off, possibly multiple times, and spend a ton of time going through endless interview rounds for jobs that may not even exist.
This two-tiered society is very much in our near future (IMHO).
In the Depression you had wandering hoboes who were constantly moving, seeking temporary low-paid work and a meal. This situation was so bad we got real socialist change with the New Deal.
2008 killed the entry-level job market and it has yet to recover. That's why you see so many millennials with Masters degrees and a ton of student debt working as baristas. Covid popped the tech labor bubble, something tech companies had been wanting for a long time. Did you not notice that they all started doing layoffs at about the exact same time? Even when they're massively profitable?
So the author isn't worried about job loss? Delusional. We're teetering on the edge of complete societal collapse.
What happens when you have a surplus of able bodied young people who are angry and without purpose? What's the easiest way to divert all that anger and give them purpose at the same time?
People in developing nations worked around this by immigrating.
AI already serves as a surveillance tool and is being used by Palantir.