Just a basic sniff test, though: if AI boosts developer productivity, that would translate to more revenue, reduced costs, reduced risk, etc. The bottom-line numbers would get better. So with more resources available, your next move is to decrease spending on further productivity enhancements or revenue opportunities? They don't want more revenue? Doesn't add up.
The better headline would be: "Amazon CEO Andy Jassy, faced with a poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"
Companies don't exist to benefit their employees (or their customers).
If that was true then the companies should never have been doing layoffs, as all these companies are generating tens of billions of dollars in revenue.
> The better headline would be: "Amazon CEO Andy Jassy, faced with a poor financial outlook, tries to convince the public that downsizing is due to improvements in AI"
This is assuming that companies have the capacity to keep increasing revenue by adding more workforce, which is just not true. At some point you hit diminishing returns with more workers. The same goes for Agent workers. To chase more revenue you need a lot more than just more SWEs and a lot of that is not currently similarly scalable.
You can legitimately argue "far less to do with", but it's definitely not nothing. There are countless projects underway where AI will allow for 10% reductions with zero business impact in the short term, and 25-40% reductions (sometimes more) by 2030.
The only logical explanation is that they don't have enough opportunities to utilize those people OR as I previously mentioned... their financials might look bad, and they are trying to make them look better so they don't take a hit in the markets.
Are there any where it empirically _has_ done, or are we still in jam tomorrow mode? Like, there is a very big industry devoted to selling this stuff; I'd be _extremely_ cautious about promises and projections.
For example, I now have a ton of graphs and interactive UI pages that interact with my code. They made everyone's lives easier, but at least in my case not having them was not a dealbreaker, and frankly nobody was willing to pay for them.
So it really doesn't matter what's realistic. They want cheaper workers who live in fear.
Amazon is also way behind tech peers on AI. These sorts of puff PR pieces don’t do much to shake that reality.
Which tech peers is Amazon way behind in AI? Neither MSFT nor AAPL have their own models. FB has no path to model monetization. GOOG is unique, but that's it, and Amazon might be able to better capitalize on AWS enterprise customers. Amazon was way behind, yes, but at this point they are positioned well enough to execute.
Google you’ve already covered, and Apple despite its faults has been designing and producing AI-targeted hardware for a decade and has a much clearer story for integrating AI into its lineup.
AWS has a scattered mess of Q-branded services and a consistent track record of shipping garbage enterprise apps like WorkMail, Chime, WorkDocs, Cognito, and arguably QuickSight. Bedrock APIs frequently lag the features offered by their parent vendors, and Bedrock as a whole isn't better than the thousands of LLM management platforms that have already sprung up.
I’ll never fully bet against Amazon as the far and away cloud market leader, but their existing AI position is flimsy and their increasingly hostile position towards their workforce reeks of desperation.
https://www.reuters.com/business/retail-consumer/amazon-cons...
A fraction of that is the Anthropic investment.
This take is getting old, and the story won't stick with folks till their desk is in a cardboard box...
Out of all the faangs, amz is the best positioned to remove staff and agentify the work they were doing. First, amz constantly churns the lower x%. They've been doing this for years now. They know what to count and who to fire. Second, amz has had everyone write a story about everything they do, day in, day out for years now. Change a lightbulb? Not without a story. Guess what you need for training LLMs? Yup, stories.
There are plenty of people writing stories and coordinating the writing of other stories. Those people will be the first out. It's never the top nor the bottom.
There are only two reasons I can think not to. First, if AI can fully replace a human in a role. But it seems like we're a long way away from that. Second, if the added productivity leaves you with nothing to do. But we're in tech. There's always something new to do. If you're not doing new things as a company, you're getting replaced by those who are.
So it seems like a losing strategy to make workforce cost reduction your primary concern when we could be seeing the greatest workforce productivity gain in modern times.
If AI is so useful that it can fully replace engineers or other humans, why aren’t products next level amazing?
If the barrier to entry for these high margin tech companies becomes so low that they no longer even need employees, isn’t the next step to compete on quality?
AI won’t fundamentally alter either of these facts.
More companies with smaller workforces would be better than fewer companies with larger workforces.
Now, instead of Employee A and B working together to solve Problem X, Company A's product and Company B's product must be used together to solve Problem X. At least the employees know each other and are in the same "white box". But software products are a black box, so the end result is almost certainly worse.
People are out there building useful stuff with AI, but they don't work at Amazon.
This article/comment isn't really the prompt, just a reminder that it seems like a sh*tty place to put my funds, and I'll soon be using AI to replace it anyway!
But the comments saying Claude can't replace some genius are irrelevant. The number of SWEs at big tech is so high that the law of averages dictates most people are not rockstars (and this is validated in my observations). Most SWEs just write internal-RPC-to-internal-RPC wrappers. I am seeing everyone rely a lot on these tools, and the new SWEs seem to utterly depend on them. HN users will always point out some edge case, but most software is low-scale CRUD apps (even at big tech, most internal tools are low scale), and these tools are definitely doing better than the median SWE I have encountered.
I mean, this is about the fourth "this will massively reduce the need for programmers" thing in the last 20 years. And it increasingly feels like the previous ones: lots of hype, lots of marketing, very little empirical evidence that it's doing much of anything.
For CRUD stuff _in particular_, people have been promising CRUD without icky programmers any day now for longer than most users of this website have been alive.
> Think of agents as software systems that use AI to perform tasks on behalf of users or other systems. Agents let you tell them what you want (often in natural language), and do things like scour the web (and various data sources) and summarize results, engage in deep research, write code, find anomalies, highlight interesting insights, translate language and code into other variants, and automate a lot of tasks that consume our time. There will be billions of these agents, across every company and in every imaginable field. There will also be agents that routinely do things for you outside of work, from shopping to travel to daily chores and tasks. Many of these agents have yet to be built, but make no mistake, they’re coming, and coming fast.
This is the same wishful thinking that AI companies are heavily marketing.
Nobody will want to use an "agent" that makes mistakes 60% of the time. Until the industry figures out a way to fix the problems that have plagued this technology since the beginning―which won't be solved by more compute, better data, or engineering hacks―this agentic future they've been promising is a pipe dream.
Do you think there's still confusion around it like there was a year ago?
Anthropic use the tools-in-a-loop one quite consistently now, but OpenAI still sometimes say things like "AI agents are AI systems that can do work for you independently. You give them a task and they go off and do it." - https://simonwillison.net/2025/Jan/23/introducing-operator/
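For what it's worth, the "tools-in-a-loop" definition fits in a few lines of code. Below is a minimal sketch in Python: the `model` function is a stub standing in for a real LLM API call, and the `get_time` tool is purely illustrative — the point is only the loop structure (model proposes a tool call, the harness runs it, the result is fed back until the model produces an answer):

```python
# Minimal "tools-in-a-loop" agent sketch. `model` is a stub standing in
# for a real LLM API call; the tool names are illustrative assumptions.

def get_time(_arg):
    return "12:00"

TOOLS = {"get_time": get_time}

def model(history):
    # Stub: a real implementation would send `history` to an LLM and
    # get back either a tool request or a final answer.
    if not any(role == "tool" for role, _ in history):
        return {"tool": "get_time", "arg": ""}
    return {"answer": f"The time is {history[-1][1]}."}

def run_agent(task, max_steps=5):
    history = [("user", task)]
    for _ in range(max_steps):                    # the loop
        reply = model(history)
        if "answer" in reply:                     # model is done
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["arg"])  # run requested tool
        history.append(("tool", result))             # feed result back
    return "gave up"

print(run_agent("What time is it?"))  # -> The time is 12:00.
```

The OpenAI-style "goes off and does it independently" framing is the same loop, just with the human removed from the iterations.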
What he hopes for is to just reduce the number of people they employ. So the "more people doing other types of jobs" just makes the message more palatable.
Suppose all companies follow suit: who is going to buy their crap?
There's no way that paying for a bunch of employees that you don't need, just so you can have some customers, is going to make sense. Even if you're operating a company town, only a fraction of their income is going to be spent on your company's goods/services, so you'll never be able to recoup the wage that way.
It's all in Adam Smith and economic history.
It makes more sense that Amazon would continue to push AI where it's already being used successfully. Devs may benefit from finding solutions quicker with AI, but it's never made sense to me why that would affect productivity per head or change hiring/firing rates.
Put another way: there are never enough devs and they write a lot of shitty code. AI writes even shittier code, but in subtly different ways and can write it even faster helping the dev iterate to better code.
The result is basically no change anywhere except a modest increase in quality. This is equivalent to, but cheaper than going on an epic quest to find the good devs and overpay them. Why is this a bad thing for like 99% of people who write code? There's basically no impact on their pay or ease of finding a job.
More money is spent at most of these companies coordinating work than actually doing work.
If you are working for a company that employs at least 1000 full-time engineers, I think you should consider joining a team where every project involves AI in some way, if you aren't already on one. Whether it's owning AI tooling, developing client features that use AI directly, or even just prototyping AI concepts that never launch. The safest roles, like research and working directly on the models, are out of reach for most people due to competition and position scarcity, but that's OK. There are so many positions downstream from those. The key thing to look for is a position where your AI features can actually turn a profit, which might be rare, but is not as difficult to get as an upstream role. But it's still fine to be in a role that isn't profitable.
I think AI-adjacent roles will be either the first or the last full-time SWE jobs to go during the next tech downturn, which I don't think we are in yet. I am betting on the latter, because I think corporations will continue to reroute more and more funding towards AI all the way down. Even if the current AI cycle ends up as a failure, we are already in the sunk-cost stages of commitment. There is no turning back, short of a total collapse.
So glad I left that place.
I believe the business leaders are thinking seriously about this -- i.e. not necessarily just as an excuse to RIF; they probably believe in it. Whether it is going to be successful is irrelevant.
I'm eagerly waiting for someone to talk about AI integration experiments within FAANG. I'm surprised no one has talked about it yet -- maybe there is some kind of NDA, or the experiments are still in early stages. Once the experiments prove even marginally successful, I bet the leaders are going to start mass layoffs -- or maybe worse, if they are pressured by stock prices, do it and see what happens before anything conclusive emerges.
To any team who is integrating AI into your company's data or doc -- please STOP and don't do that. I'm not talking about USING AI, but INTEGRATING AI.
CEOs can warn about AI replacing jobs until they're blue in the face, but people won't listen.
And when mass job losses finally arrive, people (including the CEOs) will be shocked and overwhelmed.
In fact, that is probably the reason that people unfortunately have learned not to listen. There's even a fable about it.
Although I do think that AI fits the pattern of "real big thing".
In general, cultural diffusion progresses in three stages: from insiders to money people to the public.
For example, great artists are recognized first by fellow artists and critics, then by art auctions, then by the broader public.
AI seems to be following a similar trajectory. AGI is felt first by insiders (AI researchers), then by money people (politicians and business leaders - we are here) then by the public (I'm guessing soon).
Your economic system is a joke
If AI truly arrives under the current capitalist system, there is no endgame. Ouroboros.
Amazon is a leader in global trade. I'd really hate to see what the "we shouldn't have done this" outcome of adding AI to it looks like. Might be good, might be bad.
Discussed here: https://news.ycombinator.com/item?id=44289554
But would they ever admit such a failure in front of shareholders who are still under the spell of "AI agents", "AGI", and "ASI" bullshit?
I don't think so.
It's a pretty bland memo and a thinly veiled advertisement for AWS.