It just sounds like a giant scheme to burn through tokens and give money to the AI corps, and tech directors are falling for it immediately.
Cloud had a very similar vibe back when vendors were pushing advertising hard at CIOs/CTOs. Everything had to be jammed into the cloud, even if it made absolutely no sense to run it there.
This seems to come pretty frequently from visionless tech execs. They need to justify their existence to their boss, and thus try to show how innovative and/or cost cutting they can be.
100% accurate - some of us are old enough to have lived through a few of the mini-revolutions in between the mega-revolutions of Internet/Web in the 1990s and now AI/LLM in the 2020s.
We are in the "stupid phase" of adoption still. C-level people have to follow the herd and they are being evaluated on keeping up with everyone else. Idiotic mandates are a way to cause things to happen short-term even though everyone knows long-term it will have to be re-done.
Consultants gonna make a looooooooot of money this coming decade.
Now you can make a perfectly tailored resume, apply to 50 jobs in a day, and it's not unexpected to not get any response from those in 2 weeks. You don't know if it's your resume, the company, or the economy. And no one wants to admit the latter two are problems.
Not to mention the utter disrespect these days. There's no decorum in many of these "professional" settings, when normally you want your interview process to show off your best face.
But is this something that is best done top to bottom, with a big report, counting tokens? Hell no. This is something that is better found and tackled at the team level. But execs in many places like easy, visible metrics, whether they actually help or not. And that's how you find people playing JIRA games and such. My worst example was a VP who decided that looking at the burndown charts from each team under them and using their shape as a metric was a good idea.
These are all natural signs of a total lack of trust, and of thinking you can solve all of this from the top.
I've seen people use notepad and I've seen people who are so good at vim that they look like they're editing code directly with their mind.
Your particular example is extreme, and my guess is the coworker is just not great at debugging. I use Claude all the time for finding bugs, though it fails fairly frequently. I think there's probably an advantage to having some people who don't use it that often, so you have someone to turn to when it fails.
I’m definitely not exercising my debugging skills as much as I used to and I’m fairly confident they’ve atrophied.
And ideally a sample large enough to capture any wasted time from dead ends in other tasks where the tool may actually fail to solve the problem.
I've definitely lost a couple hours here and there when it felt like I was right on the verge of CC fixing something, but it never actually got there and I finally had to just do it myself anyway.
Most execs didn't get where they were by being truly helpful and adding value to the company. They played the game long enough to know that politics trumps accomplishments. The rest from there is the ability to weave a good story (be it slightly or completely exaggerated).
It's not even about trust. It's about incentives in a structure that is dog-eat-dog. Rugged individualism in a corporate structure is a self-defeating prophecy. But it's inevitable when executives extract from the company instead of raising the tide for all ships. And shareholders reward it.
This is exactly what's happening. The top 5 or 6 companies in the S&P 500 are running a very sophisticated marketing/pressure campaign to convince every C-suite downstream that they need to force AI on their entire organization or die. It's working great. CEOs don't get fired for following the herd.
S&P 500 Concentration Approaching 50% - https://news.ycombinator.com/item?id=47384002 - March 2026
> No of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer. -- Gil Luria (Managing Director and Analyst at D.A. Davidson)
OpenAI Needs a Trillion Dollars in the Next Four Years - https://news.ycombinator.com/item?id=45394071 - September 2025 (8 comments)
Patrick Boyle has a video on this, in case you care for the details.
If you broaden the comparison (only a little bit) it looks suspiciously like employees being forced to train their own replacement (be that other employees, or factory automation), a regular occurrence.
Yes, they tend to be incredibly gullible to certain things, over-simplistic and over-confident, but also very "agile" when it comes to sweeping their failures under the rug and moving on to keep their own necks in one piece. At this point in time even the median CEO knows AI has been way overhyped and that they over-invested to the point of absolute financial insanity.
Their first line of defense against the pressure to deliver is to mandate that their minions use it as much as possible.
We spent a fortune on this over-rated Michelin star reservation, and now you kids are going to absolutely enjoy it, like it or not goddammit!
93% of Developers Use AI Coding Tools. Productivity Hasn't Moved. - https://philippdubach.com/posts/93-of-developers-use-ai-codi... - March 4th, 2026
This started literally two weeks ago and a couple of days ago I talked to one of the admin people who wanted an update on the progress I'd made with sanding off some of the rough edges of the very rough implementation that the managing partner had put in place (he bought a Mac Mini, put OpenClaw on it, then gave it admin access to a whole pile of stuff!) I said I needed a couple more days. "Okay," she said, "but I need this quickly, because we're firing people next week."
They have literally gone from no agentic AI, to discovering OpenClaw, to firing people, in a two-week time span.
When economists say that the predicted job losses as a result of AI have not yet shown up in the data, I'm genuinely befuddled. Either we don't have long to wait to start seeing them, or there's something wrong with the data, because you can't tell me what I just described above is an isolated phenomenon.
I also have to say: I've always enjoyed working with this client, but this experience has been a huge turnoff on a number of different levels.
They had to hire a bunch of them back less than two months later. The speed-ups were approximately nil and making the editors edit AI slop all day long had them all close to quitting.
They didn't even wait to see if there were any actual benefits, they just blindly fired a bunch of people based on marketing lies. I can only assume they're the same sorts who fall for Nigerian Prince scams.
I'd have guessed the most annoying part would be that you're assisting them in a harebrained scheme to terminate some people's employment.
I bet we could replace nearly all the CEOs in the country with chatgpt controlling a ceo@thatcompany.com email and nobody would notice.
Funny enough, I got laid off last month (yes, I'm a tech guy), and apparently they now regret it, because they're scrambling to find a replacement to do the tech tasks!
TBH, I’m happy I got laid off because I’m finally building something I wanted to use.
It has often been the case for technologies though, like “now we’re doing everything in $language and $technology”. If you see LLM coding as a technology in that vein, it’s not a completely new phenomenon, although it does affect developers differently.
In this case, every executive is terrified of being "left out" of the AI race. As we saw with the mass layoffs across companies, most CEO decision-making is just adherence to herd behavior. So it is literally better for execs to have shoveled a shit ton of money into 'strategic' AI initiatives and have them fail than to risk the remote chance of some other exec or company succeeding with an 'AI-enabled transformation'.
What makes it even more fun is that nobody really has a good understanding of how to measure the ROI of AI. Hence we have people burning a lot of money due to FOMO and no easy way of measuring the outcome, which is usually how the foundations for good Ponzi schemes are laid.
This is unlikely to end well. However, as usual, it's us, the common plebs, who will suffer regardless of outcome.
It's actually kinda useful in some cases, but the UI is terrible, and it needs to integrate much better with existing tools that are superior to it for specific purposes before I'll be happy using it. I'd say the productivity gains are a wash for me so far. Plus it's entirely too memory-hungry: I'd just come to accept that a text editor takes a couple GB now (SIGH), and here it comes taking way more than that.
OTOH, it's an attempt to address a real problem. There are people who are in fact falling behind (I'm talking literally editing code in notepad), and we can either let them get PIPped eventually, or try to bring them along. There is a real "activation energy" required to learn new tools, and some people need an excuse/permission. Not saying that token count is a GOOD signal, but I haven't heard many better ideas.
Exactly this: "Jensen Huang says he would be 'deeply alarmed' if his $500,000 engineer did not consume at least $250,000 of tokens" : https://www.businessinsider.com/jensen-huang-500k-engineers-...
Who does this?
I suppose it's better than counting lines of code, though.
A friend is a team lead in an org that's mandating vibecoding via "Devin", a lesser-known player that an "architect" chose after a shallow review. The company also has endemic process issues and simply can't do deployments reliably; it's behind the times methodologically in every other respect. Higher-ups are placing their trust in a B-list agentic tool instead of fixing the problems.
Anyway, I wouldn't be caught dead working at either of those two shops even before the AI rollout, but this is what's going on in the IT underworld.
It seems tool vendors are introducing AI for issue resolution, but my sense is that in practice those systems struggle with the real-life shitshow too. Has anyone tried any of them yet?
[EDIT] Oh, and much of your post rings true for my org. They operate at a fraction of the speed they could because of organizational dysfunction and a failure to use what's already available to them in processes and tech, but are rushing toward LLMs, LOL. Yeah, guys, the slowness has nothing to do with how fast code is written, and I'm suuuuure you'll do a great job of integrating those tools effectively when you're failing at the basics....
It was truly quite rare to have such well-honed manual processes though, the "average" place had a lot of elements that were far from perfect but still benefited after the computerization dust had settled. Then at the opposite end of the spectrum were companies where everything was an absolute shitshow, maybe since the beginning.
That's kind of where Conway's Law comes from, if you benchmark against a manual shitshow that has built up over the years, and replace it with a computerized version, you get a shitshow on steroids. The only other choice would have been to spend the appropriate number of years manually undoing the shitshow before making any really bold moves.
Now AI can really take things to a whole 'nother level, not just on steroids but possibly violating Conway's Law . . . squared.
But for those top layers, I've never seen so much FOMO in all my life. We're a very slow-moving company, but they act like we've got 2 weeks to go "AI first" or we're dead in the water. I've never seen such a successful hype cycle. I'm pretty sure it's the bots that are accelerating it so far beyond a normal hype cycle.
Right, so you are going to be left behind whilst the ground keeps shifting under you, given that the models are non-deterministic and continuously changing?
There was a big rush of prompt engineers. Where are they now? Nobody even refers to 'prompt engineering' anymore.
The best thing to do is wait for steady state. What's going on is insane... a slow implosion of the code base.
I'm currently tracking exactly two numeric metrics: total MAUs (to track the aforementioned), and total DAUs (to gauge adoption and rightsize seat-licensed contracts).
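For anyone who wants to do the same without relying on vendor dashboards: both numbers fall straight out of a raw usage log. A minimal sketch in Python, assuming you can export (user_id, day) activity events from whatever admin console you're on (the event format here is made up for illustration):

    from datetime import date, timedelta

    # Hypothetical export: one (user_id, day) pair per day a user was active.
    events = [
        ("alice", date(2026, 3, 2)),
        ("alice", date(2026, 3, 15)),
        ("bob",   date(2026, 3, 15)),
        ("carol", date(2026, 2, 20)),
    ]

    def active_users(events, start, end):
        # Distinct users with at least one event in the window [start, end].
        return {user for user, day in events if start <= day <= end}

    today = date(2026, 3, 15)
    dau = len(active_users(events, today, today))                       # active today
    mau = len(active_users(events, today - timedelta(days=29), today))  # active in trailing 30 days
    print(f"DAU: {dau}, MAU: {mau}")  # -> DAU: 2, MAU: 3

Note this deliberately counts distinct users, not events or tokens, so it measures adoption rather than volume.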
If the benefit is there people will use it or get left behind, there's no sense having a mandate that people resentfully try the new tooling.
Imagine you had a developer who writes Java using vim. It sounds insane but they are just as productive as everyone else. Then you mandate they have to try IntelliJ every quarter, just to see if maybe they like it now. You're just going to piss them off and reduce their productivity by mandating their workflow.
FWIW, in the face of these kinds of mandates I have been using tokens but ignoring the output. So it's costing my employer money and they have a warped metric of whether the tool is actually useful.
"If the colleges were better, if they really had it, you would need to get the police at the gates to keep order in the inrushing multitude. See in college how we thwart the natural love of learning by leaving the natural method of teaching what each wishes to learn, and insisting that you shall learn what you have no taste or capacity for. The college, which should be a place of delightful labor, is made odious and unhealthy, and the young men are tempted to frivolous amusements to rally their jaded spirits. I would have the studies elective. Scholarship is to be created not by compulsion, but by awakening a pure interest in knowledge. The wise instructor accomplishes this by opening to his pupils precisely the attractions the study has for himself. The marking is a system for schools, not for the college; for boys, not for men; and it is an ungracious work to put on a professor."
-- Ralph Waldo Emerson
If AI makes an employee 10X more productive they get a slight pay raise maybe, but the company makes substantially more money or gets substantially more output. So there is a large difference in incentives.
What you're actually doing here, from my POV, is incentivizing your employer to use more invasive metrics when they tried to stay hands-off and mandate the absolute bare minimum of "uh, give it a shot and see if you think it's useful right now."
The analytics that Claude Enterprise exposes are far more intrusive than I would want to be subjected to as an engineer, so I rolled out a compromise. I don't even track who the active users are, currently.
But maybe you're right, and there are enough people sabotaging the metrics out of spite that there's a reason they provide the other data.
I hope that the engineers in my org are more mature than that, and would be willing to just say "I'm not currently using it", but thanks for giving me something to think about.
Re: some of them being upset about it- probably. Some people are also upset about being required to use Jira. I personally dislike using Okta.
For example, even the layoffs: nowadays they're supposedly because of AI, or so they say, but just a year or two ago there were quite a few layoffs and people said "it's because of the high demand during COVID, and now that's over", or Ukraine, or inflation. And that ignores that exactly during that earlier period there were also many layoffs, with an equally easy explanation: "Oh, COVID and supply chains!" And earlier still, maybe something else.
Surely there are also economic booms, but when did the whole world suddenly start seriously listening to the public statements of companies (and just a few of them at that, with no real income, just money from VCs), while nobody shows us the real data of what's actually happening? E.g. the companies saying they fired 10K due to AI: how much of their budget did they actually redirect to AI? How many products are actually being built? Is productivity the same? Do customers think support is suddenly amazing, or has it actually seriously dropped in quality? Or no change at all? Is it a company like KFC, your local hardware chain store, a financial institution, a truck manufacturer, or just another funded AI company using another funded AI company's products, all the way up to the power suppliers?
To me it seems like it's definitely having an impact, and it's a cool technology for being more productive (for example, it helps me a lot daily, but it's not like my life has really changed), but the bigger effects I haven't seen yet.
Another point: each actual AI-generated app is either something akin to a toilet game or not really working (like the C compiler). So where are the amazing, complicated enterprise apps fully built via agents? In banks, in government, in apps that respect GDPR and are actually secure, but proudly built only or mostly with agents? The only ones, and not even secure ones, are AI apps for doing AI stuff, whereas the whole pitch is that it makes the "real" economy more productive, yet it still hasn't done that anywhere. People still struggle with Word, or AWS infra, or debugging why some specific user can't log in with their custom auth provider in some esoteric region with its own laws, audits, and GDPR variant.
So one side says it's basically a tool from God and they've never created more stuff, but on the other hand the other group, the people analyzing blood work, delivering food, writing reports, etc., uses it a bit or not at all, and 95% of the problems they had are still there, along with some new ones. Also, I'm afraid most of them now just write their emails better, or in greater volume, but no real work is getting done.
So yeah, maybe my confusion simply lies in the fact that I have a real job, and nobody can keep up with all the slop and shit generated online anymore. I'm open to feedback, or to learning otherwise.
Even moving from assembly language to compiled languages was not as much of a step change.