But for an individual cobbler, you basically got fired at one job and hired at another. This may come as a surprise to those who view work as simply an abstract concept that produces value units, but people actually have preferences about how they spend their time. If you're a cobbler, you might enjoy your little workshop, slicing off the edge of leather around the heel, hammering in the pegs, sitting at your workbench.
The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
You might not want to quit that job and get a different job running a shoe assembly line in a factory. Now, if the boss said "hey, since you're all going to be so much more productive working in the factory, we'll give you all 10x raises," then perhaps you might be more excited about putting down your hammer. But the boss isn't saying that. He's saying "all of the cobblers at the other companies are doing this too, so where are you gonna go?"
Of course, AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, the CEO has basically given them a giant benefits cut with zero compensation in return.
I don’t actually think it’ll be a productivity boost for the way I work. Code has never been the difficult part, but I’ll definitely have to show I’ve included AI in my workflow to be left alone.
Oh well…
> Of course, AI is a top-down mandate. For people who enjoy reading and writing code themselves and find spending their day corralling AI agents to be a less enjoyable job, the CEO has basically given them a giant benefits cut with zero compensation in return.
This should be a disclaimer every time someone at work forces you to use AI.
Interesting how when WFH became "the norm" during COVID, there were thousands of apologists arguing that employees suddenly received perks for nothing. Where are all of you now? Why aren't you arguing against employers doing a fucking rug pull?
I wouldn't analogize the adoption of AI tools to a transition from individual craftspeople to an assembly line, which is a top-down total reorganization of the company (akin to the transition of a factory from steam power to electricity, as a sibling commenter noted [0]). As it currently exists, AI adoption is a bottom-up decision at the individual level, not a total corporate reorganization. Continuing your analogy, it's more akin to letting craftspeople bring whatever tools they want to work, whether those be hand tools or power tools. If the power tools are any good, most will naturally opt for them because they make the job easier.
>The nature of the work and your enjoyment of it is a fundamental part of the compensation package of a job.
That's certainly a part of it, but I also think workers enjoy and strive to be productive. Why else would they naturally adopt things like compilers, IDEs, and frameworks? Many workers enjoyed the respective intellectual puzzles of hand-optimizing assembly, or memorizing esoteric key combinations in their tricked-out text editors, or implementing everything from scratch, yet nonetheless jumped at the opportunity to adopt modern tooling because it increased how much they could accomplish.
... is now the moment to form worker cooperatives? The companies don't really have privileged access to these tools, and unlike many other things that drive increased productivity, there's not a huge up-front capital investment for the adopter. Why shouldn't ICs capture the value of their increased output?
We are probably on a similar trajectory.
All the tools that improved productivity for software devs (Docker, K8s/ECS/autoscaling, telemetry providers) took a very long time for management to recognize as valuable, and in some places met a lot of resistance. At some places where I worked, asking for an IntelliJ license would make your manager look at you like you were asking "hey, can I bang your wife?"
If anything, the problem is that management wants to automate poorly. The employees are asked to "figure it out", and if they give feedback that it's probably not the best option, that feedback is rejected.
https://writingball.blogspot.com/2020/02/the-infamous-apple-...
1. All teams will henceforth expose their data and functionality through service interfaces.
2. Teams must communicate with each other through these interfaces.
3. There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team’s data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.
4. It doesn’t matter what technology they use. HTTP, Corba, Pubsub, custom protocols — doesn’t matter.
5. All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.
6. Anyone who doesn’t do this will be fired.
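For readers who haven't seen the mandate in practice, here's a minimal sketch of what rules 1–3 amount to. All names, ports, and payloads are invented for illustration: Team A keeps its data store private and exposes it only through a network service interface, and Team B consumes that interface instead of reading Team A's database directly.

```python
# Hypothetical illustration of the mandate (all names invented):
# Team A's data is reachable ONLY via its HTTP service interface.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# --- Team A: private data store, public service interface ---
_INVENTORY = {"sku-1": 12, "sku-2": 0}  # Team A's internal state

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /items/sku-1 -> {"sku": "sku-1", "count": 12}
        sku = self.path.rsplit("/", 1)[-1]
        body = json.dumps({"sku": sku, "count": _INVENTORY.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# --- Team B: talks to Team A only over the service interface ---
def get_count(sku: str) -> int:
    with urlopen(f"http://127.0.0.1:{port}/items/{sku}") as resp:
        return json.load(resp)["count"]

# Forbidden under rule 3: connecting to Team A's database directly
# (no shared memory, no back-door reads of _INVENTORY from Team B's code).
print(get_count("sku-1"))  # prints 12
```

The point of rule 5 is that this same interface, unchanged, could later be exposed to outside developers, which is roughly how AWS emerged.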
This is very different from saying "any developer who doesn't use an IDE and a debugger will be fired," which is analogous to what the AI mandates are prescribing.

[0] https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...
Any company issuing such an edict early on would have bankrupted themselves. And by the time it became practical, no such edict was needed.
In most companies, you can't just pick up random new tools (especially ones that send data to third parties). The telling part is giving internal safety to use these tools.
This is simply not true. As a counter example consider debuggers. They are a big productivity boost, but it requires the user to change their development practice and learn a new tool. This makes adoption very hard. AI has a similar issue of being a new tool with a learning curve.
I would have just thought that people using them would quickly outpace the people that weren't and the people falling behind would adapt or die.
Tech loves making something a top priority (and forgetting about it several years later); AI is the first one that is applicable to the masses.
... Well, maybe not user-first. But that was even less clear than AI-first.
[1] there was a remote universe where I could see myself working for Shopify, now that company is sitting somewhere between Wipro and Accenture in my ranking.
It might be that these companies don't care about actual performance or it might be that these companies are too cheap/poorly run to reward/incentivize actual performance gains but either way... the fault is on leadership.
Then concludes his email with:
> I have asked Shelly to free up time on my calendar next week so people can have conversations with me about our future.
I assume Shelly is an AI, and not human headcount the CEO is wasting on menial admin tasks??
A friend of mine is an engineer at a large pre-IPO startup, and their VP of AI just demanded that every single employee create an agent using Claude. There were 9,700 created in a month or so. Imagine the amount of tech debt, security holes, and business-logic mistakes this orgy of agents will cause and that will have to be fixed in the future.
edit: typo
People with roles nowhere near software/tech/data are being asked about their AI usage in their self-assessment/annual review process, etc.
It's deeply fascinating psychologically and I'm not sure where this ends.
I've never seen any tech theme pushed top-down so hard in 20+ years of working. The closest was the early-'00s offshoring boom before it peaked and was rationalized/rolled back to some degree. The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
> The common theme is that the C-suite thinks it will save money and their competitors have already figured it out, so they are FOMOing at the mouth about catching up on the savings.
I concur 100%. This is a monkey-see-monkey-do FOMO mania, and it's driven by the C-suite, not rank-and-file. I've never seen anything like it.
Other sticky "productivity movements" - or, if you're less generous like me, fads - at the level of the individual and the team, for example agile development methodologies or object oriented programming or test driven development, have generally been invented and promoted by the rank and file or by middle management. They may or may not have had some level of industry astroturfing to them (see: agile), but to me the crucial difference is that they were mostly pushed by a vanguard of practitioners who were at most one level removed from the coal face.
Now, this is not to say there aren't developers and non-developer workers out there using this stuff with great effectiveness and singing its praises. That _is_ happening. But they're not at the leading edge of it mandating company-wide adoption.
What we are seeing now is, to a first approximation, the result of herd behavior at the C-level. It should be incredibly concerning to all of us that such a small group of lemming-like people should have such an enormously outsized role in both allocating capital and running our lives.
This is a great line - evocative, funny, and a bit of wordplay.
I think you might be right about the behavior here; I haven't been able to otherwise understand the absolute forcing through of "use AI!!" by people and upon people with only a hazy notion of why and how. I suppose it's some version of nuclear deterrence or Pascal's wager -- if AI isn't a magic bullet then no big loss but if it is they can't afford not to be the first one to fire it.
Or install a landline (over 5G because that's how you do it nowadays) and call it a day. :-)
2. most ai adoption is personal. people use whichever tools work for their role (cc / codex / cursor / copilot (jk, nobody should be using copilot))
3. there is some subset of ai detractors that refuse to use the tools for whatever reason
the metrics pushed by 1) rarely account for 2) and don't really serve 3)
i work at one of the 'hot' ai companies and there is no mandate to use ai... everyone is trusted to use whichever tools they pick responsibly which is how it should be imo
If you can’t state what a thing is supposed to deliver (and how it will be measured) you don’t have a strategy, only a bunch of activity.
For some reason the last decade or so we have confused activity with productivity.
(and words/claims with company value - but that's another topic)
I've been using Claude (Sonnet/Opus/Haiku, though not Claude Code), and I have the option of using Codex via my Copilot account. Is there some advantage to using Codex/Claude directly rather than through Copilot?
I'm at the forefront of agentic tooling use, but also know that I'm working in uncharted territory. I have the skills to use it safely and securely, but not everyone does.
Demanding everyone, from drywaller to admin assistant, go out and buy a purple-colored drill, never use any other color of drill, and use their purple drill for at least fifty minutes a day (to be confirmed by measuring battery charge).
Each department head needs to incorporate into their annual business plan how they are going to use a drill as part of their job in accounting/administration/mailroom.
Throughout the year, they must coordinate the drill training mandated by the Head of Drilling and enforce attendance for the people in their department.
And then they must comply with and meet drilling utilization metrics in order to meet their annual goals.
Drilling cannot fail, it can only be failed.
I’d just add a cron job to burn some tokens.
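Tongue in cheek, but concretely it would be about this much work. A crontab entry along these lines (the `llm` CLI and the prompt are invented for illustration) would register daily API usage without anyone touching a keyboard:

```
# Hypothetical crontab line: ping an assumed `llm` CLI every weekday
# at 9am purely so the usage dashboard shows tokens being "used".
0 9 * * 1-5  llm "summarize nothing in 500 words" > /dev/null 2>&1
```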
Enforced use means one of two things:
1. The tool sucks, so few will use it unless forced.
2. Use of the tool is against your interests as a worker, so you must be coerced to fuck yourself over (unless you're a software engineer, in which case you may excitedly agree to fuck yourself over willingly, because you're not as smart as you think you are).
This is just another business fad, but because the execs want to seem to be cool and seem to be doing what their "peers" claim to be doing, well, then by gosh, all of the workers have to do the same fad.
I am aware of a large company that everyone in the US has heard of, planning on laying off 30% of their devs shortly because they expect a 30% improvement in "productivity" from the remaining dev team.
Exciting indeed. Imagine all the divorces that will fall out of this! Hopefully the kids will be ok, daddy just had an accident, he won't be coming home.
If you think any of what is happening with the amount of money and bullshit enveloping this LLM disaster makes sense, you should put the keyboard down for a while.
Another time I asked it to rename a struct field across the whole codebase. It missed 2 instances. A simple sed & grep command would've taken me 15 seconds to write and done the job correctly at ~$0.00 in compute, but I was curious to see if the AI could do it. Nope.
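For reference, the kind of one-liner the comment has in mind. The field names (`old_name` → `new_name`) are invented, the demo directory is a throwaway, and `sed -i` here assumes GNU sed (BSD/macOS sed wants `-i ''`):

```shell
# Demo setup: a scratch directory with two files that use the field.
dir=$(mktemp -d)
printf 'struct S { int old_name; };\n' > "$dir/a.c"
printf 'x = s.old_name;\n' > "$dir/b.c"

# The actual rename: list files mentioning the field, rewrite in place.
# \b keeps it from touching longer identifiers like old_name_backup.
grep -rl 'old_name' "$dir" | xargs sed -i 's/\bold_name\b/new_name/g'

# Verify nothing was missed: this should print 0.
grep -r 'old_name' "$dir" | wc -l
```

Unlike an LLM, this either matches every occurrence or tells you immediately, via the final grep, that it didn't.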
Trillions of dollars for this? Sigh... try again next week, I guess.
>The misconceptions about Klarna and AI adoption baffle me sometimes.
>Yes, we removed close to 1,500 micro SaaS services and some large. Not to save on licenses, but to give AI the cleanest possible context.
If you remove all your services...
[Company that's getting disrupted by AI: Fiverr, Duolingo]: rush to adopt internal AI to cut costs before they get undercut by competition
[Company that's orthogonal: Box, Ramp, HFT]: build internal tools to boost productivity, maintain 'ai-first' image to keep talent
[Company whose business model is AI]: time to go all in
ai-first via the actual technology being built: talent magnet despite the HN bubble appearing otherwise
Relevant article from two days ago https://www.latent.space/p/adversarial-reasoning
happy to be corrected but i'm not aware of any direct improvements llms bring to ultra-low-latency market making; time to first token is just too high (not counting coding agents)
from talking to some friends in the space, there are some meaningful improvements in tooling, especially in discretionary trading operating on longer time horizons, where agents can actually help with research and sentiment analysis
Now the stock is down from $800+ to $200+ and the whole messaging has changed. No comment on the HubSpot stock price.

But I strongly agree with this statement, from the last post I saw on LinkedIn:

> "...I don't see companies trusting their revenue engine to something vibe-coded over a weekend."
The stock dip is likely because true AI-native CRMs are being built and coming to market, but why couldn't HubSpot take that spot, given the CTO's interest in the space?
"X trackers and content blocked
Your Firefox settings blocked this content from tracking you across sites or being used for ads."
Screenshots don't track me so they would be ok.
Also notice how the stocks of almost all of the companies that have announced AI-first initiatives, except Meta, are at best flat, or down by more than 20% YTD.
What does that tell you?
And yes, people did resist IDEs ("I'm best with my Emacs" - no, you weren't), people resisted the "sufficiently smart compiler," and so on. What happened was that they were replaced by the sheer growth of the industry supplying new people who didn't have these constraints.
What are the trenches in businesses in 2030, purely ownership over physical assets and energy?
That may be all the publicly-posted ones, but I'm skeptical. They have 11.
There were a lot more internal memos.