What are you building? Does the tool help or hurt?
People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where the mess you and your team are actually building does not outgrow your understanding of it, if budgets allow.
This seldom happens, even in solo hobby projects once you cost everything in.
It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over that's deleting your production database while you slept, then asking it to illustrate the postmortem blog post you think elevates your status in the community but probably doesn't.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
This x1000. The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that the vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
[1] https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software “could” be an engineering discipline but we have a long way to go
- Edsger Dijkstra, 1988
I think, unfortunately, he may have had us all dead to rights on this one.
It is though. Picking the right approaches and tools makes more difference than anything else. Sure, you don't need the right tools if you can make the right choices - but it's much easier to pick a better methodology than to hire smarter people.
At some point I became so burnt out I couldn't look at an IDE or coloured text for that matter.
I found the way back by just changing my motto and focus... Find good people, do good work. That's it, that's all I want.
I don't care whether the 'property is hot' or what the market is doing anymore, I just build software in my lane, with good people around.
Personally I think that whole Karpathy thing is the slowest thing in the world. I mean you can spin the wheels on a dragster all you like and it is really loud and you can smell the fumes but at some point you realize you're not going anywhere.
My own frustration with the general slowness of computing (iOS 26, file pickers, build systems, build systems, build systems, ...) has been peaking lately and frankly the lack of responsiveness is driving me up the wall. If I wasn't busy at work and loaded with a few years worth of side projects I'd be tearing the whole GUI stack down to the bottom and rebuilding it all to respect hard real time requirements.
> People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
I'm assuming you're saying these tools hurt more than help?
In that case I disagree so much that I'm struggling to reply. It's like trying to convince someone that the Earth is not flat, to my mental model.
PHP, Ruby and VB have more successful code written in them than all current academic or disproportionately hyped languages will ever have combined.
And there's STILL software being written in them. I did Visual Basic consulting for a greenfield project last week despite my current expertise being more with Go, Python, C# and C. And there's RoR work lined up next. So the presence gap between these helpful tools and other minor but over-indexed tools is still increasing.
It's easy to think that the languages one sees more often on HN are the prevalent ones, but they are just the tip of the iceberg.
RoR is no longer at its peak, but it still has its marginal, stable share of the web, while PHP gets the lion's share[1]
Ok, Lotus Notes really is a relic from another era now. But it's not a PL, so not the same kind of beast.
Well, LLMs are also a different beast compared to PLs. They really are the things that most evoke the expression "taming the beast" when you need to deal with them. So this is as far away from engineering as one can probably get while still using a computer to build any automation. Maybe, to stay in scientific realms, ethology would be a better starting point than a background in informatics/CS for handling these things.
I'm watching a team which is producing insane amounts of code for their team size, but the level of thought that has gone into all of the details that would make their product a fit predator to run at scale and solve the underlying business problem has been neglected.
Moving really fast in the wrong direction is no help to anyone.
1. Applied physics - Software is immediately disqualified. Symbols have no physics.
2. Ethics - Lives and livelihoods depend on you getting it right. Software people want to be disqualified because that stuff is so boring, but this is becoming a more serious issue with every passing day.
That's increasingly not possible. This is the first time for me in 20 years where I've had a programming tool rammed down my throat.
There's a crisis of software developer autonomy and it's actually hurting software productivity. We're making worse software, slower, because the C levels have bought the fairy tale that you can replace 5 development resources with 1 development resource + some tokens.
Other places were "hack it until we don't know of any major bugs, then ship it before someone finds one". And now they're "hey, AI agents - we can use that as a hack-o-matic!" But they were having trouble with sustainability before, and they're going to still, except much faster.
The ones who got acquired - never really had to stand up to any due diligence scrutiny on the technical side. Other sides of the businesses did for sure, but not that side.
Many of you here work for "real" tech companies with the budget and proper skin in the game to actually have real engineers and sane practices. But many of you do not, and I am sure many have seen what I have seen and can attest to this. If someone like the person I mentioned above asks you to join them to help fix their problems, make sure the compensation is tremendous. Slop clean-up is a real profession, but beware.
It's a craft.
Just another reason we should cut software jobs and replace them with A(G)I.
If the human "engineers" were never doing anything precisely, why would the robot engineers need to?
I was interested in making a semi-autonomous skill improvement program for opencode, and I wired up systemd to watch my skills directory; when a new skill appeared, it'd run a prompt to improve it and make it cohere with a skill specification.
It was told to make a lock file before making a skill, then remove the lock file. Multiple times it'd ignore that, make the skill, then lock and unlock on the same line. I also wanted to lock the skill from future improvements, but that context overrode the skill locking, so instead I used the concept of marking the skills as readonly.
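The lock-file part of that protocol is trivial to enforce deterministically in ordinary code rather than hoping the agent obeys it. A minimal sketch, where the `.lock` filename and the `improve` callback are hypothetical:

```python
import os

def with_skill_lock(skill_dir, improve):
    # Run `improve` on a skill directory only if we can take the lock.
    # os.O_CREAT | os.O_EXCL makes lock creation atomic, so a watcher
    # and an agent (or two agents) can't both grab the same skill.
    lock_path = os.path.join(skill_dir, ".lock")
    try:
        fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False  # lock already held; skip this skill
    try:
        os.close(fd)
        improve(skill_dir)
        return True
    finally:
        os.remove(lock_path)
```

The point being that locking is exactly the kind of invariant you want to live outside the model's context, where it can't be "overridden".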
So in reality, agents only exist because of context poisoning and overlap; they're not some magical balm for improving the speed of work or multiplying effort, they simply prevent context poisoning from what are essentially subprocesses.
Once you realize that, you really have to scale back the reality because not only are they just dumb, they're not integrating any real information about what they're doing.
Software engineering is not real engineering because we do not rigorously engineer software the way "real" engineers engineer real things. <--- YOU ARE HERE
Software engineering is real engineering because we "rigorously" engineer software the way "real" engineers engineer real things.
Edit: quotes imply sarcasm.
Now I barely look at ticket requirements, feed it to an LLM, have it do the work, spend an hour reviewing it, then ship it 3 days later. Plenty of fuck off time, which is time well spent when I know nothing will change anyway. If I'm gonna lose my career to LLMs I may as well enjoy burning shareholder capital. I've optimized my life completely to maximize fuck off time.
At the end of the day they created the environment. It would be criminal to not take advantage of their stupidity.
Aren't you conveniently ignoring the fact that there were people who saw through that and didn't go down those routes?
It isn't. Show me the licensing requirements to be a "software engineer." There are none. A 12 year old can call himself a software engineer and there are probably some who have managed to get remote work on major projects.
I do like the tips on how to work with agents for delegation. Let it do boring things. The deterministic things where you know what the result should look like each time.
Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.
Sure, it will perhaps be possible to interchange the underlying AI for the development of the codebase, but will it be significantly cheaper? Of course, the invisible hand of the market will solve that problem. Something that OPEC has successfully done for the oil market.
Another issue here: once the codebase is agentic and the price of human developers falls sufficiently that it becomes significantly cheaper to hire humans again, will they be able to understand the agentic codebase? Is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
We will miss SaaS dearly. I think history is repeating just with DVD and streaming - we simply bought the same movie twice.
AI more and more feels the same. Half a year ago Claude Opus was Anthropics most expensive model - boy, using Claude Opus 4.6 in the 500k version is like paying 1 dollar per minute now. My once decent budgets get hit not after weeks but days (!) now.
And I am not using agents, subagents which would only multiply the costs - for what?
So what we arrive more and more is the same as always: low, medium, luxury tier. A boring service with different quality and payment structures.
Proof: you cannot compensate with prompt engineering anymore. A month ago you could fix any model discrepancies by being more clever and elaborate with your prompts, etc.
Not anymore. There is a hidden factor now that accounts for exactly that. It seems that the reliance on skills and different tiers simply moves us away from prompt engineering which is considered more and more jailbreaking than guidance.
Prompt engineering lately became so mundane, I wonder what vendors were really doing by analyzing the usage data. It seems like vendors tied certain inquiries to certain outcomes modeled by multistep prompting, which was reduced internally to certain trigger sentences to create the illusion of having prompted your result when in fact you hadn't.
All you did was ask for the same result thousands of users did before, and the LLM took a statistical approach to delivering it.
Sometimes the argument lands, very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."
And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has placed their bets already and is about to throw the dice.
The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.
I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.
Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.
Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.
AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.
> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices
Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now depreciated and available for cheap because the new technology is vastly faster. The next generation will be faster still.
If you're mentally comparing this to things like oil, you're not on the right track
No worries there, the huge improvements we see today from GPT and Claude, are at their heart just Reinforcement Learning (CoT, chain of thought and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.
In the economy the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open source invisible hand makes everything completely free.
https://www.goodreads.com/quotes/141645-heard-joke-once-man-...
His blog post on pi is here: https://mariozechner.at/posts/2025-11-30-pi-coding-agent/
One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.
Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
Not that I want to go back to DOS but Wordperfect 5.1 was pretty damn rock solid as I recall.
It's not the glut of compute resources; we've already accepted bloat in modern software. The new crutch is treating every device as "always online", paired with the mantra of "ship now! push fixes later." It's easier to set up a big complex CI pipeline you push fixes into that OTA-patches the user's system. This way you can justify pushing broken, unfinished products to beat your competitors doing the same.
The sad truth is that now, because of the ease of pushing your fix to everything while requiring little more from the user than that their machine be more or less permanently connected to a network, even an OS is dealt with as casually as an application or game.
Maybe some people have already reached that point after so much AI coding and are now warning us; they pushed so hard that they understand the limits. But this is the kind of thing you need to experience on your own.
You need to experiment, learn, test the limits, think for yourself, take as many steps back as you need.
As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot of they're spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.
DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.
And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
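The error-budget arithmetic behind that "adult in the room" is simple enough to sketch; the 99.9% SLO and 30-day window below are illustrative numbers, not anyone's real policy:

```python
def error_budget_remaining(slo, window_minutes, downtime_minutes):
    # Fraction of the error budget left in the window.
    # A 99.9% SLO over 30 days allows (1 - 0.999) * 43200 = 43.2
    # minutes of downtime; burn through it and feature work stops
    # until reliability is paid back.
    budget = (1.0 - slo) * window_minutes
    return (budget - downtime_minutes) / budget

# 21.6 minutes of downtime into a 43.2-minute budget: half remains
remaining = error_budget_remaining(0.999, 30 * 24 * 60, 21.6)
can_ship = remaining > 0
```

A negative result is the SRE's signal to pull the metaphorical cord.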
One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.
This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.
In the past with smaller services those services did break all the time, but the outage was limited to a much smaller area. Also systems were typically less integrated with each other so one service being down rarely took out everything.
Many years ago, I started working for chip companies. It was like a breath of fresh air. Successful chip companies know the costs (both direct money and opportunity) of a failed tapeout, so the metaphorical equivalent of this cord was there.
Find a bug the morning of tapeout? It will be carefully considered and triaged, and maybe delay tapeout. And, as you point out, the cultural aspect is incredibly important, which means that the messenger won't be shot.
The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
A service goes down. He tells the agent to debug it and fix it. The agent pulls some logs from $CLOUDPROVIDER, inspects the logs, produces a fix and then automatically updates a shared document with the postmortem.
This got me thinking that it's very hard to internalize both issue and solution -updating your model of the system involved- because there is not enough friction for you to spend time dealing with the problem (coming up with hypotheses, modifying the code, writing the doc). I thought about my very human limitation of having to write things down in paper so that I can better recall them.
Then I recalled something I read years ago: "Cars have brakes so they can go fast."
Even assuming it is now feasible to produce thousands of lines of quality code, there is a limitation on how much a human can absorb and internalize about the changes introduced to a system. This is why we will need brakes -- so we can go faster.
And at that point, if the autonomous system breaks, realized it’s broken, and fixes itself before you even notice… then do you need to care whether you learn from it? I suppose this could obfuscate some shared root cause that gets worse and worse, but if your system is robust and fault-tolerant _and_ self-heals, then what is there to complain about? Probably plenty, but now you can complain about one higher level of abstraction.
But the rough edges are temporary. Coding agents are becoming superhuman along certain dimensions; the progress is staggering. As Andrej Karpathy put it, anything measurable or legible can be optimized by AI. The gaps will close fast.
The harder question is HCI. How do you expose this kind of intelligence in interfaces that actually align with human values? That's the design problem worth obsessing over.
Product design has a slightly different problem than engineering, because the speed of development is so high we cannot dogfood and play with new product decisions, features. By the time I’ve realized we made a stupid design choice and it doesn’t really work in real world, we already built 4 features on top of it. Everyone makes bad product decisions but it was easy and natural to back out of them.
It’s all about how we utilize these things, if we focus on sheer speed it just doesn’t work. You need own architecture and product decisions. You need to use and test your products with humans (and automate those as regression testing). You need to able to hold all of the product or architecture in your mind and help agents to make the right decisions with all the best practice you’ve learned.
Did I miss something? I haven't used it in a minute, but why is the author claiming that it's "uninstallable malware"?
Minimalist alternative with no hooks or dependencies for the curious: https://github.com/wedow/ticket
This cuts to the problem and is excellent framing. A rogue employee can achieve the same, but probably less quickly, and we've designed systems to help catch them early.
But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!
It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.
Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?
The answer is that it's very easy for bad code to cause more problems than it solves. This:
> Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way.
is not a hypothetical, but a common failure mode which routinely happens today to teams who don't think carefully enough about what they're merging. I know a team of a half-dozen people who's been working for years to dig themselves out of that hole; because of bad code they shipped in the past, changes that should have taken a couple hours without agentic support take days or weeks even with agentic support.
It's insane to me that someone can arrive at any other conclusion. LLMs very obviously put out bad code, and you have no idea where it is in their output. So you have to review it all.
For an early startup validating their idea, that prod can take it.
For a platform as a service used by millions, nope.
Fwiw OP isn't an agent skeptic, he wrote one of the most popular agent frameworks.
If there are any common apps which are unhinged please do share your experiences. LinkedIn was never great quality but it's off the charts. Also catching some on Spotify.
I think this is very good take on AI adoption: https://mitchellh.com/writing/my-ai-adoption-journey. I've had tremendous success with roughly following the ideas there.
> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.
That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.
Oh and the best part, if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross with my development history on my repositories and paint a very good picture of what I've achieved. I can even rebuild the decision process with designing the solution.
It's always a win to run things through an agent.
This is a great point.
I have been avoiding LLMs for a while now, but realized that I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command line. I'm realizing you really need to architect with very precise language to avoid mistakes.
I didn't try to have a prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.
[0]: https://www.scottrlarson.com/publications/publication-my-fir...
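That section-by-section discipline is easy to script around any agent CLI; a sketch of the driving loop, where `run_agent` is a hypothetical wrapper around whatever CLI you use:

```python
import re

def split_sections(markdown):
    # Split a markdown document before each top-level heading so
    # every agent run sees one section, not the whole book.
    parts = re.split(r"(?m)^(?=# )", markdown)
    return [p for p in parts if p.strip()]

def convert_by_section(markdown, run_agent):
    # Small, independent prompts localize the agent's mistakes to
    # one section instead of compounding them across the document.
    return "\n".join(run_agent(section) for section in split_sections(markdown))
```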
I use agents all day, every single day. But I also push back, understand what was written, and ensure I read and understand everything I ship.
Does it slow me down? Uh, yup. You bet.
Yes, this article literally advocates for slowing the fuck down, but it also makes the coding agents out to be the problem, but they're not.
Sensible engineers who look at AI as another (potentially powerful) tool in the toolbox "aren't forward looking enough". I watched this happen in real time at my previous company, where every discussion about quality was interpreted as slowing down progress, and the only thing that was looked on favorably was the idea of replacing developers with machines - because they are "cheaper and faster".
The logical minds here on HN are less prone to believing in magic and AI fairies, but they are often not the ones setting the rules. And the number of companies being run by people with critical thinking skills is getting smaller by the day.
Yes, humans are accountable for the ultimate output. But so are the people who design and build these automation tools. As the saying goes, the purpose of a system is what it does.
I'm making specific usage patterns out to be the problem, and explaining why those patterns can't work due to the way agents work.
I use Aider on my private computers and Copilot at work. Both feel equally powerful when configured with a decent frontier model. Are they really generations apart? What am I missing?
I think a lot of this is just Typescript developers. I bet if you removed them from the equation most of the problems he's writing about go away. Typescript developers didn't even understand what React was doing without an agent; now they are just one-shot prompting features, web apps, CLIs, desktop apps and spitting it out to the world.
The prime example of this is literally Anthropic. They are pumping out features, apps, CLIs, and EVERY single one of them releases broken.
https://gist.github.com/ontouchstart/d43591213e0d3087369298f...
(Note: pi was written by the author of the post.)
Now it is time to read them carefully without AI.
We are all rabbits.
Reminds me of Carson Gross' very thoughtful post on AI also: https://htmx.org/essays/yes-and/
[Y]ou are going to fall into The Sorcerer’s Apprentice Trap, creating systems you don’t understand and can’t control.
I don't agree, but the bigger issue to me is that many/most companies don't even know what they want or think about what the purpose is. So whereas in the past devs coding something gave some throttle or sanity checks, now we'd just throw shit over the wall even faster.
I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is that good or visionary where that speed even matters.
It’s time to slow the fuck down!
Companies will face the maintenance and availability consequences of these tools but it may take a while for the feedback loop to close
It’s very hard to say right now what happens at the other side of this change.
All these new growing pains are happening in many companies simultaneously and they are happening at elevated speed. While that change is taking place it can be quite disorienting, and if you want to take a forward-looking view it can be quite unclear how you should behave.
My gut says something simple is missing that makes all of the difference.
One thought I had was that our problem lives between all the things taking something in and spitting something out. Perhaps 90% of the work of writing a "function" should be to formally register it as taking in data type foo 1.54.32 and bar 4.5.2, then returning baz 42.0. The register will then tell you all the things you can make from baz 42.0 and the other data you have. A comment(?) above the function has a checksum that prevents anyone from changing it.
But perhaps the solution is something entirely different. Maybe we just need a good set of opcodes and have abstractions represent small groups of instructions that can be combined into larger groups until you have decent higher languages. With the only difference being that one can read what the abstraction actually does. The compiler can figure lots of things out but it won't do architecture.
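A toy version of that register makes the idea concrete; the type names and versions are the ones from the comment above, everything else is invented:

```python
class Register:
    # Records each function as (inputs -> output) so that, given
    # the typed data you already have, it can list what you can make.

    def __init__(self):
        self.functions = []  # (name, frozenset of input types, output type)

    def register(self, name, inputs, output):
        self.functions.append((name, frozenset(inputs), output))

    def producible(self, have):
        have = set(have)
        return sorted(out for _, ins, out in self.functions if ins <= have)

r = Register()
r.register("make_baz", ["foo 1.54.32", "bar 4.5.2"], "baz 42.0")
r.register("make_qux", ["baz 42.0"], "qux 1.0")
# with foo and bar on hand, baz 42.0 is the one thing reachable in one step
```

The checksum-over-the-function idea is essentially content-addressed code, which build caches and the Unison language already exploit.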
I think that's the part where it remains difficult. Someone has to convey clearly what the semantics and side effects of the function are. Consumers have to read and understand it. Failing that, you get breakage.
We have too much code - languages to program machines.
We need a new different language now.
A plan.md, written in what... legalese English? Really? Am I back in 1897? People committing that to vcs, sheesh...
That may be the case where AI leaks into, but not every software developer uses or depends on AI. So not all software has become more brittle.
Personally I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.
Oh they even swore in the title.
Oh and of course it's anti-economics and is probably going to hurt whoever actually follows it.
Three for three. It's not logical it's emotional.
Integration is the key to the agents. Individual usages don't help AI much because it is confined within the domain of that individual.
I'm one of those people and I'm not going to slow down. I want to move on from bullshit jobs.
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
Pull the bandaid off quickly, it hurts less.