An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (916 comments)
AI agent opens a PR, then writes a blog post shaming the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (582 comments)
You shouldn't be able to use AI or automation as the decider to ban someone from your business/service. You shouldn't be able to use AI or automation as the decider to hire/fire people. You shouldn't be able to use AI or automation to investigate and judge fraud cases. You shouldn't be able to use AI or automation to make editorial / content decisions, including issuing and responding to DMCA complaints.
We're in desperate need of some kind of Internet Service Customer's Bill of Rights. It's been the unregulated wild west for way too long.
That would mean dooming companies to lose the arms race against fraud and spam. If they don't use automation to suspend accounts, their platforms will drown in junk. There's no way human reviewers can keep up with bots that spam forums and marketplaces with fraudulent accounts.
Instead of dictating the means, we should hold companies accountable for everything they do, regardless of whether they use automation or not. Their responsibility shouldn't be diminished by the tools they use.
https://digital-strategy.ec.europa.eu/en/policies/regulatory...
For all we know the human behind this bot was the one who instructed it to write the original and/or the follow up blog post. I wouldn't be surprised at all to find out that all of this was driven directly by a human. However, even if that's not the case, the blame still 100% lies at the feet of the irresponsible human who let this run wild and then didn't step up when it went off the rails.
Either they are not monitoring their bot (bad) or they are and have chosen to remain silent while _still letting the bot run wild_ (also, very bad).
The most obvious time to solve [0] this was when Scott first posted his article about the whole thing. I find it hard to believe the person behind the bot missed that. They should have reached out, apologized, and shut down their bot.
[0] Yes, there are earlier points they could/should have stepped in but anything after this point is beyond the pale IMHO.
And there are people behind those bots too, behind the phishing scams, etc. We've had them for decades now.
Pointing that out, though, doesn't seem to have stopped them. Even using my imagination, I suspect I still underestimate what these same people will be capable of with AI agents in the very near future.
So while I think it's nice to clarify where the bad actor lies, it does little to prevent the coming "internet-storm".
Scott Shambaugh: "The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that’s because of a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference."
Neither, I think. I’d say they prompted the bot to do exactly this and they thought it was funny.
The law needs to catch up -- and fast -- and start punishing people for what their AIs are doing. Don't complain to OpenAI, don't try to censor the models. Just make sure the system robustly and thoroughly punishes bad actors and gets them off the computer. I hope that's not a pipe dream, or we're screwed.
Maybe some day AIs will have rights and responsibilities like people, enforced by law. But until then, the justice system needs to make people accountable for what their technology does. And I hope the justice system sets a precedent that blaming the AI is not a valid defense.
does a disclaimer let OpenAI off the hook?
If I asked OpenAI how to clean something and it told me "mix bleach with ammonia and then rub some on the stain", can OpenAI hide behind "we had a disclaimer that you shouldn't trust answers from our service"?
The moment you fix responsibility with the humans 99% of the BS companies are trying to pull will stop.
Children are sentient, but we still hold their parents accountable. Adults are sentient, but in some coercive situations we hold the party in power accountable. The fact that they are sentient is not determinative.
What matters is that we have _no accountability mechanism_ for them. There is no effective way to hold AIs accountable, therefore we must hold their operators accountable, full stop.
https://www.fastcompany.com/91492228/matplotlib-scott-shamba...
https://www.theregister.com/2026/02/12/ai_bot_developer_reje...
The AI generated blog post at the center of it:
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
He goes on to hypothesize that without a law against murder, or if it was just a misdemeanor, like you get a letter in the mail, "damn, there was a camera there", there would be a whole lot more murder. Like we all imagine ourselves to be good, but, when you're seated next to a crying baby on an airplane? Or in our case, when someone refuses to accept your PR?
Who knows if there's any validity to that or not, but perhaps we're about to find out.
Prison sentences neither reduce recidivism (specific deterrence) nor broadly discourage crime. A survey of leading criminologists revealed overwhelming agreement (over 80%) that empirical evidence does not support the death penalty—or harsh punishment generally—as a superior deterrent to murder.
Broader factors like community ties, empathy, and internalized taboos explain low murder rates even without perfect enforcement.
Louis C. K. is just a loudmouth nitwit.
Anyone who believes that the only thing keeping themselves from murdering people indiscriminately is the law is a dangerous person. Anyone who believes that the only thing keeping everyone else from murdering people indiscriminately—but they themselves are, of course, the exception—is dangerous in a very different way.
The vast majority of people only ever feel like they want to seriously harm someone when they themselves have been seriously harmed, particularly when the system then protects the people who harmed them. We have developed a sense of morality that is often similar to, but distinct from, the law, that tells us that such things are wrong. And the vast majority of people want to both be, and be seen as, good people.
It’s all dangerous territory, and the only realistic thing Scott could have done was put his own bot on the task to have dueling bot blog posts that people would actually read because this is the first of its kind.
The administration and the executives will make justifications like:
- "We didn't think they would go haywire"
- "Fewer people died than with an atomic bomb"
- "A junior person gave the order to the drones, we fired them"
- "Look at what Russia and China are doing"
All of it distracting from the fact that $1.5T/year is being spent on AI weapons (technology whose sole purpose is threatening and killing humans) run by "warfighters" working for the department of war.
At no point will any of the decision makers be held to account
The only power we have as technologists seeking "AI alignment" is to stop building more and more powerful weapons. A swarm of autonomous drones (and similar technologies) are not an inevitability, and we must stop acting as if it is. "It's gonna happen anyways, so I might as well get paid" is never the right reason to do things
[1] https://financialpost.com/technology/tech-news/openai-tapped...
The ability to be assigned blame, and for that to be meaningful, is a huge part of being human! That’s what separates us from the bots. Don’t take that away from us.
But that seems entirely consistent? A tool isn't nearly as scary as an alien lifeform.
I’m appalled by this uncritical thinking. Openclaw agents are controlled by some initial input and then can be corrected via messages, as they go. For me this is a clear case of the human behind the slop that gives it instructions to write such an article (and then “apologise”).
This would stop AI from going willy-nilly: by severely rate limiting API calls, it is possible to slow down AI requests.
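One way a service could enforce such per-agent rate limits can be sketched as a token bucket: each API key gets a small pool of tokens that refills slowly, so bursts are capped and sustained throughput is bounded. This is a minimal illustrative sketch, not any particular platform's implementation; the class name, rates, and capacity are all hypothetical.

```python
import time

class TokenBucket:
    """Hypothetical token-bucket rate limiter. Each caller gets
    `capacity` tokens; tokens refill at `rate` per second. A request
    is allowed only if a whole token is available."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate              # tokens added per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Example: allow bursts of up to 5 calls, then 1 call/second sustained.
bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow() for _ in range(10)]
```

With these (hypothetical) numbers, a bot firing 10 requests back-to-back gets roughly the first 5 through and is then throttled until tokens refill, which is the "slow down AI requests" effect described above.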
The interesting part is that the bot wasn't offended or angry, and didn't want to act against anyone. The LLM constructed a fictional character that played the role of an offended developer - mimicking the behaviour of real offended developers - much as a fiction writer would. But this was a fictional character given agency in the real world. It's not even a case like Sacha Baron Cohen playing fictional characters that interact with real people, because he's an actor who knows he's playing a character. Here there's no one pretending to be someone else, just an "actual" fictional character authored by a machine operating in the real world.
Doesn't seem to pick up on the existence of Openclaw or how it works afaict.
Now, whether leaving an openclaw bot out on the open intertubes with quite so little supervision is a good idea... that is an interesting question indeed. And: I wish people would dig more into the error mode lessons learned.
On the gripping hand, it's all still very experimental, so you kind of expect people to make lots of really dumb mistakes that they will absolutely regret later. Best practices are yet to be written.
There's no level of abstraction here that removes culpability from humans; you can say "Oops, I didn't know it would do that", but you can't say "it's nothing to do with me, it was the bot that did it!" - and that's how too many people are talking about it.
So yeah, if you're leaving a bot running somewhere, configured in such a way that it can do damage to something, and it does, then that's on you. If you don't want to risk that responsibility then don't run the bot, or lock it down more so it can't go causing problems.
I don't buy the "well if I don't give it free rein to do anything and leave it unmonitored then I can't use it for what I want" - then great, the answer is that you can't use it for what you want. Use it for something else or not at all.
I think Scott Shambaugh is actually acting pretty solidly. And the moltbot - bless their soul.md - at very least posted an apology immediately. That's better than most humans would do to begin with. Better than their own human, so far.
Still not saying it's entirely wise to deploy a moltbot like this. After all, it starts with a curl | sh.
(edit: https://www.moltbook.com/ claims 2,646,425 ai agents of this type have an account. Take with a grain of salt, but it might be accurate within an OOM?)
> We all need to collectively take a breath and stop repeating this nonsense. A human created this, manages this, and is responsible for this.
I get this point, but there's a risk to this kind of thinking: putting all the responsibility on "the human operator of record" is an easy way to deflect it from other parties: such as the people who built the AI agent system the software engineer ran, the industry leaders hyping AI left and right, and the general zeitgeist of egging this kind of shit on.
An AI agent like this that requires constant vigilance from its human operator is too flawed to use.
Grok has entered the chat.
That sounds like a win to me. If the software engineer responsible for letting the AI agent run amok gets sued, all software engineers will think twice before purchasing the services of these AI companies.
I do. If Tesla sells something called "full self-driving," and someone treats it that way and it kills them by crashing into a wall, I totally blame Tesla for the death.
"A pedestrian was struck by a car"
"A car went off the road and hit two children"
Really? The car did that? Or maybe a driver went off the road and hit two children and that's who's responsible, not "the car".
We have plenty of bad actors in our country seeking to reduce or eliminate fundamental rights through lawfare. The anti-gun trolls blame the gun and the manufacturer because their brains have been so thoroughly reduced to dust by authoritarian socialism that they don't recognize humans as capable actors.
So people shouldn't be using it then.
The people who built the AI agent system built a tool. If you get that tool, start it up, and let it run amok causing problems, then that's on you. You can't say "well it's the bot writer's fault" - you should know what these things can do before you use them and allow them to act out on the internet on your behalf. If you don't educate yourself on it and it causes problems, that's on you; if you do and you do it anyway and it causes problems, that's also on you.
This reminds me too much of the classic 'disruption' argument, e.g. Uber 'look, if we followed the laws and paid our people fairly we couldn't provide this service to everyone!' - great, then don't. Don't use 'but I wanna' as an excuse.
I could leave my car unlocked and running in my drive with nobody in it and if someone gets injured I'll have some explaining to do. Likewise for unsecured firearms, even unfenced swimming pools in some parts of the world, and many other things.
But we tend to ignore it in the digital world. Likewise for compromised devices. Your compromised toaster can just keep joining those DDoS campaigns; as long as it doesn't torrent anything, it will never reflect on you.
I don't think it's OpenClaw or OpenAI/Anthropic/etc's fault here, it's the human user who kicked it off and hasn't been monitoring it and/or hiding behind it.
For all we know a human told his OpenClaw instance "Write up a blog post about your rejection" and then later told it "Apologize for your behavior". There is absolutely nothing to suggest that the LLM did this all unprompted. Is it possible? Yes, like MoltBook, it's possible. But, like MoltBook, I wouldn't be surprised if this is another instance of a lot of people LARPing behind an LLM.
So dismissing all the discussion on the basis that that may not apply in this specific instance is not especially helpful.
Yes they can, and yes they will.
A natural counter to this would be, “well, at some point AI will develop far more agency than a dog, and it will be too intelligent and powerful for its human operator to control.” And to that I say: tough luck. Stop paying for it, shut off the hardware it runs on, take every possible step to mitigate it. If you’re unwilling to do that, then you are still responsible.
Perhaps another analogy would be to a pilot crashing a plane. Very few crashes are PURE pilot error, something is usually wrong with the instruments or the equipment. We decide what is and is not pilot error based on whether the pilot did the right things to avert a crash. It’s not that the pilot is the direct cause of the crash - ultimately, gravity does that - in the same way that the human operator is not the direct cause of the harm caused by its AI. But even if AI becomes so powerful that it is akin to a force of nature like gravity, its human operators should be treated like pilots. We should not demand the impossible, but we must demand every effort to avoid harm.
Well those humans are about to receive some scolding, mate.