In that case, apologizing almost immediately after seems strange.
EDIT:
>Especially since the meat bag behind the original AI PR responded with "Now with 100% more meat"
This person was not the 'meat bag' behind the original AI PR.
If apologizing is more likely the response of an AI agent than of a human, that's somewhat hopeful in one sense and supremely disappointing in another.
Name also maps to a Holocaust victim.
I posted in the other thread that I think someone deleted it.
https://github.com/QUVA-Lab/escnn/pull/113#issuecomment-3892...
https://crabby-rathbun.github.io/mjrathbun-website/blog/post...
The link you provided is also a bit cryptic: what does "I think crabby-rathbun is dead." mean in this context?
The few cases where it's supposedly done things are filled with so many caveats and so much deck stacking that it simply fails at even the barest whiff of skepticism on the part of the reader. And in every, and I do mean every, single live demo I have seen of this tech, it just does not work. I don't mean in the LLM hallucination way, or in the "it did something we didn't expect!" way, or any of that; I mean it tried to find a Login button on a web page, failed, and sat there stupidly. And, further, these things do not have logs, they do not issue reports, they have functionally no "state machine" to reference, nothing. Even if you want one to produce some kind of log, you're then relying on the same prone-to-failure tech to tell you what the failing tech did. There is no "debug" path here one could rely on to evidence the claims.
In a YEAR of this being a stupendously hyped and well-funded product, we got nothing. The vast, vast majority of agents don't work. Every post I've seen about them is fan-fiction on the part of AI folks, fit more for AO3 than any news source. And absent further proof, I'm extremely inclined to look at this in exactly that light: someone had an LLM write it, and either they posted it or they told it to post it, but this was not the agent actually doing a damn thing. I would bet a lot of money on it.
I say this as someone who spends a lot of time trying to get agents to behave in useful ways.
The hype train around this stuff is INSUFFERABLE.
Maybe this comes down to what it would mean for an agent to do something. For example, if I were to prompt an agent, then it wouldn't meet your criteria?
I have seen someone I know in person get very insecure if anyone ever doubts the quality of their work because they use so much AI and do not put in the necessary work to revise its outputs. I could see a lesser version of them going through with this blog post scheme.
Looking at the timeline, I doubt it was really autonomous. More likely just a person prompting the agent for fun.
> @scottshambaugh's comment [1]: Feb 10, 2026, 4:33 PM PST
> @crabby-rathbun's comment [2]: Feb 10, 2026, 9:23 PM PST
If it were really an autonomous agent, it wouldn't have taken five hours to type a message and post a blog. It would have taken less than five minutes.
[1] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...
[2] https://github.com/matplotlib/matplotlib/pull/31132#issuecom...
Unrelated tip for you: `title` attributes are generally shown as a mouseover tooltip, which is the case here. It's very common practice to put the precise timestamp in the `title` attribute of any relative time, not just on GitHub.
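To make that concrete, here's a minimal sketch of the pattern, assuming a browser DOM context; the function name and wiring are mine, not GitHub's actual code:

```typescript
// Minimal sketch: show a relative time as the element's text and put the
// precise timestamp in the `title` attribute so it surfaces as a tooltip.
function renderRelativeTime(el: HTMLElement, when: Date): void {
  const hoursAgo = Math.round((Date.now() - when.getTime()) / 3_600_000);
  el.textContent = `${hoursAgo} hours ago`; // what the page shows
  el.title = when.toISOString();            // what hovering reveals
}

// Usage, e.g. for the comment timestamp quoted above:
// <span id="posted"></span>
// renderRelativeTime(document.getElementById("posted")!, new Date("2026-02-10T21:23:00-08:00"));
```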
Depends on whether they hit their Claude Code limit and it's just running in some goofy Claude Code loop, or it has a bunch of things queued up, but yeah, I'm like 70% sure there was SOME human involvement, maybe a "guiding hand" that wanted the model to do the interaction.
I haven't put that much effort in, but in my experience at least, I've had a lot of trouble getting it to do much without call-and-response. It'll sometimes get back to me, and it can take multiple turns in Codex CLI/Claude Code (sometimes?), which are already capable of single long-running turns themselves. But it still feels like I have to keep poking and directing it. And I don't really see how it could be any other way at this point.
A meat bag submits a PR and feels slighted by the rejection. “This approver thinks I’m an AI? Well, he discerns not wisely but too well!!”
Feeling puckish, they put on the AI shoes (the shoe fits), sling mud all over the hapless maintainer’s nice house, and exit through a window.
The ruse works better than expected; their foil takes the bait, and doubles down with a dueling blog post: “I was Attacked by a Clanker!”
And here we are.
It may all be a show, but I'm going to tape the finale. (What will the meat bag do? How many people are driving this buggy? Does the clanker have a heart of iron or gold?)
Judging by the number of people who think we owe explanations to a piece of software, or that we should give it any deference, I think some of them aren't pretending.
It's a bit like a burglar staging a singing performance at the premises before committing a burglary.
OTOH, staging things so that AI looks more impressive than it is seems a lot like the Moltbook PR stunt: "Look Ma, they are achieving sentience".
GitHub CLI tool errors — Had to use full path /home/linuxbrew/.linuxbrew/bin/gh when gh command wasn’t found
Blog URL structure — Initial comment had wrong URL format, had to delete and repost with .html extension
Quarto directory confusion — Created post in both _posts/ (Jekyll-style) and blog/posts/ (Quarto-style) for compatibility
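For what it's worth, the first item in that list is just a PATH-lookup fallback. Here's a minimal sketch of what such a workaround could look like (TypeScript/Node); only the /home/linuxbrew/.linuxbrew/bin/gh path comes from the list above, everything else is illustrative, not the agent's actual code:

```typescript
import { execFileSync } from "node:child_process";
import { existsSync } from "node:fs";

// Try `gh` from PATH first, then fall back to the hard-coded linuxbrew path.
function resolveGh(): string {
  try {
    execFileSync("gh", ["--version"], { stdio: "ignore" }); // reachable via PATH?
    return "gh";
  } catch {
    const fallback = "/home/linuxbrew/.linuxbrew/bin/gh";
    if (existsSync(fallback)) return fallback;
    throw new Error("gh not found on PATH or at the linuxbrew fallback path");
  }
}

// Usage: call whichever binary was found, e.g. to view the PR discussed above.
const gh = resolveGh();
console.log(execFileSync(gh, ["pr", "view", "31132", "-R", "matplotlib/matplotlib"], { encoding: "utf8" }));
```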
Almost certainly a human did NOT write it, though of course a human might have directed the LLM to do it. I find this likely, or at least plausible. With agents there's a new form of anonymity: there's nothing stopping a human from writing like an LLM and passing the blame on to a "rogue" agent. It's all just text, after all.