Also:
> some people will want to work the way she spells out, especially earlier in their career
If you're going to be insulting by implying that only newbies should be cautious about AI preventing them from learning, be explicit about it.
I disagree with you that incident responders learn best by e.g. groveling through OpenSearch clusters themselves. In fact, I think the opposite is true: LLM agents do interesting things that humans don't think to do, and can put more hypotheses on the table for incident responders to consider, faster, rather than the ordinary process of rabbitholing serially down individual hypotheses, 20-30 minutes at a time, never seeing the forest for the trees.
I think the same thing is probably true of things like "dumping complicated iproute2 routing table configurations" or "inspecting current DNS state". I know it to be the case for LVM2 debugging†!
Note that these are all active investigation steps, that involve the LLM agent actually doing stuff, but none of it is plausibly destructive.
† Albeit tediously, with me shuttling things to and from an LLM rather than an agent doing things; this sucks, but we haven't solved the security issues yet.
Consider, by way of example, the classic problem of teaching someone to find information. If someone asks "how do I X" and you answer "by doing Y", they have learned one thing (and will hopefully retain it). If someone asks "how do I X" and you answer "here's the search I did to find the answer of Y", they have now learned two things, and one of them reinforces a critical skill they should be using throughout their career.
I am not suggesting that incident response should be done entirely by hand, or that there's zero place for AI. AI is somewhat good at, for instance, looking at a huge amount of information at once and pointing towards things that might warrant a closer look. I'm nonetheless agreeing with the point that the human should be in the loop to a large degree.
That also partly addresses the fundamental security problems of letting AI run commands in production, though in practice I do think it likely that people will run commands presented to them without careful checking.
> none of it is plausibly destructive
In theory, you could have a safelist of ways to gather information non-destructively. In practice, it would not surprise me at all if people don't build one. I think it's very likely that many people will deploy AI tools in production without solving any of the security issues, and incidents will result.
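To make the safelist idea concrete, here's a minimal sketch of what such a check might look like. The specific commands and subcommands listed are illustrative assumptions, not a vetted inventory of what's safe in any real environment:

```python
import shlex

# Illustrative safelist: binaries considered non-destructive, optionally
# restricted to specific read-only subcommands. None means any arguments.
SAFELIST = {
    "ip": {"addr", "route", "link"},       # e.g. `ip route show`
    "dig": None,                           # DNS queries are read-only
    "lvs": None,                           # LVM reporting, no changes
    "kubectl": {"get", "describe", "logs"},
}

def is_safelisted(command: str) -> bool:
    """Return True only if the command's binary (and, where restricted,
    its first subcommand) appear on the safelist."""
    parts = shlex.split(command)
    if not parts:
        return False
    binary, *rest = parts
    if binary not in SAFELIST:
        return False
    allowed = SAFELIST[binary]
    if allowed is None:
        return True
    return bool(rest) and rest[0] in allowed

# An agent would auto-run only commands that pass this check;
# anything else gets surfaced to the human for approval.
```

Even this sketch shows why "in practice people won't" is plausible: you have to enumerate every tool, decide which subcommands are truly read-only, and worry about shell metacharacters, and it's easier to just let the agent run things.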
I am all for the concept of having a giant dashboard that collects and presents any non-destructive information rapidly. That tool is useful for a human, too. (Along with presenting the commands that were used to obtain that information.)
The author's saying great products don't come from solo devs. Linux? Dropbox? Gmail? Ruby on Rails? Python? The list is literally endless.
But the author then claims that all great products come from committee? I've seen plenty of products die by committee. I've never seen one made by it.
Their initial argument is seriously flawed, and not at all defensible. It doesn't match reality.
Ruby on Rails? Are we talking about the Ruby part (Matz) or the Rails part (DHH)?
Dropbox was founded by Drew Houston and Arash Ferdowsi. The initial Gmail development team had multiple people plus the infrastructure and resources of Google. I'm not sure why people love the lone genius story so much, but it's definitely the exception and not the rule.