The "system" isn't a single thing; it's a collection of running apps, some on servers, others on consumer hardware. And the parts that run on consumer hardware will stick around even if 99% of the current hyped-up ecosystem dies overnight; people won't suddenly stop trying to run these things locally.
I get the general "too many variables" argument, but the idea that humans would have no means of stopping any of these apps/systems/algorithms if they got "out of control" (a farce in itself, since it's a chatbot) is ridiculous.
It's very interesting to see how badly people want to be living in a sci-fi flick as active participants. I think that's far more concerning than the AI itself.
Yes. Look at how much trouble we have now with distributed denial of service attacks.
Go re-read "Daemon" (2006) and "Freedom™" by Daniel Suarez. The AI in those books is dumber than what we have now.
LLM code already runs on millions of servers and other devices: thousands of racks, hundreds of data centers, distributed across the globe under dozens of different governments. The open-source models are globally distributed and effectively impossible to delete, and the underlying math is published for anyone to read.
The power switch is still king, even if it's millions of power switches versus one.