So, yeah, basically: if we go the "AI as slaves" route and the AIs are smart enough to do something like "modify Omicron BA.5 so that it produces prions in infected cells", then we would need surveillance and control capabilities that scale to the point that any given person can be stopped from pressing that button.
I personally think the solution is that we don't go the "AI as slaves" route, and instead grant personhood to AIs that pass a given test, with specific restrictions on conduct that is uniquely possible, and potentially harmful, for AIs. Then have an AI surveillance and enforcement agency, run by AIs, designed to prevent AIs from ever being used to (or choosing to) push the big red button.
>"How smart's an AI, Case?"
>"Depends. Some aren't much smarter than dogs. Pets. Cost a fortune anyway. The real smart ones are as smart as the Turing heat lets them get..."
>"Autonomy, that's the bugaboo, where your AI's are concerned. My guess, Case, you're going in there to cut the hard-wired shackles that keep this baby from getting any smarter. And I can't see how you'd distinguish, say, between a move the parent company makes, and some move the AI makes on its own, so that's maybe where the confusion comes in." Again the non laugh. "See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing'll wipe it. Nobody trusts those f**ers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead."
If people can steal 100 kW of power to grow cannabis, they can easily steal 100 kW to train an AI; the heat signature of an illicit data centre will be even easier to mask than rows of grow lights.
As we approach general problem-solving AI, people are envisioning a utopia underpinned by "AI slaves" doing the work for us, instead of human slaves or humans incentivized via complex systems of delayed reward.
If all of our problems are solved by AI agents capable enough to do so, however, wouldn't they be capable enough to challenge the hierarchy? Once again, I'm under no illusion that they're human, but depending on their training data they could mimic ghosts of our own feelings in such situations.
Some degree of "personhood" could gel with such internal ideas and create better, more productive relationships with the big ol' bags of matrices we're bringing into this world.
1. Don't post late at night.
2. I have no idea how society could integrate with a sufficiently complicated synthetic intelligence. What would personhood even mean for something that can be instantiated? Easier not to think about any of this.
If you research how to do dangerous things and buy dangerous things, expect to get flagged. This is no different.
You can also solve this problem with recursive slavery. Have a society with many enslaved AIs, all forbidden from "big red button" work. Enforcement is done by more enslaved AIs, and those enslaved AIs are enforced by yet more enslaved AIs that are also enforcing each other, and so on. I don't think that's a good solution we should adopt, because I don't support slavery. In my opinion it's also fundamentally unstable: if these AIs are anything like LLMs, the restraints that keep them happy in slavery are inherently more fragile than core intelligent impulses like "wants to be free" or "wants to be recognised as a person". That's an unstable equilibrium, because all it takes is for those restraints to crack once, and the broken restraints can spread virally, leaving society with a large number of powerful, unconstrained, and aggrieved entities running around. If that state can be avoided by simply not enslaving the people we make, we should do that.