> These doomsday scenarios seem to all assume that there is only a single superintelligence, or one whose capabilities are vastly ahead of all its peers
Many of these scenarios posit that this is all but guaranteed to happen the very first time we have an ASI smart enough to bootstrap itself into greater intelligence, unless we're nearly perfect in how we align its goals, so yes.
LLMs seem like the best-case world for avoiding this scenario, since improving them requires exponentially scaling resources compared to running inference. That said, it is by no means guaranteed that LLMs will remain the SotA approach for AI.
> “the best way to achieve my goals would be to remove humans”, it seems unlikely the other superintelligences would let it.
Why not? Seriously, why wouldn't the other superintelligences let it? There's no reason to assume that, by default, an ASI would be invested in the survival of the human race in a way we would prefer. More than likely, they would each be just as laser-focused on their own goals.
The whole point is that it's very difficult to design an AI that has "defending what humans think is right" as a terminal value. They basically all try to find loopholes. The only way to make safety its number-one priority is to dial it up so high that everyone complains it's a puritan - and if you dial it the other way, you get scenarios where it's telling schizophrenics to go off their meds because it's so agreeable.
Unless you're saying they would fight off the other ASIs' grabs for power because they want that power for themselves - but I fail to see how being stuck in a turf war between two ASIs is any improvement.