A brief moment of thought should reveal that, even if you think the scenario likely, there are infinitely many potential equivalent basilisks and you'd need to pick the correct one.
I'm less worried about Roko's basilisk*, and rather more worried about the people who say this:
I think you have said in fact, and I'm gonna quote, development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. End quote. You may have had in mind the effect on, on jobs, which is really my biggest nightmare in the long term.
- https://www.techpolicy.press/transcript-senate-judiciary-sub...

Because that response is clearly not taking the words at face value; you should either dig in and ask "so why should we allow it at all, then?" or dismiss it with "I think you're making stuff up, why should we believe you about anything?", not misread such a blunt statement.
(If you follow the link, Altman's response is… not one I find satisfying).
* despite the people who do take it seriously; such personalities have always been around and seldom cause big problems by themselves. They only become a danger if AI gets competent enough to help them act on it, but by that point it's hopefully also competent enough to help everyone else stop them