That assumes the threat isn't the complete annihilation of humanity, which is precisely what's being claimed. That assumption is the weak link, and it's the one that should be attacked.
Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it the way we regulate weapons-grade plutonium.