I hear you. I believe you are wrong.
> it really is the standard in any other field, is proving safe coexistence to a reasonable standard
No, it isn't. It often becomes the standard after the fact, but pretty much no human invention went through a committee first. Can you provide some counter-examples? Did the Wright brothers prove flight was safe before they got on the first plane? Did the inventors of CRISPR "prove" it was safe? Or human cloning? Or nuclear fission? Your argument rests on the mistakes humans made in the past and the outsized consequences of making the same kinds of mistakes with AI. So your argument must really be: we have to do things differently this time, because the stakes are higher.
These are old and boring arguments. I've been watching the LessWrong space since it was Overcoming Bias (and, frankly, from before that). I've heard all of the arguments they have to make.
But this discussion was about inevitability and how to respond to it. The person I replied to suggested that it was a mistake to see the future as something that merely happens to us; it was a call to agency. I was pointing out that not all agency is equal, and that hubris can lead us to actions that are not productive.
Fear, just like hubris, can also lead us to unproductive actions. But perhaps we should just move on from this discussion.