It's made me very sad to see extremely smart people, whom I once looked up to and found genuinely interesting (you're included here!), fall victim to the irrational fear of AI spurred on by Eliezer. You can produce as much literature as you want, and you can construct as many hypothetical scenarios as you like, but it won't change the fact that dedicating your life to controlling AI so that you can "logically" justify your own existence is nonsense.
It's not that I don't get it. I just disagree. I don't think AI will create a science-fiction horror show that ends all of humanity. I don't think AI will cause humans to stop being human. I don't think AI will bring about the end of the world. Yes, I'm not 100% sure; maybe there's a 0.01% chance of all this happening. But that doesn't mean we need to pay attention to it just because multiplying 0.0001 by 1,000,000 yields a big number. I think basing your morality and existence on trolley problems and thought experiments is abhorrent. And no, I can't give you an exact, spelled-out reason why. Do I have to justify basic human morality?
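For what it's worth, the expected-value arithmetic being objected to is just this (the probability and the stake are the illustrative numbers above, not anyone's real estimates):

```python
# Sketch of the expected-value move the comment rejects: a tiny
# probability multiplied by an enormous hypothetical stake still
# yields a large product, which doomers then treat as decision-relevant.
p_doom = 0.0001        # the conceded 0.01% chance, as a fraction
stake = 1_000_000      # hypothetical units of "badness" at stake
expected_badness = p_doom * stake
print(expected_badness)  # roughly 100: "a big number" from a tiny probability
```

The objection in the text is not to the multiplication itself, which is trivially correct, but to letting that product dictate one's morality and life choices.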
You're an extremely smart person. I love your blog posts, and I love the depth and time you put into them. That's why it's so disappointing to see you become another "rationalist" obsessed with a fantasy of AI doom.
Edit for your edit: Someone wealthy, with nothing to lose, quitting his job to join the cult of AI doomerism is in no way proof that AI x-risk is meaningful. You're using someone else reaching the same conclusion as evidence that you are correct. I trust you're smart enough to see why that's awful logic.