In the same way, much of AI alignment consists of thinking through hypothetical failure modes of advanced AI systems and how to mitigate them. I think this paper ("Risks from Learned Optimization in Advanced Machine Learning Systems") is especially useful for understanding the technical background that motivates Eliezer's tweeting: https://arxiv.org/pdf/1906.01820.pdf
It seems to me that you should engage with the substance of your coworker's argument. Reading about something in science fiction doesn't prevent it from happening.