I don't think we're that far away from that. All it would take is someone deciding to put an AI in charge of critical infrastructure and defense, or a series of oversights that lets an external AI take control of it.
Looking at the past year and all the unpredicted conclusions AIs have reached, self-awareness is probably not needed for an AI to treat humans as an obstacle to achieving some poorly specified goal.
The paperclip maximizer thought experiment [0] comes to mind...