The LessWrong-style AI risk is "AI becomes so superhuman that it is indistinguishable from God and decides to destroy all humans and we are completely powerless against its quasi-divine capabilities."
With the side note that, historically, humans have found themselves unable to distinguish a lot of things from God: thunderclouds, for example, and, more recently, toast.