This is quite possible. Indeed, I don't believe this is exclusive to superintelligence or requires it at all. Compare it to the closest thing we have to "inventing AGI": having babies. People do that all the time, and there's no mathematical guarantee that a baby won't end humanity, but we don't do much to stop it, and that's not considered a problem. Mainly: why would it want to?
https://twitter.com/thejadedguy/status/844352570470645760?la...
I don't think superintelligence even gives them much of an advantage if they wanted to. Being able to imagine a virus real good doesn't actually have much to do with the ability to create one, since plans tend to fail for surprising reasons in the real world once you start trying to follow them. Unless you define superintelligence as "it's right about everything all the time" — but that's a magical power, not something we can invent.
> How exactly is "perpetual motion machines can't exist" related to this?
It wouldn't be able to do the particular kind of ending humanity where you turn them all into paperclips, though it could do other things. There are plenty of ways to do it that increase entropy rather than require reducing it; nuclear winter is one.
The anthropomorphism is misleading. No one expects that an AGI would "want to" in the commonplace sense of being motivated by animosity, fear, or desire. The problem is that the best path to satisfying its reward function could have adverse-to-extinction-level consequences for humanity, because alignment is hard, or maybe impossible.
Strictly speaking, we can limit that to people who rearrange their lives around reacting to the possibility, even in its sillier (yet not disprovable) forms like Roko's Basilisk.
People who believe that having a lot of "intelligence" means you can actually do anything you intend to do, no matter what that thing is, also come close to it, because both views involve constructing a perfect being in their minds. But anyone can fall into that. I'd guess it comes from assuming that since an AGI would be a computer plus a human, it gets all the traits of humans (intelligence and motivation) plus those of computer programs (predictable execution, no emotions or boredom). It doesn't seem like that follows, though: boredom might be needed for online learning, which is needed to be an independent agent, and that might limit them to human-level executive function.
The chance of dumb civilization-ending mistakes like nuclear war seems higher than that of smart civilization-ending mistakes like gray goo, and they can't reliably be defended against, so as a research direction I suggest finding a way to restore humanity from backup. (https://scp-wiki.wikidot.com/scp-2000)