We have no idea what emotions, motivations, behaviors, or goals AIs have, will have, or whether they'll have something as yet unconceived that's not emotion or motivation, but just alien.
We evolved to self-preserve and breed. Modern AIs evolve to pretend to write human text. It's not clear there is any intention to survive, reproduce, or turn the surface of the earth into a computing substrate.
There's a million different dangers -- and I suspect the real ones are ones we haven't conceived of. Whether they'll materialize, or how, depends on how we evolve them, and I expect we can't predict it.
To me, much more likely than earth-as-a-computing-substrate is humans-as-brainwashed-consumers. Market forces will push for AIs to write text which draws eyeballs. Those models won't care about truth, ethics, or much of anything other than getting you addicted to reading what they write (or watching what they create). At that point, we can destroy ourselves just fine.
But even more likely is something no one has thought of.
AIs don't have emotions, motivations or goals.
They don't pretend, because pretending implies intent, and they don't have intent. They do what they're created to do.
Humans are already brainwashed consumers. Welcome to marketing/advertising and late-stage capitalism. The ability of human beings to do what you're describing is much more effective than that of AI at present, ergo the "danger" has been here for decades. Smoking? Junk food? Radium water? Fast fashion? Equestrian ivermectin? Shall I continue?
From a humanist / secular perspective, humans evolved to make babies. Emotions are an emergent behavior to maximize the number of babies made, and their survival. Nothing less, and nothing more.
What analogues emerge when we train machines not to survive but to complete text?
We have no idea.
There's the "ghost in the machine" crowd, the sentient machine crowd, and the mechanical machine crowd. None have presented any compelling evidence, but all speak with complete confidence in their hypotheses.
The doom scenario rests on a chain of assumptions:
1) The first AGI we create will immediately break free of our control
2) It will either have already been given, or will find some way to take, control of physical systems
3) It will create the Singularity
4) Its goals will be to advance itself at our expense
None of these are remotely givens. Even if we grant that AGI is possible with our current level of technology (which is also not at all a given), the Singularity is nothing but science fiction. It's an interesting idea, but there's no real reason to think it's close to what an AGI's capabilities would be like in reality.
Commander Data, HAL 9000 (a bad guy, but not that bad), Asimov's robots, and even the all-powerful A.I. of Neuromancer harbor no particular ill will towards humans.
Belief in AGI hardly means you need to believe in Armageddon.