Of course these models aren't dangerous - but that's becoming less and less certain as their capabilities develop. I'm not even sure how many more GPTs we have to go until we reach truly dangerous levels.
Imagine the Red Team for GPT-10 asks it to "convince me to help you take over the world" and it basilisks him successfully...
Seeing how the input for the commercial, plebeian version seems to filter out "problematic" sources, we can be sure that none of us will get access to something like that. Governments have lists of "questionable", divergent people and their writings, and they are free to train on them. I'd even go as far as to say that most of these systems have been live-tested, and that they were only released after years of steady QA and a lot of back and forth with the powers that be.
I feel like our jobs may be in danger - at the very least, you'll have teams overseas using the tool heavily to produce code that maybe isn't great quality, but who cares. We're not in danger in that way, though.
Law records what most people have expressed is desirable behavior in society.
Presumably the closely adjacent concept of knowledge encoded into DNA can be thought of in the same way, so that even under "the law of the jungle":
DNA records what forms can survive and replicate in a given environment.
... "AI takeover" is already the status quo.