Let's say that this machine breaks out and decides to take over the world to achieve whatever stupid task we've given it (I've yet to see an AI ethics paper that doesn't deal exclusively in stupid tasks). Do we really think the AI is going to figure out that it needs to stabilize the power grid before it starts exponentially drawing power to collect buttons, or whatever it is?
What is much more likely, in our experience, is that a basic AGI is going to do something really dumb. And then when we reboot it, it'll do something dumb 1000 more times. And maybe eventually it'll do something almost as smart as us. This is what we term childhood.
I think a core assumption of these ethics papers is that the AI is generally intelligent, that it is much more intelligent than anything plausible, and that it never needs to learn from experience. Let me tell you, I'm pretty generally intelligent, and I've failed to take over the world more times than I can count.
There are a couple of problems with your objection. For one, even in your own scenario the AI is just given "1000 more times" to keep learning, at which point, even with the weaknesses you believe it will have, it will presumably be more intelligent and capable than us. So your objection doesn't refute the ultimate conclusion I mentioned above; it just claims that there will be an additional phase before a genuinely dangerous AI exists.
A bigger problem with your objection is that your reasoning applies only to natural intelligence less than or equal to our own. It does not apply to artificial intelligence, which may behave very differently, and it does not apply to intelligence much greater than our own. GPT-3, for example, exceeded human ability in many respects the moment it finished training: what human can translate as accurately between as many languages, has as large a vocabulary, or writes as quickly? Why wouldn't a generally intelligent machine, on first use, be substantially more generally intelligent than a human?
Finally, I don't think objections like this one even meaningfully obstruct the ultimate conclusion: that AGI is fundamentally dangerous. Just imagine that the people controlling the AGI, assuming anyone can control it, are people other than your preferred controllers. If a team of computer scientists develops AGI, manages to perfectly control it, and takes it through the "childhood" period you imagine must exist, is that really any better? That team of researchers would be the new omnipotent rulers of humanity. Even without invoking nanotech or other exotic technology, we can imagine them simply automating robot soldiers and surveillance with their aligned AGI, leaving the rest of humanity powerless against them. And there is no reason to rule out exotic technology, which might empower the future rulers of humanity to unimaginable levels.
Not only does AGI need to be aligned with its operators; the operators need to be aligned with humanity, and neither of those seems plausible.
If you can multiply rapidly and self-organize, you can outcompete humanity.
Worst-case scenario, we can always find them and delete them?