Even without all that, the agent would still need self-protection mechanisms that are themselves capable of causing harm.
The scenario you suggest is so unlikely, given all the protections that would be in place, that it could only succeed if someone deliberately set out to make LLMs behave maliciously. At the end of the day, it comes back to people and their goals.