The problem is that human intent will likely not be enough. Intent fails even among humans: we set goals and objectives for others that end up being accomplished by means we never intended.
This is acceptable among humans, because none of us is all-powerful and our feedback loops and decentralized decision-making let us correct course. That safeguard disappears once an AI system is given too much power over decisions, and alignment in that sense is simply not achievable.
My full thoughts on this topic - https://dakara.substack.com/p/ai-singularity-the-hubris-trap