I wonder whether, if code becomes so complex that humans can no longer understand it, there might be a way to fool LLMs into creating backdoors in the software.
For example, an open-source LLM could be produced, adopted everywhere, and subtly insert malicious code. I'm not saying this is happening now, but it could happen.
Then, when AGI comes along, the problem would shift to understanding the motivations of the AI and how well they align with human ethics.