> this might not be a problem that is solvable even with more sophisticated intelligence
At some level you're probably right. I see prompt injection as more like phishing than "injection". And in that vein, people fall for phishing every day. Even highly trained people. And, rarely, even highly capable and credentialed security experts.
"llm phishing" is a much better way to think about this than prompt injection. I'm going to start using that and your reasoning when trying to communicate this to staff in my company's security practice.