Sure, that particular prompt is ambiguous. Feel free to imagine it to be more of an informational question, even one asking for just yes/no.
However, when people talk about the "critical flaw" in LLMs, of which this "tool shadowing" attack is one example, they mean that LLMs cannot differentiate between text that is supposed to give them instructions and text that is supposed to be just reference material.
Concretely: today, you ask an LLM "when was Elvis born", and something in your MCP stack might be poisoning the LLM's context window, causing another MCP tool to leak your SSH keys. I don't think you can argue that the user intended that.