Strictly speaking, I'm not sure it does require learning if information representing the updated context is presented, though that depends on how you define learning. Something as simple as "You have tried this twice, and it's not working" is often enough to get even current LLMs to try something else.
That said, your second paragraph is one of the best and most succinct ways of pointing out why current LLMs aren't yet close to AGI, even though they sometimes feel like they've got the right idea.