Yes, training is where the model learns, not prompting. Prompting might be considered a kind of meta-learning, but it always needs a reference point drawn from the training data, and beyond the prompt the original model's weights are never altered.
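To make the distinction concrete, here's a minimal toy sketch (a hypothetical one-parameter model, not any real LLM API): inference with a prompt only reads the weights, while a training step actually changes them.

```python
def forward(w, x):
    # "inference"/prompting: uses the weight, never modifies it
    return w * x

def train_step(w, x, target, lr=0.1):
    # one gradient-descent step on squared error: this is where learning happens
    pred = forward(w, x)
    grad = 2 * (pred - target) * x
    return w - lr * grad

w = 0.5
before = w
_ = forward(w, 3.0)          # prompting: w is unchanged afterward
assert w == before

w = train_step(w, 3.0, 6.0)  # training: w is updated toward the target
assert w != before
```

However clever the prompt, only the second call moves the weights; everything the forward pass "knows" was already put there by training.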
Eh, one could argue this is similar to the short-term/long-term memory divide in humans. We tend to suck at new things until we sleep on it and update our weights...