Why would you limit a model to being a brain in a vat? Instead, let the model out so people use it, then use the chat logs to fine-tune. A chat room is a kind of environment: there is a human, maybe some tools. The LLM's text elicits feedback, and right there is a learning signal.
Even without a human, if an LLM has access to code execution it can practice solving coding tasks with runtime feedback. There are many ways an LLM could obtain useful learning signals. After all, we got all our knowledge from the environment as well; in the end there is no other source of knowledge and skills.
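A minimal sketch of what that runtime-feedback loop could look like: sample candidate programs, execute them against unit tests, and turn the pass rate into a reward. Here `generate_candidates` is a hypothetical stand-in for an LLM sampling solutions, and real systems would sandbox the `exec` step; this only illustrates the shape of the signal, not any particular training method.

```python
def generate_candidates(task_prompt):
    # Hypothetical stand-in for sampling solutions from an LLM.
    return [
        "def add(a, b): return a - b",   # buggy attempt
        "def add(a, b): return a + b",   # correct attempt
    ]

def run_with_feedback(candidate_src, tests):
    """Execute candidate code and score it against unit tests.
    Returns a reward in [0, 1]: the fraction of tests passed."""
    namespace = {}
    try:
        exec(candidate_src, namespace)  # a real system would sandbox this
    except Exception:
        return 0.0
    passed = 0
    for args, expected in tests:
        try:
            if namespace["add"](*args) == expected:
                passed += 1
        except Exception:
            pass  # crashing on a test case earns no credit
    return passed / len(tests)

tests = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
rewards = [run_with_feedback(c, tests) for c in generate_candidates("write add")]
print(rewards)  # buggy candidate scores lower than the correct one
```

The rewards could then weight a fine-tuning update (as in rejection sampling or RL-style training), so the environment, not a human labeler, supplies the learning signal.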