Yes, but it didn't exist during training. Nothing in the training data would provide pre-existing content for the model to reproduce, so the output would necessarily be new.
> But if you take the time to explain each concept....
By the argument you presented, nothing a human does is new either, because it is all built on our pre-existing learned rules of language, reasoning, and other subjects.
See the problem here? You're setting a bar for LLMs that nobody would reasonably apply to humans, not least because if you did, then "accusing" LLMs of the same thing would not distinguish them from humans in any way.
If that is the bar you wish to use, then for this discussion to have any point, you will need to give a definition of what it means to create something new that can be objectively measured, that a human can meet, and that you believe an LLM cannot even in theory meet. Otherwise the goalposts will keep moving every time an LLM is shown to clear them.