I don't think that's true. When ChatGPT generates something that infringes (even on a work not in its training data), it's still infringement, and the user can't use the output for anything they couldn't use the original for.
Luckily, it doesn't do that often under normal use.