Ok, that's straightforward, I just don't care for the idea that AI can do it or even help.
You might synthesize new knowledge.
When ChatGPT produces new output, it's not synthesizing new knowledge. It can't even reliably reproduce the knowledge it was trained on, since it lacks the ability to tag it in a trustworthy way.
It's not that it's always BS; it's that it's almost always BS, and unless you already know the answer in advance or can verify it independently, you can't distinguish it from anything else the model produces.