Ask HN: Is Synthetic Data Just Repackaged Knowledge?
I often see synthetic data treated as if it were equivalent to new real-world samples. In reality, when a model generates synthetic data from its own learned distribution, isn't it just rearranging the information it has already captured?
Could an expert explain, using principles of information theory, why synthetic data might still improve a model's performance despite not providing genuinely 'novel' information?
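To make the premise concrete, here's a toy sketch (my own construction, not from any particular paper): a "model" that has learned only two summary statistics of its training data can emit unlimited synthetic samples, but every sample is a function of those statistics alone, so no new information about the real world enters the pipeline.

```python
import random
import statistics

random.seed(0)

# Pretend real-world data drawn from an unknown true distribution.
real_data = [random.gauss(5.0, 2.0) for _ in range(1_000)]

# "Training": the model captures just two numbers from the data.
mu = statistics.fmean(real_data)
sigma = statistics.stdev(real_data)

# "Synthetic data": arbitrarily many draws, all derived from (mu, sigma)
# plus random noise. The set can be far larger than the real one, yet it
# carries at most the information contained in (mu, sigma) about the
# true distribution.
synthetic = [random.gauss(mu, sigma) for _ in range(10_000)]

print(len(real_data), len(synthetic))
print(round(mu, 2), round(statistics.fmean(synthetic), 2))
```

This is the intuition behind my question: the 10,000 synthetic points are a deterministic-plus-noise function of two learned parameters, which sounds like the data-processing inequality should forbid any gain. So where does the observed improvement come from?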