It isn’t pointless.
The author cited research that demonstrates that model collapse can happen on a small scale.
The author also cited sources indicating that an ever-larger share of the web will be written by language models.
There are already studies showing that LLM-generated text is less diverse than human-written text:
https://techxplore.com/news/2026-03-llms-creativity-ai-respo...
https://arxiv.org/html/2501.19361
The studies don’t show that the lack of diversity in LLM output is caused by model collapse, or that the problem is getting worse.
But we do know that 1) LLMs produce less diverse text than humans, and 2) training on synthetic data can cause model collapse.
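The mechanism is easy to see in a toy setting: each generation trains on a finite sample of the previous generation's output, so sampling noise steadily erodes the tails of the distribution. A minimal sketch (all parameters invented for illustration, with "training" reduced to fitting a 1-D Gaussian):

```python
import numpy as np

# Toy model collapse: each "model" is a Gaussian fit to a finite sample
# drawn from the previous model. Diversity (the std) decays over generations.
rng = np.random.default_rng(0)

def train_generation(samples):
    # "Training" here is just a max-likelihood Gaussian fit (hypothetical stand-in).
    return samples.mean(), samples.std()

n_samples = 50          # each generation only sees a finite sample
mu, sigma = 0.0, 1.0    # generation 0: the "human" data distribution
initial_sigma = sigma

for generation in range(1000):
    synthetic = rng.normal(mu, sigma, size=n_samples)  # model's own output
    mu, sigma = train_generation(synthetic)            # next model trains on it

print(f"std after 1000 generations: {sigma:.4f} (started at {initial_sigma})")
```

The finite-sample variance estimate is biased low on average, and the error compounds across generations, so the fitted std drifts toward zero even though no single step looks catastrophic. Real pipelines mix in human data, which slows but doesn't necessarily eliminate the effect.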