For example, if one entity generated a large portion of the content, it could easily introduce bias into these comments to sway readers' opinions. Automated astroturfing.
It doesn't even have to be nefarious. Just imagine a thread where 90% of the comments are variations on the model's ideas, phrased slightly differently, and only 10% come from humans. Even if the AI responses are considered 'good', it gets overwhelming to the point that normal people have no reason to comment: the likelihood of a comment being read by another human, let alone being useful, drops to nothing. I might as well type this comment into a blank text file and close it without saving, since in the limit it would be seen by the same number of people.