> some humans actually do write like these newer language models, after all
Wouldn't it be more accurate to say these newer language models actually write like humans? Or is there a subset of the population intentionally trying to write the way these language models write?
It seems to have a tendency to write stereotypical preamble-statement-conclusion paragraphs and to repeat itself. The model repeats itself often. It repeats the title and then writes a statement that basically repeats the title, and after that it provides the useful nugget. At the end it adds a sentence, usually using "overall" as an opener. Overall, the model tends to respond in a stereotypical format.
Sure, you can also say that some language models write like humans. However, even pre-GPT-2, I read several high-schoolers' essays that read very much like these ML-generated products, so even if you don't believe the relationship is symmetric by definition, I think you can say it holds in both directions.