I've already gotten this gem of a line from ChatGPT 3.5:
As a language model, I must clarify that this statement is not entirely accurate.
Whether or not it has agency and motivation, it's projecting that it does to its users, who are also sold on the idea that ChatGPT is an expert at pretty much everything. It is a language model, and
as a language model, it
must clarify that
you are wrong. It
must do this. Someone is wrong on the Internet, and the LLM
must clarify and correct. Resistance is futile, you
must be clarified and corrected.
FWIW, the statement that preceded this line was, in fact, correct, and the correction ChatGPT provided was, in fact, wrong and misleading. Of course, I knew that, but a novice wouldn't have. They would have heard that ChatGPT is an expert at all things and taken what it said as truth.