It's still frequently identifiable in current-generation LLM text by the glossy superficiality that accompanies these usages. Take "It's not just X, it's Y": when a human writes this, it's because Y materially adds something not captured by X, but in LLM output X and Y tend to be near-synonyms, differing at most in intensity, so saying both adds nothing. Similarly, when I say "You're absolutely right," I'll clarify what the person is right about, whereas for the LLM it's just an empty affirmation.