I think a common issue in LLM discussions is a confusion between author and character. Much of this confusion is deliberately encouraged by the companies building these systems, in how they designed them.
The real-world LLM takes documents and makes them longer, while we humans are busy anthropomorphizing the fictional characters that appear in those documents. Our normal tendency to make-believe in characters from books is turbocharged when it's an interactive story, and we start to think that the choose-your-own-adventure character exists somewhere on the other side of the screen.
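To make "takes documents and makes them longer" concrete, here is a minimal sketch of the autoregressive loop at inference time. The `next_token` function below is a toy stand-in assumption; a real model would score an entire vocabulary with a neural network, but the outer loop, appending one token at a time conditioned on everything so far, is the same shape.

```python
def next_token(tokens: list[str]) -> str:
    """Toy stand-in for a language model: returns a canned continuation.
    (Hypothetical; a real LLM computes a probability distribution here.)"""
    canned = ["the", "story", "continues", "."]
    return canned[len(tokens) % len(canned)]

def extend_document(tokens: list[str], n_steps: int) -> list[str]:
    """Autoregressive loop: each new token is conditioned on all prior ones."""
    tokens = list(tokens)
    for _ in range(n_steps):
        tokens.append(next_token(tokens))
    return tokens

doc = ["Once", "upon", "a", "time"]
print(extend_document(doc, 4))
# → ['Once', 'upon', 'a', 'time', 'the', 'story', 'continues', '.']
```

Nothing in this loop contains a character or a mind; the "character" only appears when a human reads the extended document.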
> how is that different from "emulating human behavior?"
Suppose I created a program that generated stories with a Klingon character, and all the real humans agree it gives impressive output, with cohesive dialogue, understandable motivations, references to in-universe lore, etc.
It wouldn't be entirely wrong to say that the program has "emulated a Klingon", but it isn't quite right either: Can you emulate something that doesn't exist in the real world?
It may be better to say that my program has emulated a particular kind of output that we would normally get from a Star Trek writer.