>> "This isn't the same as experiencing the emotion and then finding a way to express it."
> How is it different?
I have two answers: one technical, the other philosophical.
The technical question: is emotion an emergent property that also impacts behavior, or is an emotional reaction merely being mimicked in the outputs?
There's a substantial practical difference between the two, from a purely engineering perspective.
Mimicking emotion in a chatbot seems much easier than building a program with an internal state like "emotion" that actually impacts the program's behavior.
You'd expect to see the difference in practice, even in marginally useful parlor-trick programs like chatbots. You might observe absurd breakdowns between the emotional response and the rest of the behavior: a chatbot printing to STDOUT how painful some task is while carrying on with it unencumbered, or printing how joyful a task is and then abandoning it anyway; or printing that a user makes it sad, then handling that user's next requests as if everything were fine. And so on.
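To make the distinction concrete, here's a toy sketch (every name in it is mine and purely hypothetical): in one design the emotion lives only in the printed text, in the other it's internal state the rest of the program actually consults.

```python
class MimicBot:
    """Emotion appears only in the printed text; behavior never reads it."""
    def handle(self, task: str) -> str:
        print("Ugh, this is so painful to do...")  # emotional *output* only
        return f"done: {task}"                     # ...behavior unchanged

class AffectBot:
    """Emotion is a state variable that later behavior actually consults."""
    def __init__(self) -> None:
        self.distress = 0.0  # toy internal 'emotion' signal

    def handle(self, task: str) -> str:
        self.distress += 0.4           # unpleasant tasks accumulate distress
        if self.distress > 1.0:
            return f"refused: {task}"  # state feeds back into behavior
        return f"done: {task}"

bot = AffectBot()
for task in ["a", "b", "c"]:
    print(bot.handle(task))  # third request gets refused; MimicBot never would
```

MimicBot never changes course no matter what it prints. AffectBot refuses the third request because the internal signal crossed a threshold, which is exactly the coupling the mimic lacks.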
You'd also expect mimicry to be of limited utility, whereas an actual emotional signal might be useful, for example, in an RL loop.
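A hedged sketch of that RL point, assuming a toy two-armed bandit (nothing here is a real library or a standard named algorithm): an internal "mood" signal, a leaky average of recent rewards, modulates how much the agent explores.

```python
import random

def run(steps: int = 1000) -> list[float]:
    q = [0.0, 0.0]  # value estimates for two bandit arms
    mood = 0.0      # internal affect: leaky average of recent reward
    for _ in range(steps):
        # Low mood -> explore more. A mimicked emotion (text only) has no
        # handle here; only a real internal signal can steer the policy.
        epsilon = 0.5 if mood < 0 else 0.05
        if random.random() < epsilon:
            arm = random.randrange(2)                # explore
        else:
            arm = max(range(2), key=lambda a: q[a])  # exploit
        reward = random.gauss(0.7 if arm == 1 else 0.2, 1.0)
        mood = 0.9 * mood + 0.1 * reward   # update the 'emotion'
        q[arm] += 0.1 * (reward - q[arm])  # incremental value estimate
    return q

print(run())  # q[1] should converge near 0.7 if the loop works
```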
From a philosophical perspective, there is an obvious difference. I don't give a fuck if a GPU is sad, but I do care if a human is sad. Just like 99.99999% of humanity. The opposite view -- that machine "emotions" are anything even remotely ontologically or morally similar to human emotions -- is extremely fringe.
Humanism isn't a logical fallacy. Or if it is, you'll never convince people otherwise. Most people eat meat from animals that are waaaay ahead, cognitively, of anything AI will achieve in our lifetimes.