Psychology actually recognizes different kinds of empathy.
(Before Tim Leary became the Acid Guru he did groundbreaking work in group psychotherapy, and in particular wrote a book about personality diagnosis based on group interactions. As a parlour game you can apply the same principles to personality diagnosis of the cultures which emerge around software languages and tools.)
So a falsifiable hypothesis might be:
Sociopaths have no empathy at all. Their perception that another person empathizes with them, or that their own empathy is recognized by the other, is essentially a random function of their internal dialogue and of whether they think they'll get a better cookie by stating that empathy exists when prompted by the experimenter. By comparison, people who score higher on some Empathy Scale (tm) report that empathy exists at a rate that differs from that of the sociopathic control group.
HTH. I don't claim that hypothesis personally; as stated, it's roughly what I was hoping to find. I'm just a programmer, though whether that means programming computers or minds is sometimes unclear.
"He who can destroy a thing has the real control of it."
Artificial systems hence show similarities to what common sense calls a psychopath.
Ugh. That's so much motivated reasoning it might start to fall into the "not even wrong" category. I have seen the same semantic trick before, used to declare - purely by playing with definitions, without any empirical observations - that animals cannot have emotions.
A lot of the vocabulary around consciousness and emotions is defined on top of human subjective experience, simply because that is the only experience we have access to and where we know for certain it exists. This means those terms are, strictly speaking, only applicable to humans and not to animals (or AI), because we have only ever defined them for humans. That's the core problem of the "What is it like to be a bat?" essay, and the reason we have words like "nociception" to describe neurological pain responses in animals without implying anything about a subjective experience of pain.
It's important to stress that none of this means we know animals don't feel pain, don't have emotions, or don't have conscious thought; it just means the terms become inapplicable to animals for formal reasons. However, the confusion between "we don't (and possibly can't ever) know" and "we know they don't" is often a very convenient one, especially if you want to inflict things on animals that would definitely cause pain and suffering if they had consciousness.
For animals, the situation has fortunately changed somewhat in recent decades, and more scientists are calling for the opposite assumption: that (many) animals do have consciousness - not a human one, but one comparable to humans in core aspects, such as the capacity to experience pain. (See the "Cambridge Declaration on Consciousness".)
I feel we're in similar danger with AI. I don't want to say that LLMs have consciousness - and we can be sure they don't have human consciousness; that's impossible given the way they work. However, the article confuses a lot of "don't know"/"not applicable" with "we know they don't" (and then brings a number of other terms into the mix that, paradoxically, would require human consciousness to even be applicable) in order to conclude something like psychopathy.
You don't have to buy into any and all AGI fantasies, but this is intellectually dishonest.
It doesn't claim that it's fundamentally impossible for an AI to feel pain or emotions, but rather that it doesn't genuinely empathize with a human's. A language model that has been tuned to produce empathetic-sounding responses has merely learned to mimic empathy without actually sharing the human experience.
I don't see motivated reasoning in that. I don't think I even see the supposed motivation; I don't think the article is primarily trying to argue against the possibility of sentient AI in the first place.
The article uses the fact that the term "empathy" isn't even well-defined for machines to conclude that machines cannot feel empathy. So far, so problematic - but then it turns around and suddenly treats machines like humans that cannot feel empathy, which we call psychopaths.
I can understand the context: it is indeed a borderline psychopathic decision to replace human care workers with machines, but that isn't caused by the machines. The reasoning here is insincere.