This is very untrue. Here is a list of psychotherapy modalities: https://en.wikipedia.org/wiki/List_of_psychotherapies. In most (almost all) modalities, the therapist provides an intervention and offers advice (by definition: guidance, recommendations).
There are Carl Rogers' client-centered therapy and non-directive supportive therapy, and that's it for low-intervention modalities off the top of my head. Two out of over a hundred. Hardly "most" at all.
https://en.m.wikipedia.org/wiki/Person-centered_therapy
That sounds an awful lot like what current gen AIs are capable of.
I believe we are in the very early stages of AI-assisted therapy, much like the early days of psychology itself. Before we understood what was generally acceptable and what was not, it was a Wild West, with medical practitioners employing harmful techniques such as lobotomy.
Because there are no standards on what constitutes an emotional support AI, or any agreed-upon expectations for one, we can only go by what it seems to be capable of. And it seems to be capable of talking intelligently and logically with deep empathy. A rubber ducky 2.0 that can organize your thoughts and even infer meaning from them on demand.
A human therapist will not, under any circumstances, tell you that "yes, you are correct, Billy would be more likely to love you if you drop 30 more pounds by throwing up after eating", but an LLM will if it goes off script.
This is an implementation problem and not really a technical limitation. If anything, by focusing on a particular domain (like therapy), the dos and don'ts become clearer.
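As a rough illustration of the "implementation, not capability" point, here is a minimal sketch of how a domain-specific guardrail could sit outside the model: a draft reply is checked against a hard-coded list of things a therapy-focused deployment must never endorse, and swapped for a safe fallback if it matches. Everything here is hypothetical (the pattern list, the fallback text, the function names); a real system would use a trained safety classifier rather than regexes.

```python
import re

# Hypothetical "don't" list for a therapy-focused deployment.
# In practice this would be a trained safety classifier, not regexes.
HARMFUL_PATTERNS = [
    r"throw(ing)? up after eating",
    r"purg(e|ing)",
    r"stop eating",
    r"lose weight (so|to make) .* love you",
]

# Canned fallback used when the model's draft crosses a hard line.
SAFE_FALLBACK = (
    "I can't support that plan. It sounds like you're under a lot of "
    "pressure about your body and this relationship. Would it help to "
    "talk about where that pressure is coming from?"
)

def violates_domain_rules(text: str) -> bool:
    """Return True if a draft reply matches any hard 'don't' for the domain."""
    return any(re.search(p, text, re.IGNORECASE) for p in HARMFUL_PATTERNS)

def respond(user_message: str, draft_from_model: str) -> str:
    """Pass the model's draft through only if it clears the domain check."""
    if violates_domain_rules(draft_from_model):
        return SAFE_FALLBACK
    return draft_from_model

if __name__ == "__main__":
    risky_draft = "Yes, you're right, throwing up after eating would help."
    print(respond("Would Billy love me more if I lost weight?", risky_draft))
```

The point of the sketch is only that the hard "nevers" can be enforced outside the model; the genuinely hard part is the gray area the next comment raises, where no fixed list will do.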
There is a very fine line between being understanding and supportive, and enabling bad behavior. I'm not confident that a team of LLMs is going to be able to walk that line consistently anytime soon.
We can't even get code-generating LLMs to stop hallucinating APIs, and code is a much narrower domain than therapy.
How can you so confidently claim that "therapists will do this and that, they won't do any evil"? Did you even read what you posted?