But, with respect to your specific description: these LLM-based tools are just programs, and they can easily be configured to validate and flatter, or to challenge and be obstinate. Surely they could be configured to have a satisfying relationship arc, if some work were put into it. I'm sure we could program a begrudging mentor that only becomes friendly after you've impressed it, if we wanted.
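As a toy sketch of what I mean (all names here are hypothetical, not any real tool's API): the "relationship arc" is just a bit of state that gates which system prompt you hand the model.

```python
from dataclasses import dataclass

@dataclass
class MentorPersona:
    """Hypothetical persona state: the tone flips once the user has
    'impressed' the mentor enough times."""
    impress_threshold: int = 3
    score: int = 0

    def record_impressive_answer(self) -> None:
        # Called whenever the user does something noteworthy.
        self.score += 1

    def system_prompt(self) -> str:
        # The only "personality" is which prompt prefix we emit.
        if self.score >= self.impress_threshold:
            return "You are a mentor who has warmed to the student; be encouraging."
        return "You are a begrudging mentor; be terse and hard to please."

mentor = MentorPersona()
print(mentor.system_prompt())  # begrudging at first
for _ in range(3):
    mentor.record_impressive_answer()
print(mentor.system_prompt())  # friendly once impressed
```

The point being: the arc lives entirely in ordinary program state wrapped around the model, not in the model itself.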
I think you are right that something isn't there, but the missing thing is deeper than the surface-level behavior. They aren't AIs, they are just language models. To get closer in some greedy sense, we could give the language model more complex simulated human-like behaviors, but that will still be a simulation…