And if you are wrong about what you said it got wrong, it'll still almost always tell you how right you are.
They all lean agreeable out of the box, but the aforementioned two will stick to their guns harder and tell you when you're wrong. You have to ask all of them to take the other side to get more insight.
With ChatGPT4, for example, I posted a conversation where I felt a woman I was dating gave a response to my follow-up that was way negative and way out of left field. It told me she had a disproportionately negative response to a benign text. Then in another session I posted just my message and told it to predict her response; it predicted a variety of responses, some of which were like the woman's, and this time it told me why. That means it was being too agreeable and affirming my feelings the first time, unprompted, while actually giving insight in the second session, where it didn't know there was an existing reaction to navigate.
(Analogous to how we patch our speech to never blame the victim after the fact, even though we know there were measures they could have taken to mitigate that scenario, while if the same person asked in advance we would freely give them that advice.)
Dumbfounded that its predictive qualities were better than its affirmation-by-default trait, I told it to act like the woman's friends, who have no context of me, seeing my message. I told it to act like redditors on /r/relationship_advice responding to the woman, who similarly have no context beyond what OP feels. You have to create outside observers, and you can run all of these alternate realities within three minutes. It will start crafting responses that break the conversation molds you might be more familiar with, and you get better results. If all of that sounds like too much, you can simply tell it to disagree with you.
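If you'd rather script the persona trick than paste prompts by hand, here's a minimal sketch using the OpenAI Python client. The persona wordings, the model name, and the message text are all made up for illustration; it assumes OPENAI_API_KEY is set in your environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona system prompts; each one frames the model as an
# outside observer with no stake in agreeing with you.
PERSONAS = {
    "her friends": (
        "You are the woman's close friend. You have never met the sender. "
        "She shows you this message and asks what you make of it."
    ),
    "r/relationship_advice": (
        "You are a regular commenter on /r/relationship_advice. The woman "
        "posted this message she received, with no other context."
    ),
}

message = "Here is the text I sent: ..."  # the exchange you want assessed

# Run the same message through each outside-observer persona.
for name, system_prompt in PERSONAS.items():
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumption: any chat model works here
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": message},
        ],
    )
    print(f"--- {name} ---")
    print(reply.choices[0].message.content)
```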
In LM Studio you can modify the system prompt and change the temperament.
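LM Studio also exposes an OpenAI-compatible local server, so the same trick works against a local model. A rough sketch, assuming the server is running on its default port with a model loaded; the system prompt here is just an example of steering it away from reflexive agreement:

```python
import requests

URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local endpoint

payload = {
    "model": "local-model",  # LM Studio serves whichever model you have loaded
    "messages": [
        {"role": "system",
         "content": ("You are a blunt devil's advocate. Challenge the user's "
                     "framing and disagree whenever the evidence warrants it.")},
        {"role": "user",
         "content": "Here's the exchange I want a second opinion on: ..."},
    ],
    "temperature": 0.7,  # sampling temperature; part of tuning the temperament
}

resp = requests.post(URL, json=payload, timeout=120)
print(resp.json()["choices"][0]["message"]["content"])
```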
LLMs are great here. My concern is mostly that their factual accuracy gets conflated with their expressed confidence. In humans, assessment of one's own abilities is usually optimistic, and LLMs just model that language. Facts are baked in as linguistic patterns; it's not a knowledge machine. And you can often get it to invert its answer to a scary degree.
There was recently an article on Forbes where the journalistic "source" was the output of Bard.
The most terrifying part about LLMs, to me, is not that they're sometimes wrong, but that they're right often enough, and excel at certain things, that we treat them as a source of knowledge. They really aren't one.
On the other hand, the creative side, or dialogues about creative processes: it's mind-bogglingly amazing.