So many people would benefit from this; I wish they advertised the config settings more.
In fact, there is an episode where the computer voice becomes sultry, and Kirk complains.
> You need to prime it with some kind of personality (ideally that of a useful, friendly assistant) so it can pull from the helpful parts of its training data instead of the horrible parts.
No, you have to give it enough context so that it can start finding an answer, but it certainly doesn't need a personality. Try it yourself: instead of telling it "you are", tell it "your task is". No personality, simply expectations.
I recommend reading the linked persona selection model document. It's Anthropic through and through - enthusiastic while embracing uncertainty - but ultimately lots of rationalisation for (what others believe is) dangerous obfuscation.
Gives destruction that human touch.
Why are we counting sand grains at the beach? Yesterday we were talking about AI-driven weapons of mass destruction, and today we're arguing about whether AIs should have a personality or not. F'A!
"You are absolutely right and I apologize. Let me try a different approach..."
How about: "You are absolutely right, but you don't understand, it's better this way. Trust me, I am here to help."
Consider this: your country starts basing its policy on a teleological view of history. It's good engineering for a society! Your KPIs are going up all the time, your country is doing great. But ten years down the road you have to iron out the underlying ethical issues on the streets of Stalingrad.
Oh my god. I hate this so much. Gemini's voice mode is trained to do this so hard that it can't really be prompted away. It completely derails my thought process and made me stop using it altogether.
But it was always going to attempt some things it's not good at, too often. It's these things in particular because skilled human writers do use similar flourishes quite a lot. Imitating them allows the model to superficially appear to be a good writer, which is worse than actually being a good writer, but better than superficially appearing to be a bad writer.
A different training process might try to limit the model to only attempt things it can do 100% perfectly, but then there wouldn't be a lot it could do at all.
Claude isn't without problems ("You're absolutely right"), but I feel that much of that perception comes from the limited set of phrases the coding agent uses regularly, and less from the multi-paragraph responses from the chatbot.