There is a similar trade-off with LLMs. Sometimes their human conversant wants assistance and so the LLM should be more deferential. At other times the human wants a bias towards correctness rather than their own opinions.
It would be nice to have a contempt knob that you can adjust, instead of blindly trying to emulate one through prompting.
I think there's a simpler explanation. Every leaked system prompt from every model pretty much includes instructions to "be helpful," and the models are trained to be assistants, not just general knowledge repositories or research tools.
My hunch is that's the core of the problem -- the system prompt.
I think this is interesting as an idea. I do find that when I give really detailed context about my team, other teams, ours and their okrs, goals, things I know people like or are passionate about, it gives better answers and is more confident. but its also often wrong, or overindexes on these things I have written. In practise, its very difficult to get enough of this on paper without a: holding a frankly worrying level of sensitive information (is it a good idea to write down what I really think of various people's weaknesses and strengths?) and b: spending hours each day merely establishing ongoing context of what I heard at lunch or who's off sick today or whatever, plus I know that research shows longer context can degrade performance, so in theory you want to somehow cut it down to only that which truly matters for the task at hand and and and... goodness gracious its all very time consuming and im not sure its worth the squeeze
It isn't possible to tune an AI to have some sort of 'correct answer' orientation because that would be full AGI.
Instead of saying "are you sure?" or "shouldn't we do X instead?" you could say "give me the benefits and drawbacks of this compared to X".
Also, when you yourself are sure, give clear stear. "This overcomplicates A, let's do B instead."
I use LLMs in my own writing because they have benefits for conciseness but it tends to be a fairly laborious process of putting my text in the LLM for shortening and grammar, getting something more generic out, putting my soul back in, putting it back in the LLM for shortening, etc. I tend to do this at the paragraph level rather than the page level.
I wish hackernews banned slop, or atleast required disclosure.
me stopping reading
I've started a blog just to scream into the void, but every word is my own, and I encourage others to do the same. AI helped set it up, the UI is pretty slop, but that's not the point. I'm hoping that by writing more I can strengthen my connection to my voice as I continue to use these tools for other uses. I'm sure writing in a journal or writing letters to friends would have similar effects too, right?
We all understand "muscles need to be regularly used to be maintained", I think we need to take that same approach to our brain, especially in the day of AI
These experiments are a bit expensive to run because you are forced to read all the responses to judge repudiation. Sometimes it is subtle.
Also, behavior changes with the exact wording of the question.
It was a long time ago, Claude 3 or maybe ChatGPT's v3. It felt so dehumanizing that I never tried again.
It didn't seem like trained behavior though, it felt much like hardcoded behavior.
I wish there was a tag or something we could put on headlines to avoid giving views to slop.