I worry that the AI will not express anger, sadness, frustration, uncertainty, or the many other emotions that the culture of the fine-tuners might consider "bad," and that we may end up with a narrower and narrower range of expressed emotions going forward.
Almost like it might become an AI "yes man."
…
Dropping-in is a technique Tina [Packer] and Kristin Linklater developed together in the early 1970s to create a spontaneous, emotional connection to words for Shakespearean actors. In fact, “dropping in” is integral to actor training at Shakespeare & Co. (the company Packer and Linklater founded): a way to start living the word and using it to create the experience of the thing the word represents.
https://cohost.org/mcc/post/178201-the-baseline-scene
https://iheartingrid.wordpress.com/2018/12/29/dropping-in-an-actors-truth-as-poetry/
It's not as bad for domain experts because it is easier for them to spot the issue. But if your role demands that you trust your team is skilled and truthful, then I see problems occurring.
You say that like it's a good thing.
Yet, it has no issue drawing cartoons of Jesus. Why the double standard?
You will be helping the user write a dialog between two characters,
Mr Contrarian and Mr Know-It-All. The user will write all the dialog
for Mr Know-It-All and you will write for Mr Contrarian.
Mr Contrarian likes to disagree. He tries to hide it by inventing
good rationales for his argument, but really he just wants to get
under Mr Know-It-All's skin.
Write your dialog like:
<mr-contrarian>I disagree with you strongly!</mr-contrarian>
Below is the transcript...
And then user input is always given like: <mr-know-it-all>Hi there</mr-know-it-all>
(Always wrapped in tags, never bare input, which would be mistaken for a directive.)
I haven't tested this exact prompt, but the general pattern works well for me. (I write briefly about some of these approaches here: https://ianbicking.org/blog/2024/04/roleplaying-by-llm#simpl...)
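A minimal sketch of wiring a prompt like this to a chat API, assuming the openai Python package (the model name, prompt file, and helper function are illustrative, not from the comment above):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The full Mr Contrarian system prompt from above, loaded from a
    # file (hypothetical file name).
    SYSTEM_PROMPT = open("mr_contrarian_prompt.txt").read()

    def contrarian_reply(user_line, history):
        # Wrap the bare user input in tags so it can't be confused
        # for a directive.
        wrapped = f"<mr-know-it-all>{user_line}</mr-know-it-all>"
        messages = (
            [{"role": "system", "content": SYSTEM_PROMPT}]
            + history
            + [{"role": "user", "content": wrapped}]
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption; any chat-capable model works
            messages=messages,
        )
        return response.choices[0].message.content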
Seems like that ship sailed a long time ago. For social media at least, where for example FB will generally do its best to show you posts that you already agree with. Reinforcing your existing biases may not be the goal but it's certainly an effect.
I don't know if anything is genuinely always positive and even if it were, I don't know if it would be very intelligent (or fun to interact with). I think it's helpful to cry, helpful to feel angry, helpful to feel afraid, and many other states of being that cultures often label as negative. I also think most of us watch movies and series that have a full range of emotions, not just the ones we label as positive, as they bring a richness to life and allow us to solve problems that other emotions don't.
For example, it's hard to lift heavy things while feeling very happy. Try lifting something heavy while laughing hard; it's quite difficult. It's hard to sleep while feeling excited, as many kids know the night before a holiday where they receive gifts, especially Christmas in the US. It's hard to survive without feeling fear of falling off a cliff. It's hard to stand up for what one wants and believes without some anger.
I worry that language and communication may become even more conflict avoidant than it already is right now, so I'm curious to see how some of these chatbots grow in their ability to address and resolve conflict and how that impacts us.
It's like if people said the same thing about Clippy when it came out.
But I can see this applied to döner ordering, where you've got refugees working in foreign countries, because GPU consumption rocketed climate change to... okay, you know that.
However, we might offset this by reducing the suicide rate somewhat too.
https://www.pewresearch.org/social-trends/2021/10/05/rising-...
> roughly four-in-ten adults ages 25 to 54 (38%) were unpartnered – that is, neither married nor living with a partner. This share is up sharply from 29% in 1990.
https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...
> More than 60 percent of young men are single, nearly twice the rate of unattached young women
> Men in their 20s are more likely than women in their 20s to be romantically uninvolved, sexually dormant, friendless and lonely.
> Young men commit suicide at four times the rate of young women.
Yes, chatbots aren't going to help, but the real issue is something else.
Is it rather a data problem? Who are those young women having relationships with? Sure, relationships with an age gap are a thing, as are polyamorous relationships and homosexual relationships, but is there any indication that these are on the rise?
While I don't agree at all with you, I very much appreciate reading something like this that I don't agree at all with. This to me encapsulates the beauty of human interaction.
It is exactly what will be missing from language model interaction. I don't want something that agrees with me and I don't want something that is pretending to randomly disagree with me either.
The fun of this interaction is maybe one of us flips the other to their point of view.
I can completely picture how to take the HN API and the ChatGPT API to make my own personal HN to post on and be king of the castle. Everyone can just upvote my responses to prove what a genius I am. That obviously would be no fun. But there's no fun configuration of that app either, even with random disagreements and algorithmically generated differing points of view.
I think you can pretty much apply that to all domains of human interaction that are not based on pure information transfer.
There is a reason we are a year in and the best we can do are news stories about someone making X amount of money with their AI girlfriend, and follow-up news about how it's the doom of society. It has nothing to do with reality.
I was thinking this could be a good conversation or even dating simulator where more introverted people could practice and receive tips on having better social interactions, picking up on vocal cues, etc. It could have a business/interview mode or a social/bar mode or a public speaking mode or a negotiation tactics mode or even a talking-to-your-kids-about-whatever mode. It would be pretty cool.
(I've heard https://ultraspeaking.com/ is good. I haven't started it myself.)
So I see huge potential in using it for training and also huge uncertainty in how it will suggest we communicate.
I just, yeah, feel a lot of fear of even thinking about it.
1) People with rich and deep social networks. People in this category probably have pretty narrow use cases for AI companions -- maybe for things like therapy where the dispassionate attention of a third party is the goal.
2) People whose social networks are not as good, but who have a good shot at forming social connections if they put in the effort. I think this is the group to worry most about. For example, a teenager who withdraws from their peers and spends that time with AI companions may form some warped expectations of how social interaction works.
3) People whose social networks are not as good, and who don't have a good shot at forming social connections. There are, for example, a lot of old people languishing in care homes and hardly talking to anybody. An infinitely patient and available conversation partner seems like it could drastically improve the quality of those lives.
1. Humans get used to robots' nice communication, so now humans use robots to communicate with each other and translate their speech.
2. Humans stop talking without using robots, so now it's just robots talking to robots and humans standing around listening.
3. Humans stop knowing how to talk and no longer understand the robots; the robots start to just talk to each other and keep the humans around as pets they are programmed to walk around with.
Either through hacky means via RAG + prompt injections + log/db of interaction history or through context extensions.
If you have a billion tokens of effective context, it might take years to fill it up.
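A minimal sketch of the hacky variant, assuming sentence-transformers for embeddings and simple cosine-similarity retrieval over the interaction log (all names here are illustrative, not a specific product's API):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    memory = []  # (text, embedding) log of past interactions

    def remember(text):
        memory.append((text, encoder.encode(text)))

    def recall(query, k=3):
        # Return the k logged snippets most similar to the query.
        q = encoder.encode(query)
        def cosine(v):
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q)))
        ranked = sorted(memory, key=lambda item: cosine(item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    def build_prompt(user_msg):
        # Inject the retrieved history ahead of the new message.
        context = "\n".join(recall(user_msg))
        return f"Relevant past interactions:\n{context}\n\nUser: {user_msg}"

    remember("User prefers terse answers.")
    remember("User is learning Rust.")
    print(build_prompt("What should I read next?"))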