This is a kind of horrifying/interesting/weird thought though. I work at a place that does a video streaming interface between customers and agents. And we have a lot of...incidents. Customers will flash themselves in front of agents sometimes and it ruins many people's days. I'm sure many are going to show their junk to the AI bots. OpenAI will probably shut down that sort of interaction, but other companies are likely going to cater to it.
Maybe on the plus side we could use this sort of technology to detect rude and illicit behavior before it happens and protect the agent.
Weird times to live in, that's for sure.
That said even if this did overlap 80% with “real”, the question remains: what if we don’t want that?
Here's a video.
Finding a partner with whom you resonate takes a lot of time, which means an insanely high opportunity cost.
The question rather is: even if you consider the real one to be clearly better, is it worth the additional cost (including opportunity cost)? Or phrased in HN-friendly language: when developing a product, why use an expensive Intel or AMD processor when a simple microcontroller does the job much more cheaply?
I have watched a few more and I think it's faked though.
[0] https://www.tiktok.com/@stickbugss1/video/734956656884359504...
Humans desiring physical connection is just about the single most natural part of the human experience, from warm snuggling to how babies are made.
That is gross to you?
But she will be real at some point in the next 10-20 years. The main thing to solve for that to become a reality is for robots to safely touch humans, and they are working really, really hard on that because it is needed for so many automation tasks; automating sex is just a small part of it.
And after that you have a robot that listens to you, does your chores, and has sex with you, and at that point she is "real". At first they will be expensive, so you'll have robot brothels (I don't think there are laws against robot prostitution in many places), but costs should come down.
> “I care that my best friend likes me and could choose not to.”
Ezra Klein shared some thoughts on this topic on his AI podcast with Nilay Patel that resonated with me.
Maybe there are some weirdos out there who feel unconditional love isn't love, but I have never heard anyone say that.
I feel like people aren't imagining with enough cyberpunk dystopian enthusiasm. Can't an AI be made that doesn't inherently like people? Wouldn't it be possible to make an AI that likes some people and not others? Maybe even make AIs that are inclined to like certain traits, but which don't do so automatically, so they must still be convinced?
At some point we'd have an AI that could choose not to like people, but would value different traits than humans normally do. For example, an AI that doesn't value appearance at all and instead values unique obsessions the way most humans value attractiveness.
It also wouldn't be so hard for a person to convince themselves that human "choice" isn't as free-spirited as imagined, and instead depends on specific factors no different from these uniquely trained AIs, except that the traits the AI values are ones people generally don't find themselves valued for by others.
It is interesting that he's basically trying to theme himself as Mr. Rogers though.
I sure hope you're single because that is a terrible way to view relationships.
I hope you understand the difference between a relationship with a human and one with a robot? Or do you think we shouldn't take advantage of robots being programmable to do what we want?