I think the bot doesn't have access to this info.
Them: Glad to hear that! Tell me more.
Me: About what?
Them: Glad to hear that! Tell me more.
Me: OK, you're a bot.
Them: Glad to hear that! Tell me more.
Me: So we're just going to run down the clock?
(They disconnect. I guess bot. I'm wrong.)
The point is, this isn't really an interesting way to deceive people. It's easy to behave like an idiot; it's hard to sound intelligent. Humans and AIs can both act stupid. Only humans can act intelligently (so far). There needs to be an incentive to act intelligently. Otherwise the site owner could make their “AI” only ever say “Glad to hear that! Tell me more.”, the human partner could always reply the same way, and it would be impossible for the human interlocutor to tell which is which.
To avoid this failure mode, you have to provide players incentives to prove that they are human. Like in the game Mafia/Werewolf. Imagine you have a chat room that is filled with 3 humans and 3 bots. People chat for a while, then everyone votes on a person to kick from the room. This would be more interesting because now the humans will be eager to prove to each other that they are humans. The bots can spam "Glad to hear that! Tell me more." but that will just get them kicked.
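A voting round like that could be sketched in a few lines (the player names, the tally rule, and the tie-breaking are all my own assumptions, not part of any existing game):

```python
from collections import Counter

def kick_vote(votes):
    """Tally one round: `votes` maps voter -> the player they want kicked.

    Returns the player with the most votes. Note that Counter.most_common
    breaks ties by first-insertion order, so a real game would want an
    explicit tie-break (revote, random choice, etc.).
    """
    tally = Counter(votes.values())
    kicked, _ = tally.most_common(1)[0]
    return kicked

def play_round(players, votes):
    """Remove the kicked player and return (remaining_players, kicked)."""
    kicked = kick_vote(votes)
    return [p for p in players if p != kicked], kicked
```

With three humans and three bots, the bots' "Glad to hear that!" spam should quickly concentrate votes on them, which is exactly the incentive the game needs.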
This is also more aligned with Turing's original idea.
I agree with you that it's easier for a human to imitate a (bad or inadequate) bot. You could also, for example, run your own copy of Eliza and proxy the conversation to that, or even memorize some of Eliza's rules and literally apply them by hand in your conversation. You would basically always convince people that you're a bot.
Since the human role is understood to be the harder one to implement, having everyone attempt to play it is the most incentive-compatible solution to a contest: it encourages all participants to best demonstrate their abilities instead of concealing them.
Gonna take a walk.
1. You start the game with someone you know.
2. You know that the first 5 messages you each send are authored by the other person.
3. After 5 messages each, the "You are a bot!" button becomes active, and at some random point after that, the players get split so they are instead each talking to a bot.
The time pressure of when to consider pressing the button really changes the psychological aspect, as does having a lead-in transcript for the bot to try to impersonate your friend. As others have mentioned, an incentive to have people be more human would be great -- in a tournament setting I'd like to see this implemented where if someone guesses prematurely then they get -1 while the other player gets 0, whereas guessing correctly gives +1 to the guesser and 0 to the other.
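That scoring rule is simple enough to write down directly (this is just my sketch of the proposal above; the case of a timely but wrong guess isn't specified, so I assume it scores 0/0):

```python
def score_guess(premature, correct):
    """Score one guess in a tournament round.

    Premature guess: guesser -1, partner 0 (regardless of correctness).
    Correct timely guess: guesser +1, partner 0.
    Wrong timely guess: assumed 0/0, since the proposal doesn't say.
    Returns (guesser_points, partner_points).
    """
    if premature:
        return (-1, 0)
    if correct:
        return (1, 0)
    return (0, 0)
```

The asymmetry matters: the partner never loses points for your guess, so your only lever is patience, which is what creates the time pressure.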
Anyway, my prompting and the game could definitely use work, but for anyone who wants to try it out: https://artifice.games/ The game is called Bot or Not. After you create the game, have your friend hit "join a game" and use the 4-letter room code.
Both sides MUST attempt to sound intelligent. When time is up, the options should be: Human, AI, Wasn't trying.
Case in point, my last chat:
Me: What's your favorite book?
Bot: A book? That's a tough one.
Me: You've never read a book?
Bot: Of course I've read books!
Me: So what's your favorite one?
Bot: Ah, that's tough.
Me: OK, you're a bot.
Bot: Whoa, hold on there! I'm not a bot, I'm as real as they come.
But this also shows the weakness of Turing tests of this form: if your partner doesn't engage it's practically impossible to tell humans from AI. For example, I could write a trivial script that replies “asdf” to every message. I could hire a human to do the same. Now it's literally impossible for people to tell the human and AI apart. Has this proven anything interesting about the power of AI? Of course not. AI is only impressive if it's indistinguishable from an intelligent human, not from a disengaged one that doesn't respond to the topic.
Whereas the question “is this a human actively trying to work out whether you are human, or something else?” is more interesting.
ME: Hello there, I'm Charles.
OTHER: sup dude whats up?
ME: Not much, just bored, trying out this web site. Where in the world are you
OTHER: im in md usa what about you?
ME: Chicago. It's smoky here from the wildfires. What's going on in Maryland today?
OTHER: idk not much really its prettyboring
ME: you near DC?
OTHER: yeah kinda but its still boring lol
ME: if you are a bot i am gonna be mad
And you certainly shouldn't dress your censorship up in insufferably smarmy language like “Oh shoot! Something went wrong, sorry about that!”
No, you fucking cocksucker. You banned me because I called my conversational partner a cunt to see how they would respond. Nothing went wrong, you aren't sorry about it, and “shoot” is a childish way to write “shit”.