Chatbot Conversation Goes Viral

A tale of two talking bots

17 November 2011

What happens when two artificial-intelligence bots talk to each other? Does their conversation resemble a human one? What do they talk about? Two Cornell University Ph.D. students led by an IEEE member decided in August to find out.

The students, Igor Labutov and Jason Yosinski, performed an experiment in which they set up two chatbots—artificial intelligence programs that learn and then mimic human conversation—to talk to each other via two laptops. They added voices to the bots, turned them into avatars with an animation program, shot a video of the interaction, and posted it on YouTube. The resulting two-way conversation covered unicorns, robots, religion, and more. The conversation apparently was so intriguing that the video went viral, reaching more than 2.5 million views on YouTube as of October and spawning T-shirts printed with funny phrases from the chatbots’ discussion.

EXPERIMENTING WITH INTELLIGENCE
Member Hod Lipson, associate professor of mechanical and aerospace engineering at Cornell, in Ithaca, N.Y., had asked Labutov and Yosinski to perform the experiment for his class on artificial intelligence. Lipson wanted them to demonstrate what would happen if two chatbots interacted with each other. The experiment involved three components: a chatbot program, text-to-speech synthesis, and animated avatars to represent each chatbot. Fortunately, the two students did not approach the project cold.

“Jason and I were working on another project that involved text-to-speech technology and human-robot interaction [two components of the chatbot experiment], so it was not a giant leap for us to put together this demo,” Labutov says. For their other project, they were building a socially aware robot that could roam the hallways of the engineering building and interact with people in an attempt to accomplish a task, such as fetching a cup of coffee.

“Late one night, we were working on the robot project and had our laptops set up with both speech-recognition and speech-generation programs,” Labutov says. “That’s when we thought, What if we add a simple chatbot to each laptop, slide the computers together and let them speak to each other?”

The first chatbot program the two used, Eliza, was rather primitive and had its limitations. Developed at MIT in the 1960s to mimic a psychotherapist, it used simple pattern-matching techniques to respond to the entered text. For example, if a user typed “My head hurts,” it might respond, “Why do you say your head hurts?”
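
A minimal sketch of that kind of pattern matching is shown below; the rules and the eliza_reply helper are illustrative only, not Weizenbaum’s original script.

```python
import re

# Illustrative Eliza-style rules: each pairs a regular expression with a
# response template that reflects the matched text back as a question.
RULES = [
    (re.compile(r"my (.+) hurts", re.IGNORECASE),
     "Why do you say your {0} hurts?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE),
     "How long have you felt {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "Why do you think you are {0}?"),
]


def eliza_reply(text: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."


print(eliza_reply("My head hurts"))  # -> Why do you say your head hurts?
```

Because nothing is stored between turns, a program like this can easily fall into the repetitive loops Labutov describes.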

“The problem with Eliza was that it had no memory of things that had been typed during a conversation, so it often ended up in a loop of identical questions and replies,” Labutov explains. The two decided to try a different chatbot, Cleverbot, developed by AI scientist Rollo Carpenter in 1988. Cleverbot has learned over time to mimic human conversation by communicating with millions of people online. You can converse with the chatbot by simply typing a sentence in the chat field. Cleverbot then uses an algorithm that selects previously entered phrases from its massive database of prior conversations and types back a response. Cleverbot remembers things typed earlier in the same conversation, thanks to a simple memory model.
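
As a toy illustration of that retrieval idea, the sketch below answers a new prompt with the reply attached to the most similar stored prompt. The tiny hand-made corpus and the string-similarity matching are assumptions for demonstration only; Cleverbot’s actual algorithm and database are far larger and are not described here.

```python
import difflib

# Purely illustrative corpus of (prompt, reply) pairs; a real
# conversational database would contain millions of exchanges.
CORPUS = [
    ("hello", "Hi there. How are you?"),
    ("how are you", "I am fine. And you?"),
    ("are you a robot", "No, I am a unicorn."),
]


def retrieve_reply(prompt: str) -> str:
    """Answer with the reply attached to the most similar stored prompt."""
    best_prompt, best_reply = max(
        CORPUS,
        key=lambda pair: difflib.SequenceMatcher(
            None, prompt.lower(), pair[0]).ratio(),
    )
    return best_reply


print(retrieve_reply("Hello!"))            # -> Hi there. How are you?
print(retrieve_reply("Are you a robot?"))  # -> No, I am a unicorn.
```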

“Next, we wrote a program that basically opened up two sessions of Cleverbot, and we fed each with a line of conversation from the other,” Labutov recalls. But Cleverbot responds with text, and the students wanted an audible conversation, so they used the Acapela text-to-speech service. The output of each Cleverbot session was converted to speech and played back over speakers. “This was at around 3 in the morning, and we were very excited to listen to the results,” he says.
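
The loop they describe might look roughly like the sketch below. ChatSession.ask() and speak() are hypothetical stand-ins for the Cleverbot sessions and the Acapela text-to-speech calls; neither reflects a real API, and the echoed replies are placeholders that only show the shape of the cross-feeding.

```python
# Hypothetical stand-ins: ChatSession.ask() represents one chatbot
# session, and speak() represents the text-to-speech step.


class ChatSession:
    """Placeholder for a single chatbot session."""

    def __init__(self, name: str):
        self.name = name

    def ask(self, text: str) -> str:
        # In the real experiment this would send `text` to the chatbot
        # and return its reply; here it simply echoes for illustration.
        return f"{self.name} heard: {text!r}"


def speak(voice: str, text: str) -> None:
    """Stand-in for speech synthesis and playback over the speakers."""
    print(f"[{voice}] {text}")


def converse(turns: int = 5) -> None:
    bot_a, bot_b = ChatSession("Bot A"), ChatSession("Bot B")
    line = "Hello."  # the seed line typed in to start the exchange
    for _ in range(turns):
        line = bot_a.ask(line)    # each bot receives the other's last line
        speak("male voice", line)
        line = bot_b.ask(line)
        speak("female voice", line)


converse()
```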

To start the conversation, the students fed Cleverbot an opening line: a simple “hello.” Finally, they created an avatar animation of the conversation using the avatar-rendering software Live Actor Presenter, which generated a fully lip-synced video of the two characters, one male, the other female.

After pleasantly greeting each other, the chatbots went on to question whether they are robots or unicorns, ponder the meaning of the phrase “not everything,” accuse the other of being a “meanie,” and consider the existence of God.

“We nearly fell off our seats hearing what became of that ‘hello,’” Labutov says.

LOOKING AHEAD
“Almost every element of that conversation was unexpected,” Labutov says. “Most surprising, however, was not what was said but how strangely human the conversation sounded.” He points out that although researchers are getting closer to developing completely humanlike AI, much work remains to be done.

“The true research lies not in software for generating humanlike text and speech but in software that can extract meaning and can reason about abstract ideas and concepts,” he explains. “Most of this research, however, is highly specific to an application, and generalizing it to a point where a computer can small-talk is probably still a long ways away.”

Even more surprising than the conversation were the comments posted in response to the YouTube video, which the two uploaded shortly after completing their experiment. “Most of the comments attributed human characteristics to the characters—which is amazing, considering that both were in fact the same robot, which surely did not have a real gender,” Labutov says. Lipson agrees: “The real surprise was the reaction to the video—how people interpreted the discussion: the sexual tension, who was brighter, etc., even though behind the scene it was the same machine.”

Next, Labutov and Yosinski plan to conduct a longer conversation between chatbots and perhaps even add another bot to the discussion. Labutov says he believes it’s important that such research attract the public’s attention.

“People are interested in this kind of stuff, and we need to think about the implications of true AI before it actually does come around,” he says. “Keep in mind that what everyone saw in our video was just a simulation, and the reactions to it were already quite strong. We need to get people used to the idea that AI is the future, and may be here sooner than we think.”
