It was only a matter of time. We say this about a lot of developments here at GFR, but this is one I’ve been expecting to see for a long time: a computer has finally passed the Turing test.
The Turing test, named after computer scientist and WWII codebreaker Alan Turing, was designed to gauge a computer’s ability to “think.” It involves fooling a human into believing he’s messaging another human when he’s actually messaging a machine. Anyone who has corresponded with Cleverbot or another chatbot knows the effect, but at most we were only pretending to converse with a thinking entity. The test has proven so elusive because, even though some machines can fake human correspondence fairly well (LOLs, typos, and swearing help), no machine had yet met the benchmark drawn from Turing’s prediction: fooling at least 30% of human judges over a series of five-minute text conversations. Until now.
A Russian-developed computer program called Eugene Goostman fooled one third of the judges at the Royal Society in London into thinking it was a human — and it did so on the 60th anniversary of Turing’s death. It’s the first program to pass the test without the conversation topics being restricted in advance. The program adopts the persona of a 13-year-old Ukrainian boy living in Odessa, which is pretty brilliant if you think about it. An average 13-year-old boy doesn’t have a full range of knowledge or social graces, so any blips in the program can be chalked up to teenage idiosyncrasies, and the cultural differences help explain anything else unusual that comes up in conversation. One of the program’s creators said that they “spent a lot of time developing a character with a believable personality,” which was clearly a successful tactic. Indeed, if you visit the bot’s site, you’ll find an entire biography of “the poor guy who used to be an ordinary boy until he was turned into a chatter-bot by his school computers teacher — in reality happened to be a malicious cyber-fairy.”
An earlier iteration of the program can be found online, so I asked it a few questions to test it out. First: “What is your favorite color?” Eugene Goostman responded, “If I say I like white and hate black and yellow — will you sue me for that? :-)” Well, most 13-year-olds don’t bring up lawsuits, but the smiley face helps. I asked it about its favorite book, and it responded, “One of my favorite books is Tales about Lenin, by Zoya Voskresenskaya. It’s a truthful and sincere story about all the shocking and disgusting life of the Soviet Union originator Lenin, especially about his childhood. Sensational book! Have you read Lenin?” Well, that’s a dead giveaway. Real teenagers don’t read!
While this is a giant leap for artificial intelligence, it will likely only fan the flames of concern among those who believe AI development could eventually cause major problems for the human race. Kevin Warwick, a University of Reading cybernetics professor (and a cyborg himself), acknowledges that “Having a computer that can trick a human into thinking that someone, or even something, is a person we trust is a wake-up call to cybercrime.” But he also argues that the Turing test can help us prevent and deal with such threats by revealing how complex programs work and how they fool humans. Or maybe that’s just all part of AI’s plan.