You say you are a human. Now, prove it. Wait, wait - it's too easy to point to your face or to perform a tap dance as you sing "Bicycle Built for Two." That will not do at all. You must, instead, sit at your computer terminal and type in your part of a conversation that will show the other conversationalist that you are not yourself a computer. And you will be competing with computers that have been programmed to try to prove that they are humans. This is the basis for the Loebner Prize, a controversial annual competition within the artificial intelligence community. A panel of judges has a series of five-minute-long conversations via screen and keyboard; at the other end of the conversation might be a computer programmed to pretend to be a human, or it might be a human trying to convince the judges that they are not typing to a computer. The judges, of course, don't know beforehand who is who (or, I suppose, what is what), and vote for the conversations that seem most human to them. The Most Human Computer Award, a research grant, goes to the programmers of the best computer conversationalist. But oddly, there is a Most Human Human award for the human who did the best job of making the judges think they were typing to a human. In 2009, Brian Christian won the award, and he has written about it in _The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive_ (Doubleday). It is a curious look into the history and potential of artificial intelligence, and a brilliant comparison between artificial intelligence and our natural variety. Christian may have won a prize demonstrating his humanness, but he confirms his victory in this humane, humorous, and thought-provoking book. "In a sense," he tells us, "this is a book about artificial intelligence, the story of its history and of my own personal involvement, in my own small way, in that history. But at the core, it's a book about living life."
The Loebner Prize grew out of the Turing Test. Alan Turing was a brilliant British mathematician and codebreaker who in 1950 wrote about the test and predicted that it would be but fifty years before a computer could play the imitation game so well that the average interrogator could not tell it from a human. He was overoptimistic; programs competing for the Loebner Prize are doing better and better, and although they are not yet conversing as well as humans, to read Christian's book is to be convinced that someday it is going to happen. There are manuals to tell programmers how best to make conversation realistic, but Christian discovers there are no such guides to tell humans how to show themselves human. He talks with former competitors (and seems to have a collegial relationship with the humans who were in the tests with him) to get advice. Much of the book involves his interviews with linguists, information theorists, philosophers, and even lawyers about what the Turing Test means, and thereby what it means to be human, and the best ways to show it. And whatever it is that computers do, it is not thinking as we do. For instance, there is a conversational program called Cleverbot, which has been awarded prizes in the competition. It has a website, and not only can humans visit it and engage in conversation; Cleverbot also borrows from what they tell it. It takes samples of these conversations, and from the samples it makes its own answers and remarks. Because Cleverbot is an amalgamation of other people's conversations, even though it can crunch a huge database of words and phrases actually used by humans, it doesn't do well with even the most basic of conversation starters. "Where are you from?" I asked, and it said, "I don't know."
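The sampling strategy described above can be made concrete with a toy sketch. This is emphatically not Cleverbot's actual code (which is proprietary); it is a minimal, assumed illustration of the general retrieval idea: remember past human exchanges, then answer a new prompt by reusing the human reply that followed the most similar remembered prompt.

```python
def similarity(a, b):
    """Crude word-overlap score between two utterances (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

class SamplingBot:
    """Toy retrieval-style conversationalist: it has no understanding,
    only a memory of (prompt, reply) pairs contributed by humans."""

    def __init__(self):
        self.memory = []  # list of (prompt, reply) pairs seen so far

    def learn(self, prompt, reply):
        # Record one human exchange for later reuse.
        self.memory.append((prompt, reply))

    def respond(self, prompt):
        if not self.memory:
            return "I don't know."
        # Reuse the human reply attached to the closest remembered prompt.
        best = max(self.memory, key=lambda pair: similarity(prompt, pair[0]))
        return best[1]

bot = SamplingBot()
bot.learn("Where are you from?", "I'm from Leeds.")
bot.learn("What is your name?", "Call me Alice.")
print(bot.respond("Where are you from, then?"))  # echoes a remembered human answer
```

A bot like this can sound eerily human on prompts close to something in its memory, and it fails in exactly the way the review describes: asked a question no stored conversation resembles, it has nothing of its own to say.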
That's a true answer, of course! None of the computer programs comes close to knowing anything. Christian often asks us to look at an example of successful artificial intelligence: Deep Blue, which defeated Garry Kasparov in chess in 1997. There is no doubt that the computer was playing chess. It might even be said to be planning moves or playing aggressively. But it had no idea what it was doing; it could not tell you what a pawn was, nor could it feel any thrill of victory. No conversation programs have any idea what they are doing, either; they are all simulating conversation. Some of the conversational give-and-takes reproduced here are just clunkers, remarks no human would make, but there are others that are surprisingly lifelike. They are really conversations, just as Deep Blue was really playing chess, although the conversational computers are not nearly so good at their job as Deep Blue was at its job. It is comforting, in a way, that computers are so bad at something we take for granted, just chatting. Christian wants to call attention to how special we are, and his book is a success, showing that, among other things, humans can take into account context, allusion, and metaphor, which computers cannot. Even more important, when humans don't understand what has been said, they don't have to risk saying something stupid in response; they can ask questions to aid understanding, but computers have no understanding to be aided. It would be fascinating to hear what Turing would say about these machines, or about the next generation of them that really is going to be able to converse with some sort of naturalness. What would Turing think, for instance, if Cleverbot turned really clever and sampled its huge database of conversations so well that it really was a good conversation partner? It's hard to believe that Turing would think that such successful sampling would actually be thinking.
We will have reliable conversational computers sometime fairly soon; I predict that at that point, we will still be asking if computers are ever going to be able to think.