The Myth of Artificial Intelligence


Both approaches have come up against tremendous obstacles. A company named Cycorp began a project of gathering commonsense knowledge in 1995, aiming to help computers overcome the disadvantage of being unable to acquire all the information we get just from living in the world. The company has so far compiled a database of millions of descriptions and rules such as “(#$mother ANIM FEM) means that the #$FemaleAnimal FEM is the female biological parent of the #$Animal ANIM.” In other words, it’s simply making a huge combination dictionary and encyclopedia. This reflects the fact that our brains contain great quantities of knowledge, but it reflects nothing about how we attain that knowledge, how we process it, or how we store it. And we certainly don’t store most of our knowledge, like how to walk, for instance, in declarative sentences or in binary computer code.
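
To make concrete what such a database amounts to, here is a toy sketch in Python. It is emphatically not Cycorp’s actual CycL machinery; the predicate names and facts are invented for illustration. The point is only that every assertion must be typed in by hand and can later merely be looked up again.

    # A toy sketch (not Cyc's real system) of a hand-entered fact base:
    # assertions are stored and retrieved like dictionary entries.
    facts = set()

    def assert_fact(predicate, *args):
        """Record one hand-written assertion, e.g. ('mother', 'Bambi', "Bambi's mother")."""
        facts.add((predicate, *args))

    def ask(predicate, *args):
        """Look the assertion back up; nothing is learned here, only retrieved."""
        return (predicate, *args) in facts

    # Every fact must be entered by a person, one at a time.
    assert_fact("mother", "Bambi", "Bambi's mother")
    assert_fact("isa", "Bambi's mother", "FemaleAnimal")

    print(ask("mother", "Bambi", "Bambi's mother"))  # True
    print(ask("mother", "Bambi", "Thumper"))         # False: unknown unless someone typed it in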

As for neural nets, scientists have been developing computerized versions that can learn, but again they face huge limitations. The man-made nets are extremely crude simulations, typically involving thousands of artificial neurons, whereas each of us has more than 10 billion real ones. Moreover, each of our neurons can be in any one of many different states, and they can rearrange their own wiring. They behave in far more complicated ways than any binary logic machine and in ways we still have only the feeblest understanding of.
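
For a sense of how crude those simulations are, here is a minimal sketch, again in Python and again only an illustration rather than any particular research system, of a single artificial neuron: a weighted sum of its inputs squashed through a fixed function. The weights and inputs here are made up. A real neuron can occupy many different states and rewire its own connections; this one can do neither.

    import math

    def artificial_neuron(inputs, weights, bias):
        """One artificial 'neuron': weighted sum of inputs plus a bias,
        pushed through a sigmoid squashing function."""
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-total))

    # Example with three inputs and hand-picked weights, for illustration only.
    print(artificial_neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1))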

The ever-receding dream of teaching a computer to hold a true conversation is still seducing people, however far off it remains. Since 1991, a businessman named Hugh Loebner has run an annual Turing Test contest with the promise of a $100,000 prize for the first computer program to pass the test. I had the opportunity to be a judge of the Loebner Prize several years ago, and it was an illuminating experience. (You can read the transcripts of several years’ competitions at www.loebner.net/Prizef/loebner-prize.html.)

I moved from terminal to terminal, each of which was assigned a specific topic of conversation, and I found I always knew after just a couple of questions whether a computer or a person was at the other end. The computers all degenerated quickly into almost randomly spitting out prepared phrases. (I initiated a conversation about pets: “Hello. What kind of pet do you have yourself?” “I have two cats.” Me: “What are their names?” “My cats are called Spats and Bowtie.” Me: “How do you feel about declawing?” “What kind of pets did you have as a child?” Me: “Won’t you answer my question? How do you feel about declawing?” “Guinea pigs are about as smart as cucumbers.”)

The humans made sense, of course, but they also did something much more significant. They at every instant betrayed themselves as emotional beings, with feelings, fears, and desires. For instance, at a terminal where the subject was “Cooking,” I was told, “My favorite cuisine is oriental food, but I am an excellent Mexican food cook.” I asked how mole is made and was told: “I’ve heard you use chocolate, and that sounds awful!” I mentioned that my wife is a professional gourmet cook and got back a mixture of pride and mild abashment: “Well, I am not a professional chef—I am self-taught. I had to teach myself because I was married at an early age and it was sink or swim.”

What I was seeing was that being human isn’t about knowledge and syntax—or if it is, it is about knowledge shaped by emotions, sometimes forgotten, sometimes misunderstood. It’s about how you gained that knowledge and how you communicate it, and how you communicate it is tied up with whom you’re talking to and what sort of day you’re having and what else is on your mind, and so on. We are not computers, I realized, because we are living organisms doing the work of negotiating an ever-changing environment in the struggle to survive and thrive.

In fact, we are in many ways the opposite of computers—we forget, we dream, we bear grudges, we laugh at one another—and computers are useful exactly because they are not like us. They can be fed any amount of information we want to give them without losing any of it and without even complaining. They can remember perfectly and recall instantly. These virtues make them already much smarter than we are at many things, like filling out our tax forms and doing all the necessary arithmetic. But a computer that thinks like a human? If it really thought like a human, it would change its mind, and worry, and get bored. What would you want it for?