The Myth Of Artificial Intelligence


Marvin Minsky, the head of the artificial intelligence laboratory at MIT, proclaimed in 1967 that “within a generation the problem of creating ‘artificial intelligence’ will be substantially solved.” He was cocky enough to add, “Within 10 years computers won’t even keep us as pets.” Around the same time, Herbert Simon, another prominent computer scientist, promised that by 1985 “machines will be capable of doing any work that a man can do.”

That’s hardly what they’re saying nowadays. By 1982 Minsky was admitting, “The AI problem is one of the hardest science has ever undertaken.” And a recent roundtable of leading figures in the field produced remarks like, “AI as science moves very slowly, revealing what the problems are and why all the plausible mechanisms are inadequate,” and “Today, it is hard to see how we would have missed the vast complexities.” How did we come—or retreat—so far?

It all began in 1950, when the British mathematician Alan Turing wrote a paper in the journal Mind arguing that to ask whether a computer could think was “too meaningless to deserve discussion,” but proposing an alternative: a test to see if a computer could maintain a dialogue in which it convincingly passed for human. He predicted that “in about fifty years’ time it will be possible … to make [computers] play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” It was a time when computers were new and magical and seemed to have limitless possibilities. Turing ended his paper with “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Plenty of people were ready to do it too. In 1955 Allen Newell and Herbert Simon, at the RAND Corporation, showed that computers could manipulate not just numbers but symbols for anything, such as features of the real world, and therefore could handle any kind of problem that could be reduced to calculation. They then went to work on a General Problem Solver that could resolve any kind of difficulty susceptible to rules of thumb such as humans were generally believed to use. They gave that up as overambitious in 1967, but before then their work had helped inspire a host of other undertakings, the main ones at the lab at MIT under Minsky, where around 1970 a researcher named Terry Winograd developed a program that could move images of colored blocks on a computer screen in response to English-language commands. People also worked on programs to hold ordinary conversations, as Turing had suggested, and they saw many early signs of promise.

By the 1970s the young field was running into trouble. Nobody could come close to making a computer understand the sentences in a simple children’s story with the comprehension of a four-year-old. As researchers reached dead ends, they began to narrow their focus and limit their goals, working just on vision, or just on building robots that could move responsively, or on “expert systems” that could compute with a variety of information in a specific field, such as medical diagnostics.

A host of industrial applications emerged in the 1980s; a few succeeded, especially systems able to distinguish between objects in front of them, but most didn’t. In 1989 the Pentagon dropped a project to build a “smart truck” that could operate on its own on a battlefield. A milestone of artificial intelligence did appear to be reached in 1997, when IBM’s Deep Blue computer beat Garry Kasparov in a chess match, but after the dust had settled, most people looked on the feat as simply a demonstration that the game could be reduced to a mass of complex calculations.

Today the world is full of practical applications that are sometimes called artificial intelligence. These include reading machines for the blind, speech-recognition devices, and computer programs that detect financial fraud by noticing irregular behavior or that automate manufacturing schedules in response to changes in supply and demand. How intelligent are these compared with Turing’s original dream? Consider one often-cited example of a successful AI application, the Microsoft Office Assistant. That’s the cartoon computer that comes up and waves at you while you try to work in Microsoft Word. It uses something called a Bayesian belief network to guess when you need help and why. Everyone I know who encounters it just wants it to go away.
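How does that guess work? At bottom the network is applying Bayes’ rule: start with a prior belief that the user is stuck, then raise or lower that belief as evidence comes in. The short Python sketch below illustrates the arithmetic. The events and probabilities here are invented for the example; Microsoft’s actual network weighed far more signals than these.

    # A minimal sketch of Bayesian belief updating for an assistant that
    # guesses whether the user needs help. The events and probabilities are
    # hypothetical illustrations, not Microsoft's actual model.

    # Prior belief that the user needs help at any given moment
    P_NEEDS_HELP = 0.10

    # Likelihoods: P(event | needs help) and P(event | does not need help)
    LIKELIHOODS = {
        "paused_typing": (0.60, 0.20),  # hesitation is more common when stuck
        "repeated_undo": (0.50, 0.05),  # undoing the same edit suggests confusion
        "opened_dialog": (0.40, 0.25),
    }

    def probability_needs_help(observed):
        """Combine evidence with Bayes' rule, treating observations as independent."""
        p_help, p_no_help = P_NEEDS_HELP, 1.0 - P_NEEDS_HELP
        for event in observed:
            given_help, given_no_help = LIKELIHOODS[event]
            p_help *= given_help
            p_no_help *= given_no_help
        return p_help / (p_help + p_no_help)

    if __name__ == "__main__":
        # A user who pauses and then keeps hitting undo looks likely to be stuck.
        print(round(probability_needs_help(["paused_typing", "repeated_undo"]), 2))

Run on a user who pauses and then hits undo repeatedly, the belief that help is needed jumps from 10 percent to about 77 percent in this toy version, which is roughly the point at which the cartoon would pop up.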

Work aimed at emulating the functioning of the brain still goes on in pure research, but mostly without the old optimism, and the effort is pretty much split in two. People are now trying to crack the problem either from the very top down or from the very bottom up. Top down means trying to duplicate the results of human thought, typically by building up vast reserves of “commonsense” knowledge and then figuring out how to compute with it all, or by continuing to write programs that hold conversations without first figuring out how the brain does it. Bottom up means designing “neural nets,” computer versions of the basic biological connections that brains are made of, and attempting to make them grow and learn.
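To see what “grow and learn” means in the simplest possible case, here is a toy neuron in Python that teaches itself the logical OR function by nudging its connection weights after every wrong answer. It is a bare-bones perceptron written only to illustrate the idea; no research system mentioned above is remotely this small.

    # A single artificial neuron learning the logical OR function by
    # trial and error. Purely illustrative of the "bottom up" approach.
    import random

    def step(x):
        # Fire (output 1) if the weighted input exceeds zero, else stay quiet
        return 1 if x > 0 else 0

    # Training data: pairs of inputs and the desired output
    EXAMPLES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

    random.seed(0)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    rate = 0.1

    # Repeatedly nudge the weights toward producing the right answers
    for _ in range(100):
        for (x1, x2), target in EXAMPLES:
            output = step(weights[0] * x1 + weights[1] * x2 + bias)
            error = target - output
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    # After training, the neuron should answer every example correctly
    for (x1, x2), target in EXAMPLES:
        print((x1, x2), "->", step(weights[0] * x1 + weights[1] * x2 + bias))

After a hundred passes over the four examples, the neuron answers all of them correctly, its weights learned from nothing but repeated correction.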