Wednesday, August 18, 2010

What was AI?

I've previously claimed that, in very broad strokes, Artificial Intelligence has progressed through three stages:
  1. Early breathless predictions (not necessarily by those doing the research) of superhumanly intelligent systems Just Around the Corner.
  2. Harsh reality and a comprehensive round of debunking and disillusionment. Actual research continues anyway.
  3. (The present) All the hard work in stage (2) begins to bear fruit. Respectably hard problems are solved by a combination of persistent and mostly incremental improvements in software, combined with rapidly increasing hardware horsepower.
The curious thing about (3) is that you don't generally hear these accomplishments described as "AI", or hear the term much at all, at least not outside the major labs (Stanford's SAIL is still going strong and has always called itself an AI lab, and likewise for MIT's CSAIL, in a snazzy modern building no less). Even though the current stage, stage 3 by the reckoning above, has provided us with a great deal of useful machinery that would have been called AI in previous times, it's relatively rare to hear engineers outside the field talking about AI as such. In the early 80s (when I was starting out), you'd hear it quite a bit.


Why isn't, say, a phone that can understand voice commands called AI today? One can plausibly blame fashion. The general public typically sees new technology through its marketing, most marketing terms have a limited shelf life, and "AI" as a marketing term went stale a long, long while back. To compound the matter, the term is still poisoned by the ugliness of stage 2.

While there is almost certainly something to that theory, I think there's another, more subtle factor at play. On a certain level, AI never meant neural networks, automated proof systems or even speech-enabled phones. It meant exactly what Turing was describing back in 1950: artificial human intelligence -- something that thinks so much like a human that you can't tell it from the real thing. Even sci-fi supercomputers have generally been expected to think like us, only better and faster.

A neural network mining some pile of data, or even a chess program or a voice-enabled phone, is not acting particularly human, though one could argue that the phone comes close in its limited world. Likewise, there are industrial robots all over the place, but none of them looks like it stepped out of I, Robot.

AI under Turing's definition is not a particularly prominent part of the actual research, most likely because people are already good at being people. We tend to use computers for things people aren't good at -- performing massive calculations without error, remembering huge amounts of information, doing repetitive tasks ... those sorts of things. As part of that, it helps if computers relate well to humans -- understanding our languages, adhering to our social conventions and so forth -- and while that's also an active area of research, it's not absolutely necessary or even central.

As a result, we have an awful lot of good research and engineering and useful applications, useful enough that we use them even when they're frustratingly imperfect, but we don't have Robbie the Robot or Star Trek's omniscient Computer. If there's a failure here, it's not of engineering, but of imagination. It turns out it's at least as useful if our creations don't think like us.
