Wednesday, August 18, 2010

What was AI?

I've previously claimed that, in very broad strokes, Artificial Intelligence has progressed through three stages:
  1. Early breathless predictions (not necessarily by those doing the research) of superhumanly intelligent systems Just Around the Corner.
  2. Harsh reality and a comprehensive round of debunking and disillusionment. Actual research continues anyway.
  3. (The present) All the hard work in stage (2) begins to bear fruit. Respectably hard problems are solved by persistent, mostly incremental improvements in software, combined with rapidly increasing hardware horsepower.
The curious thing about (3) is that you don't generally hear the term "AI" mentioned in conjunction with these accomplishments, or much at all, at least not outside the major labs (Stanford's SAIL is still going strong and has always called itself an AI lab, and likewise for MIT's CSAIL, in a snazzy modern building no less). Even though the current stage, stage 3 by the reckoning above, has provided us with a great deal of useful machinery that would have been called AI in earlier times, it's relatively rare to hear engineers outside the field talking about AI as such. In the early 80s (when I was starting out), you'd hear it quite a bit.


Why isn't, say, a phone that can understand voice commands called AI today? One can plausibly blame fashion. The general public typically sees new technology via its marketing. Most marketing terms have a limited shelf life, and "AI" as a marketing term went stale a long, long while back. To compound the matter, the term "AI" is still poisoned by the ugliness of stage 2.

While there is almost certainly something to that theory, I think there's another, more subtle factor at play. On a certain level, AI never meant neural networks, automated proof systems or even speech-enabled phones. It meant exactly what Turing said it meant back in 1950: Artificial Human intelligence -- something that thinks so much like a human that you can't tell it from the real thing. Even sci-fi supercomputers have generally been expected to think like us, only better and faster.

A neural network mining some pile of data, or even a chess program or a voice-enabled phone, is not acting particularly human, though one could argue that the phone comes close in its limited world. Likewise, there are industrial robots all over the place, but none of them looks like it stepped out of I, Robot.

AI under Turing's definition is not a particularly prominent part of the actual research, most likely because people are already good at being people. We tend to use computers for things people aren't good at -- performing massive calculations without error, remembering huge amounts of information, doing repetitive tasks ... those sorts of things. As part of that, it's good if computers relate well to humans -- understanding our languages, adhering to our social conventions and so forth -- and while that's also an active area of research, it's not absolutely necessary or even particularly prominent.

As a result, we have an awful lot of good research, engineering and useful applications, useful enough that we use them even when they're frustratingly imperfect, but we don't have Robby the Robot or Star Trek's omniscient Computer. If there's a failure here, it's not of engineering, but of imagination. It turns out it's at least as useful if our creations don't think like us.

Tuesday, August 17, 2010

Because "The Web" just wasn't a broad enough topic ...

Well, the title pretty much says it.

After 500 or so posts of Field Notes on the Web, I've decided to relax and stretch out a bit. As I said at the time, Field Notes isn't going away, but the self-imposed ten-post-a-month quota has, leaving more time free for other pursuits such as ... um ... blogging.

Since the whole point of the exercise is to relax a bit, there will be no quota here and the topic will be whatever I feel like at the moment. In other words, it'll be a more or less bog-standard blog.

That said, I expect to stick to non-fiction, particularly commentaries, half-baked analyses and random speculations, roughly on the order of Field Notes but not about the web (if it is about the web, it'll end up on the original blog, of course). I also hope to keep to topics on which there isn't an obvious surplus of opinion in the blogosphere. Better to be a big fish, or perhaps more aptly the only fish, in a small pond.

If you're still with me after all that, welcome aboard! We may not get very far very fast, but I hope at least it'll be a pleasant excursion.