Sunday, March 1, 2020

Intelligence and intelligence

I've been meaning for quite a while to come back to the hide-and-seek AI demo, but while mulling that over I realized something about a distinction I'd made in the first post. I want to mention that brief(ish)ly in its own post, since it's not directly related to what I wanted to say about the demo itself.

In the original post, I distinguished between internal notions of intelligence, concerning what processes are behind the observed behavior, and external notions which focus on the behavior itself (note to self: find out what terms actual cogsci/AI researchers use -- or maybe structural and functional would be better?).

Internal definitions on the order of "Something is intelligent if it's capable of learning and dealing with abstract concepts" seem satisfying, even self-evident, until you try to pin down exactly what is meant by "learning" or "abstract concept".  External definitions are, by construction, more objective and measurable, but often seem to call things "intelligent" that we would prefer not to call intelligent at all, or call intelligent in a very limited sense.

The classic example would be chess (transcribing speech and recognizing faces would be others).  For quite a while humans could beat computers at chess, even though computers, even early ones, could calculate far more positions than any human, and the assumption was that humans had something -- abstract reasoning, planning, pattern recognition, whatever -- that computers did not have and might never have.  Humans would keep winning, the thinking went, until computers could reason abstractly, plan, recognize patterns or do whatever else it was that only humans could do.  In other words, chess clearly required "real intelligence".

Then Deep Blue beat Kasparov through sheer calculation, playing a "positional" style that only humans were supposed to be able to play.  Clearly a machine could beat even the best human players at chess without having anything one could remotely call "learning" or "abstract concepts".  As a corollary, top-notch chess-playing is not a behavior that can be used to define the kind of intelligence we're really interested in.

This is true even with the advent of Alpha Zero and similar neural-network-driven engines*. Even if we say, for the sake of argument, that neural networks are intelligent like we are, the original point still holds.  Things that are clearly unintelligent can play top-notch chess, so "plays top-notch chess" does not imply "intelligent like we are".  If neural networks are intelligent like we are, it won't be because they can play chess, but for other reasons.

The hide-and-seek demo is exciting because, on the one hand, it's entirely behavior-based.  The agents are trained on the very simple criterion of whether any hiders are visible to the seekers.  On the other hand, the agents can develop capabilities, particularly object permanence, that have been recognized as hallmarks of intelligence since before there were computers (there's a longer discussion behind this, which is exactly what I want to get to in the next post on the topic).
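Just to make clear how little the training objective specifies, here's a rough Python sketch of what that criterion amounts to. Everything in it -- the Agent class, the distance-based visibility check, the exact reward values -- is invented for illustration; the real OpenAI environment does proper line-of-sight checks and much more, but the shape of the signal is the point:

```python
from dataclasses import dataclass

# Illustrative sketch only: the names and geometry here are made up.
# OpenAI's actual hide-and-seek environment is far more involved.

@dataclass
class Agent:
    x: float
    y: float

def visible(seeker: Agent, hider: Agent, max_range: float = 10.0) -> bool:
    """Stand-in visibility test: within range counts as seen.

    The real environment checks line of sight against walls and boxes;
    a plain distance check keeps this sketch self-contained."""
    return (seeker.x - hider.x) ** 2 + (seeker.y - hider.y) ** 2 <= max_range ** 2

def step_rewards(hiders, seekers):
    """Per-step team rewards driven entirely by visibility.

    Hiders get +1 if no seeker sees any hider, -1 otherwise; seekers
    get the opposite.  Forts, ramps and tool use are never mentioned:
    whatever the agents do with objects has to emerge from this one
    bit of feedback."""
    any_hider_seen = any(visible(s, h) for s in seekers for h in hiders)
    hider_reward = -1.0 if any_hider_seen else 1.0
    return {"hiders": hider_reward, "seekers": -hider_reward}

# One seeker close enough to spot the lone hider:
print(step_rewards(hiders=[Agent(0, 0)], seekers=[Agent(3, 4)]))
# -> {'hiders': -1.0, 'seekers': 1.0}
```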

In other words, we have a nice, objective external definition that matches up well with internal definitions.  Something that can
  • Start with only basic knowledge and capabilities (in this case some simple rules about movement and objects in the simulated environment)
  • Develop new behaviors in a competition against agents with the same capabilities
is pretty clearly intelligent in some meaningful sense, even if it doesn't seem as intelligent as us.

If we want to be more precise about "develop new behaviors", we could either single out particular behaviors, like fort building or ramp jumping, or just require that any new agent we're trying to test starts out by losing heavily to the best agents from this demo but learns to beat them, or at least play competitively.
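As a sketch of what that second option might look like, here's some hypothetical Python. The play_match and train_on hooks and the win-rate thresholds are placeholders I'm inventing; the substance is that the test is stated entirely in terms of match outcomes, with no reference to how the candidate works inside:

```python
import random

def passes_test(candidate, frozen_champions, play_match,
                episodes=10_000, window=500):
    """Behavioral test: does the candidate start out losing heavily to
    the demo's best agents, then learn to at least play competitively?

    frozen_champions -- best agents from the demo, no longer learning
    play_match(a, b) -- hypothetical hook returning 1 if a wins, else 0
    """
    results = []
    for _ in range(episodes):
        opponent = random.choice(frozen_champions)
        won = play_match(candidate, opponent)
        results.append(won)
        candidate.train_on(won)  # hypothetical hook: candidate keeps learning
    early_rate = sum(results[:window]) / window
    late_rate = sum(results[-window:]) / window
    # "Loses heavily" and "plays competitively" are judgment calls;
    # these thresholds are arbitrary placeholders.
    return early_rate < 0.1 and late_rate >= 0.5
```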

This says nothing about what mechanisms such an agent is using, or how it learns.  That means we might someday run into a chess-like situation where something beats the game without appearing intelligent in any other sense, maybe some future quantum computer that can try out a huge variety of possible strategies simultaneously.  Even then, we would learn something interesting.

For now, though, the hide-and-seek demo seems like a significant step forward, both in defining what intelligence might be and in producing it artificially.



* I've discussed Alpha Zero and chess engines in general at length elsewhere in this blog.  My current take is that the ability of neural networks to play moves that appear "creative" to us, and to beat purely calculation-based (alpha-beta) engines, doesn't imply intelligence, and that the ability to learn the game from nothing, while impressive, doesn't imply anything like what we think of as human intelligence, even though it's been applied to a number of different abstract games.  That isn't a statement about neural networks in general, just about these particular networks applied to the specific problem of chess and chess-like games.  There's a lot of interesting work yet to be done with neural networks in general.
