Wednesday, June 20, 2012

What is, or isn't, a theory of mind? Part 1: Objects

One notion of self-awareness revolves around the idea of a theory of mind, that is, a mental model of mental models.

Strictly speaking, having a theory of mind doesn't imply self-awareness.  I could have a very elaborate concept of others' mental states without being aware of my own.  This would go beyond normal human frailty like my not being aware of why I did some particular thing or forgetting how others might be affected by my actions.  It would mean being able to make inferences like "he likes ice cream so there will probably be a brownie left" without being aware that I like things.  That seems unlikely, but neurology is a wide and varied landscape.  There may well be people with just such a condition.

This is clearly brain-stretching stuff, so let's try to ease into it.  In this post, I want to start with a smaller, related problem:  What would a theory of objects look like, and how could you tell if something has it?  What we're trying to describe here is some sort of notion that the world contains discrete objects, which have a definite location and boundaries and which we can generally interact with, for example by causing them to move.  This leaves room for things that aren't discrete objects, like sunshine or happiness or time, but it does cover a lot of interesting territory.

Not every living thing seems to have such a theory of objects.  A moth flying toward a light probably doesn't have any well-developed understanding of what the light is.  Rather, it is capable of flying in a given direction, and some part of its nervous system associates "more light in that direction" with "move in that direction".  It's the stimulus of the light, not the object producing the light, that the moth is responding to.  In other words, this is a simple stimulus-response interaction.
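To make "simple stimulus-response" concrete, here is a minimal sketch in Python (the two-sensor setup and the update rule are my own illustration, not a claim about actual moth neurology).  Nothing in it represents a light source as a thing; it just maps a brightness difference onto a turn:

    def phototaxis_step(left_brightness, right_brightness, heading, turn_rate=0.1):
        """One stimulus-response update: veer toward the brighter side
        (taking a positive change in heading to be a turn to the right).
        No light source is represented anywhere in here, just a mapping
        from the current sensor readings to a change in heading.  Take
        away the stimulus and the behavior has nothing to work with."""
        return heading + turn_rate * (right_brightness - left_brightness)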

On the other hand, a theory of objects is not some deep unique property of the human mind.  A dog chasing a frisbee clearly does not treat the frisbee as an oval blob of color.  It treats it as a discrete, moving object with a definite position and trajectory in three dimensions.

You might fool the dog for a moment by pretending to throw the frisbee but hanging onto it instead, but the dog will generally abandon the chase for the not-flying disc in short order and retarget itself on the real thing.  It can recognize discs of different shapes and sizes and react to them as things to be thrown and caught.  It's hard to imagine a creature doing such a thing without some abstract mental representation of the disc -- and of you for that matter.

Likewise a bird or a squirrel stashing food for the winter and recovering it months later must have some representation of places and, if not objects, a "food-having" attribute to apply to those places.  That they are able to pick individual nuts and take them to those places also implies some sort of capability beyond reacting to raw sense data.

(but on the other hand ... ants are able to move objects from place to place, bees are able to locate and return to flowers ... my fallback here and elsewhere is that rather than one single thing we can call a "theory of objects" there must be many different object-handling facilities, some more elaborate than others ... and dogs and people have more elaborate facilities than do insects).

I've been careful in the last few paragraphs to use terms like "facility" and "representation" instead of "concept" or "idea" or such.  I'm generally fine with more loaded terms, which suggest something like thought as we know it, but just there I was trying to take a particularly mechanistic view.

So what sort of experiment could we conduct to determine whether something has a theory of objects, as opposed to just reacting to particular patterns of light, sound and so forth?  We are looking for situations where an animal with a theory of objects would behave differently from one without.

One key property of objects is that they can persist even when we can't sense them.  Technically, this goes under the name of object permanence.  For example, if I set up a screen, stand to one side of it and throw a frisbee behind it, I won't be surprised if a dog heads for the other side of the screen in anticipation of the frisbee reappearing from behind it.  Surely that demonstrates that the dog has a concept of the frisbee as an object.

Well, not quite.  Maybe the dog just instinctively reacts to a moving blob of color and continues to move in that direction until something else captures its attention.  Ok then, what if the frisbee doesn't come out from behind the screen?  Perhaps I've placed a soft net behind the screen that catches the frisbee soundlessly.  If the dog soon abandons its chase and goes off to do something else, we can't tell much.  But if it immediately rushes behind the screen, that's certainly suggestive.

However ... one can continue to play devil's advocate here.  After all, the two scenes, of the frisbee emerging or staying hidden, necessarily look different.  In one case there is a moving blob of color -- causing the dog to move -- followed by another blob of moving color.  In the other, there is no second movement.  So perhaps the hard-wiring is something like "Move in the direction of a moving blob of color.  If it disappears for X amount of time, move back toward the source."  That wouldn't quite explain why the dog might go behind the screen, but with a bit more thought we can probably explain that away.
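In fact, that hypothetical hard-wiring is simple enough to write down.  Here is a minimal sketch in Python (the timeout constant standing in for the X above, and the use of a signed number for direction, are assumptions for the sake of illustration).  The point is that such a rule mimics object permanence for a single disappearance without storing any fact about an object:

    def blob_chaser_step(blob_visible, last_direction, time_unseen, timeout=2.0):
        """The hypothesized hard-wired rule: chase a moving blob of color,
        and if it has been out of sight for longer than the timeout (the X
        above), head back toward where it came from.  Nothing here
        represents an object, only the latest stimulus and a timer."""
        if blob_visible or time_unseen < timeout:
            return last_direction   # keep moving toward the (last seen) blob
        return -last_direction      # give up and head back toward the source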

What we need in order to really put the matter to rest is a combinatorial explosion.  A combinatorial explosion occurs when a few basic pieces can produce a huge number of combinations.  For example, a single die can show any of 6 numbers, two dice can show 36 combinations, three can show 216, four can show 1296 and so forth.  As the number of dice grows, it quickly becomes impractical to keep track of all the possible combinations separately.

If something, for whatever reason, reacts one way to combinations of eight dice that total less than 10 and a different way to those that total 10 or more, it's hard to argue that it's simply reacting separately to the 9 particular combinations that total less than 10 (all ones, plus the eight ways to get one two and seven ones) and to the other 1,679,607.  Rather, the simplest explanation is that it has some concept of number.  On the other hand, if we're only experimenting with a single die, and a one gets a different reaction from the other numbers, it might well be that a lone circle has some sort of special status.
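Those counts are easy to check by brute force.  A short Python sketch (nothing clever, just enumerating every ordered outcome for eight dice):

    from itertools import product

    total = 0
    under_10 = 0
    for roll in product(range(1, 7), repeat=8):  # every ordered outcome for 8 dice
        total += 1
        if sum(roll) < 10:
            under_10 += 1

    print(total)             # 1679616, that is, 6**8
    print(under_10)          # 9: all ones, plus eight ways to place a single two
    print(total - under_10)  # 1679607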

In the case of the frisbee and screen experiment, we might add more screens and have grad students stand behind them and randomly catch the frisbee and throw it back the other way.  If there are, say, five screens and the dog can follow the frisbee from the starting position to screen four, back to screen two and finally out the far side, and can consistently follow randomly chosen paths of similar complexity, we might as well accept the obvious:  A dog knows what a frisbee is.
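The same arithmetic applies to the screens.  Here is a rough count in Python (a sketch; it assumes each throw sends the frisbee from its current hiding place to any different one, which simplifies the actual setup):

    def count_paths(screens=5, throws=3):
        """Count the distinct routes a frisbee could take: the first throw
        can go to any of the screens, and each later throw to any
        different screen.  Far too many to hard-wire one reaction each."""
        paths = screens
        for _ in range(throws - 1):
            paths *= screens - 1
        return paths

    print(count_paths(throws=3))  # 5 * 4 * 4 = 80
    print(count_paths(throws=6))  # 5120

Even with a handful of screens, the number of possible routes quickly outruns any plausible stock of canned reactions, which is exactly what the combinatorial-explosion test is after.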

Why not just accept the obvious to begin with?  Because not all obvious cases are so obvious.  When we get into borderline cases, our intuition becomes unreliable.  Different people can have completely different rock-solid intuitions and the only way to sort it out is to run an experiment that can distinguish the two cases.


This is where we are with primate psychology and theories of mind.  It's pretty clear that chimps (and here I really mean chimps and/or bonobos), for example, have much of the same cognitive machinery we do, including not only a theory of objects and some ability to plan, but also such things as an understanding of social hierarchies and kinship relations.

On the other hand, attempts to teach chimps human language have been fairly unconvincing.  It's clear that they can learn vocabulary.  This is notable, even though understanding of vocabulary is not unique to primates.  There are dogs, for example, that can reliably retrieve any of dozens of objects from a different room by name.

There has been much less success, however, with understanding sentences with non-trivial syntax, on the order of "Get me the red ball from the blue box under the table" when there is also, say, a red box with a blue ball in it on the table.  Clearly chimps have some concept of containment, and color, and spatial relationships, but that doesn't seem to carry through to their language facility, such as it is.

So which of these facilities do we have in common with them, and which do we not?  In particular, do our primate cousins have some sort of theory of mind?

That brings us back to the original question of what constitutes a theory of mind, and the further question of what experiments could demonstrate its presence or absence.

People who work closely with chimps are generally convinced that they can form some concept of what their human companions are thinking and can adjust their behavior accordingly.  However, we humans are strongly biased toward attributing mental states to anything that behaves enough like it has them -- we're prone to assuming things (including ourselves, some might say) are smarter than they are.

Direct interaction in a naturalistic setting is valuable, and most likely better for the chimp subjects, but working with a chimp that has every appearance of understanding what you're up to doesn't necessarily rule out more simplistic explanations.  For example, if the experimenter looks toward something and the ape looks in the same direction, did it do so because it reasoned that the experimenter was intentionally looking that direction and therefore there must be something of interest there, or simply out of some instinct to follow the gaze of other creatures?

These are thornier questions, with quite a bit of research and debate accumulated around them over the past several decades.  I want to say more about them, though not necessarily in the next post.  I'm still reading up at the moment.

[I ended up continuing in this post]
