Sunday, September 22, 2024

Experiences, mechanisms, behaviors and LLMs

This is another post that sat in the attic for a few years.  It overlaps a bit with some later posts, but I thought it was still worth dusting off and publishing.  By "dusting off", I mean "re-reading, trying to edit, and then rewriting nearly everything but the first few paragraphs from scratch, making somewhat different points."


Here are some similar-looking questions:
  • Someone writes an application that can successfully answer questions about the content of a story it's given.  Does it understand the story?
  • Other primates can watch each other, pick up cues such as where the other party is looking, and react accordingly.  Do they have a "theory of mind", that is, some sort of mental model of what the other is thinking, or are they just reacting directly to where the other party is looking and other superficial clues (see this previous post for more detail)?
  • How can we tell if something, whether it's a person, another animal, an AI or something else, is really conscious, that is, having conscious experiences as opposed to somehow unconsciously doing everything a conscious being would do?
  • In the case of the hide-and-seek machine learning agents (see this post and this one), do the agents have some sort of model of the world?
  • How can you tell if something, whether it's a baby human, another animal or an AI, has object permanence, that is, the ability to know that an object exists somewhere that it can't directly sense?
  • In the film Blade Runner, is Deckard a replicant?
These are all questions about how things, whether biological or not, understand and experience the world (the story that Blade Runner is based on says this more clearly in its title, Do Androids Dream of Electric Sheep?).  They also have a common theme of what you can know about something internally based on what you can observe about it externally.  That was originally going to be the main topic, but the previous post on memory covered most of the points I really wanted to make, although from a different angle.

In any case, even though the questions seem similar, some differences appear when you dig into and try to answer them.

The question of whether something is having conscious experiences, or just looks like it, also known as the "philosophical zombie" problem, is different from the others in that it can't be answered objectively, because having conscious experiences is subjective by definition.  As to Deckard, well, isn't it obvious?

There are several ways to interpret the others, according to a distinction I've already made in a couple of other posts:
  • Does the maybe-understander experience the same things as we do when we feel we understand something (perhaps an "aha ... I get it now" sort of feeling)?  As with the philosophical zombie problem, this is in the realm of philosophy, or at least it's unavoidably subjective.  Call this the question of experience.
  • Does the maybe-understander do the same things we do when understanding something (in some abstract sense)?  For example, if we read a story that mentions "tears in rain", does the understander have something like memories of crying and of being in the rain, that it combines into an understanding of "tears in rain" (there's a lot we don't know about how people understand things, but it's probably roughly along those lines)?  Call this the question of mechanism.
  • Does the maybe-understander behave similarly to how we do if we understand something?  For example, if we ask "What does it mean for something to be washed away like tears in rain?" can it give a sensible answer?  Call this the question of behavior.
The second interpretation may seem like the right one, but it has practical problems.  Rather than just knowing what something did, like how it answered a question, you have to be able to tell what internal machinery it has and how it uses it, which is difficult to do objectively (I go into this from a somewhat different direction in the previous post).

The third interpretation is much easier to answer rigorously and objectively, but, once you've decided on a set of test cases, what does a "yes" answer actually mean?  At the time of this writing, chatbots can give a decent answer to a question like the one about tears in rain, but it's also clear that they don't have any direct experience of tears, or rain.

Over the course of trying to understand AI in general, and the current generation in particular, I've at least been able to clarify my own thinking concerning experience, mechanism and behavior: It would be nice to be able to answer the question of experience, but that's not going to happen.  It's not even completely possible when it comes to other people, much less other animals or AIs, even if you take the commonsense position that other people do have the same sorts of experiences as you do.

You and I might look at the same image or read the same text and say similar things about it, but did you really experience understanding it the way I did?  How can I really know?  The best I can do is ask more questions, look for other external cues (did you wrinkle your forehead when I mentioned something that seemed very clear to me?) and try to form a conclusion as best I can.

Even understanding of individual words is subjective in this sense.  The classic question is whether I understand the word blue the same way you do.  Even if some sort of functional MRI can show that neurons are firing in the same general way in our brains when we encounter the word blue, what's to say I don't experience blueness in the same way you experience redness and vice versa?

The question of behavior is just the opposite.  It's knowable, but not necessarily satisfying.  The question of mechanism is somewhere in between.  It's somewhat knowable.  For example, the previous post talks about how memory in transformer-based models appears to be fundamentally different from our memory (and that of RNN-based models).  It's somewhat satisfying to know something more about how something works, in this case being able to say "transformers don't remember things the way we do".

Nonetheless, as I discussed in a different previous post, the question of behavior is most relevant when it comes to figuring out the implications of having some particular form of AI in the real world.  There's a long history of attempts to reason "This AI doesn't have X, like we do, therefore it isn't generally intelligent like we are" or "If an AI has Y, like we do, it will be generally intelligent and there will be untold consequences", only to have an AI appear that people agree has Y but doesn't appear to be generally intelligent.  The latest Y appears to be "understanding of natural language".

But let's take a closer look at that understanding, from the point of view of behavior.  There are several levels of understanding natural language.  Some of them are:
  • Understanding of how words fit together in sentences.  This includes what's historically been called syntax or grammar, but also more subtle issues like how people say big, old, gray house rather than old, gray, big house 
  • Understanding the content of a text, for example being able to answer "yes" to Did the doctor go to the store? from a text like The doctor got up and had some breakfast.  Later, she went to the store.  Questions like these don't require any detailed understanding of what words actually mean. 
  • Understanding meaning that's not directly in a text.  If the text is The doctor went to the store, but the store was closed.  What day was it?  The doctor remembered that the regular Wednesday staff meeting was yesterday.  There was a sign on the door: Open Sun - Wed 10 to 6, Sat noon to 6, then understanding at this level means answering Did the doctor go to the store? with something like Yes, but it was Thursday and the store was closed, rather than a simple yes without further explanation.
From a human point of view, the stories in the second and third bullet points may seem like the same story in different words, but from an AI point of view one is much harder than the other.  Current chatbots, though, can do all three of these, so from a behavioral point of view it's hard to argue that they don't understand text, even though they clearly don't use the same mechanisms we do.
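
To make the behavioral framing concrete, here's a minimal sketch of how one might probe those three levels against a chatbot.  It assumes the OpenAI Python client purely as an example; the model name is a placeholder, and the prompts are just the stories above, so treat it as an illustration of the kind of test rather than a rigorous evaluation.

```python
from openai import OpenAI  # assumes the OpenAI Python client; any chat API would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whatever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Level 1: how words fit together in sentences.
print(ask("Which sounds more natural: 'big, old, gray house' or 'old, gray, big house'? Why?"))

# Level 2: content that's directly in the text.
story = ("The doctor got up and had some breakfast. "
         "Later, she went to the store.")
print(ask(f"{story}\n\nDid the doctor go to the store?"))

# Level 3: meaning that's not directly in the text.
story = ("The doctor went to the store, but the store was closed. What day was it? "
         "The doctor remembered that the regular Wednesday staff meeting was yesterday. "
         "There was a sign on the door: Open Sun - Wed 10 to 6, Sat noon to 6.")
print(ask(f"{story}\n\nDid the doctor go to the store? Explain."))
```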

This is a fairly recent development.  The earlier draft of this post noted that chatbots at the time might do fine for a prompt that required knowing that Thursday comes after Wednesday but completely fail on the same prompt using Sunday and Monday.  Current models do much better with this sort of thing, so in some sense they know more and understand better than the ones from 2019, even if it's not clear what the impact of this has been in the world at large.

Chatbots don't have direct experience of the physical world or social conventions.  What they do have is the ability to process text about experiences in the physical world and social conventions.  One way of looking at a chatbot is as a simulation of "what would the internet say about this?" or, a bit more precisely, "based on the contents of the training text, what text would be generated in response to the prompt given?"  Since that text was written (largely) by people with experiences of the physical world and social conventions, a good simulation will produce results similar to those of a person.

From the point of view of behavior, this is interesting.  An LLM is capturing something about the training text that enables behavior that we would attribute to understanding.

It might be interesting to combine a text-based chatbot that can access textual information about the real world with a robot actually embedded in the physical world, and I think there have been experiments along those lines.  A robot understands the physical world in the sense of being able to perceive things and interact with them physically.  In what sense would the combined chatbot/robot system understand the physical world?

From the point of view of mechanism, there are obvious objections to the idea that chatbots understand the text they're processing.  In my view, these are valid, but how relevant they are depends on your perspective.  Let's look at a couple of possible objections.

It's just manipulating text.  This hearkens back to early programs like ELIZA, which manipulated text in very obvious ways, like responding to I feel happy with Why do you feel happy? because the program will respond to I feel X with Why do you feel X? regardless of what X is.  While the author of ELIZA never pretended it was understanding anything, it very much gave the appearance of understanding if you were willing to believe it could understand to begin with, something many people, including the author, found deeply unsettling.
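
For a sense of how obvious ELIZA's manipulation was, here's a minimal sketch of an ELIZA-style rule.  It's my own toy reconstruction, not Weizenbaum's program; the point is that the rule operates purely on the surface form of the text.

```python
import re

# A single ELIZA-style rule: respond to "I feel X" with "Why do you feel X?".
# Pure surface-level substitution; nothing here models what "feel" means.
def eliza_reply(utterance: str) -> str:
    match = re.match(r"i feel (.+?)[.!]?$", utterance.strip(), re.IGNORECASE)
    if match:
        return f"Why do you feel {match.group(1)}?"
    return "Please go on."  # generic fallback, another classic ELIZA move

print(eliza_reply("I feel happy"))         # -> Why do you feel happy?
print(eliza_reply("I feel like a robot"))  # -> Why do you feel like a robot?
```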

On the one hand, it's literally true that an LLM-based chatbot is just manipulating text.  On the other hand, it's doing so in a far from obvious way.  Unlike ELIZA, an LLM is able to encode, one way or another, something about how language is structured, facts like "Thursday comes after Wednesday" and implications like "if a store's hours say it's open on some days, then it's closed on the others" (an example of "the exception proves the rule" in the original sense -- sorry, couldn't help it).

As the processing becomes more sophisticated, the just in It's just manipulating text does more and more work.  At the present state of the art, a more accurate statement might be It's manipulating text in a way that captures something meaningful about its contents.

It's just doing calculations.  Again, this is literally true.  At the core of a current LLM is a whole lot of tensor-smashing, basically multiplying and adding numbers according to a small set of well-defined rules, quadrillions of times (the basic unit of computing power for the chips that are used is the teraflop, that is, a trillion floating-point arithmetic operations per second; single chips can do hundreds of teraflops, and many such chips may be involved in answering a particular query).
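
To give a flavor of what all that tensor-smashing consists of, here's a toy, single-head version of the attention calculation at the core of a transformer, written with NumPy.  It's a sketch of the arithmetic only, at miniature sizes, not any particular model's implementation.

```python
import numpy as np


def attention(Q, K, V):
    """Toy single-head scaled dot-product attention: nothing but
    multiplications, additions, and a softmax (exponentials and divisions)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])            # (seq, seq) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                                 # weighted sums of value vectors


rng = np.random.default_rng(0)
seq_len, d = 8, 16          # toy sizes; real models use lengths and widths in the thousands
Q = rng.standard_normal((seq_len, d))
K = rng.standard_normal((seq_len, d))
V = rng.standard_normal((seq_len, d))

print(attention(Q, K, V).shape)  # (8, 16)
```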

But again, that just is doing an awful lot of work.  Fundamentally, computers do two things:
  • They perform basic calculations, such as addition, multiplication and various logical operations, on blocks of bits
  • They copy data from one location to another, based on the contents of blocks of bits
That second bullet point includes both conditional logic (since the instruction pointer is one place to put data) and the "pointer chasing" that together underlie a large swath of current software and were particularly important in early AI efforts.  While neural net models do a bit of that, the vast bulk of what they do is brute calculation.  If anything, they're the least computer science-y and most calculation-heavy AIs.
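
As a rough illustration of that contrast (a toy example of my own, not drawn from any particular system), the first function below works in the pointer-chasing, branch-heavy style of much conventional software and early symbolic AI, while the second works in the multiply-and-add style that dominates inside a neural net.

```python
# Pointer chasing: follow references from one location to another,
# with a conditional branch at every step.
def find(node, target):
    while node is not None:
        if node["value"] == target:   # data-dependent branch
            return node
        node = node["next"]           # follow the pointer
    return None


# Brute calculation: the same tiny arithmetic step, repeated over and over,
# with essentially no branching; a one-line stand-in for the tensor math above.
def dot(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total


chain = {"value": 1, "next": {"value": 2, "next": {"value": 3, "next": None}}}
print(find(chain, 3)["value"])                 # 3
print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))   # 32.0
```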

Nonetheless, all that calculation is driving something much more subtle, namely simulating the behavior of a network of idealized neurons, which collectively behave in a way we only partly understand.  If an app for, say, calculating the price of replacing a deck or patio does a calculation, we can follow along with it and convince ourselves that the calculation is correct.  When a pile of GPUs cranks out the result of running a particular stream of input through a transformer-based model, we can make educated guesses as to what it's doing, but in many contexts the best description is "it does what it does".

In other words, it's just doing calculations may look the same as it's just doing something simple, but that's not really right.  It's doing lots and lots and lots of simple things on far too much data for a human brain to understand directly.

All of this is just another way to say that while the question of mechanism is interesting, and we might even learn interesting things about our own mental mechanisms by studying it, it's not particularly helpful in figuring out what to actually do regarding the current generation of AIs.
