Wednesday, September 25, 2024

Amplified intelligence, or what makes a computer a computer?

I actually cut two chunks out of What would superhuman intelligence even mean?.  I think the one that turned into this post is the more interesting of the two, but this one's short and I didn't want to discard it entirely.


Two very clear cases of amplified human intelligence are thousands of years old: writing and the abacus.  Both of them amplify human memory, long-term for writing and short-term for the abacus.  Is a person reading a clay tablet or calculating with an abacus some sort of superhuman combination of human and technology?  No?  Why not?

Calculating machines and pieces of writing are passive.  They don't do anything on their own.  They need a human, or something like a human, to have any effect.  Fair enough.  To qualify as superhuman by itself, a machine needs some degree of autonomy.

Autonomous machines are more recent than computing and memory aids.  The first water clocks were probably built two or three thousand years ago, and there is a long tradition in several parts of the world of building things that, given some source of power, will perform a sequence of actions on their own without any external guidance.

But automata like clocks and music boxes are built to perform a particular sequence of actions from start to finish, though some provide a way to change the program between performances.  Many music boxes use some sort of drum that encodes the notes of the tune and can be swapped out to play a different tune, for example.  Nevertheless, once the automaton starts its performance, it's going to perform whatever it's been set up to perform.

There's one more missing piece: The ability to react to the external world, to do one thing based on one stimulus and a different thing based on a different stimulus, that is, to perform conditional actions.  Combine this with some sort of updatable memory and you have the ability to perform different behavior based on something that happened in the past, or even multiple things that happened at different points in the past.
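
To make the distinction concrete, here's a toy sketch of my own (the scenario and names are invented purely for illustration, in Python): a music-box-style automaton performs its fixed program no matter what, while a machine with conditional actions and updatable memory can do different things depending on what it has sensed, including what it sensed earlier.

    # A fixed-sequence automaton: it performs its program start to finish,
    # ignoring the outside world entirely.
    def music_box(program):
        for note in program:
            print("play", note)

    # Conditional actions plus updatable memory: what it does now can depend
    # on a stimulus that arrived earlier.
    def conditional_machine(stimuli):
        memory = {"rained_earlier": False}
        for stimulus in stimuli:
            if stimulus == "rain":
                memory["rained_earlier"] = True
                print("close the windows")
            elif stimulus == "sunshine" and memory["rained_earlier"]:
                print("open the windows again")  # depends on the past, not just the present
            else:
                print("carry on")

    music_box(["C", "E", "G"])
    conditional_machine(["sunshine", "rain", "sunshine"])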

My guess is that both of those pieces are also older than we might think, but the particular combination of conditional logic and memory is the real difference between the modern computers that first appeared in the mid-twentieth century and the automata of the centuries before.

AGI, goals and influence

While putting together What would superhuman intelligence even mean? I took out a few paragraphs that seemed redundant at the time.  While I think that post is better for the edit, when I re-read the deleted material, I realized that there was one point in them that I didn't explicitly make in the finished post.  Here's the argument (If you have "chess engine" on your AI-post bingo card, yep, you can mark it off yet again. I really do think it's an apt example, but I'm even getting tired of mentioning it):


When it comes to the question of what the implications of AGI are, actual intelligence is one factor among many.  A superhuman chess engine poses little if any risk.  A simple non-linear control system that can behave chaotically is a major risk if it's controlling something dangerous.

To the extent that a control system with some sort of general superintelligence is hard to predict and may make decisions that don't align with our priorities, it would be foolhardy to put it directly in charge of something dangerous.  Someone might do that anyway, but that's a hazard of our imperfect human judgment.  A superhuman AI is just one more dangerous thing that humans have the potential to misuse.

The more interesting risk is that an AI with limited control of something innocuous could leverage that into more and more control, maybe through extortion -- give the system control of the power plants or it will destroy all the banking data -- or persuasion -- someone hooks a system up to social media where its accounts convince people in power to put it in charge of the power plants.

These are worthy scenarios to contemplate.  History is full of examples of human intelligences extorting or persuading people to do horribly destructive things, so why would an AGI be any different? Nonetheless, in my personal estimation, we're still quite a ways from this actually happening.

Current LLMs can sound persuasive if you don't fact-check them and don't let them go on long enough to say something dumb -- which in my experience is not very long -- but what would a chatbot ask for?  Whom would it ask?  How would the person or persons carry out its instructions?  (I initially said "its will", rather than "its instructions", but there's nothing at all to indicate that a chatbot has anything resembling will.)

You could imagine some sort of goal-directed agent using a chatbot to generate persuasive arguments on its behalf, but, at least as it stands, I'd say the most likely goal-directed agent for this would be a human being using a chatbot to generate a convincing web of deception.  But human beings are already highly skilled at conning other human beings.  It's not clear what new risk generative AI presents here.

Certainly, an autonomous general AI won't trigger a cataclysm in the real world if it doesn't exist, so in that sense, the world is safer without it.  Eventually, though, the odds are good that something will come along that meets DeepMind's definition of AGI (or ASI).  Will that AI's skills include parlaying whatever small amount of influence it starts with into something more dangerous?  Will its goals include expanding its influence, even if we don't think they do at first?

The idea of an AI with seemingly harmless goals becoming an existential threat to humanity is a staple in fiction (and the occasional computer game).  It's good that people have been exploring it, but it's not clear what conclusions to draw from those explorations, beyond a general agreement that existential threats to humanity are bad.  Personally, I'm not worried yet, at least not about AGI itself, but I've been wrong many times before.

Sunday, September 22, 2024

Experiences, mechanisms, behaviors and LLMs

This is another post that sat in the attic for a few years.  It overlaps a bit with some later posts, but I thought it was still worth dusting off and publishing.  By "dusting off", I mean "re-reading, trying to edit, and then rewriting nearly everything but the first few paragraphs from scratch, making somewhat different points."


Here are some similar-looking questions:
  • Someone writes an application that can successfully answer questions about the content of a story it's given.  Does it understand the story?
  • Other primates can watch each other, pick up cues such as where the other party is looking, and react accordingly.  Do they have a "theory of mind", that is, some sort of mental model of what the other is thinking, or are they just reacting directly to where the other party is looking and other superficial clues (see this previous post for more detail)?
  • How can we tell if something, whether it's a person, another animal, an AI or something else,  is really conscious, that is, having conscious experiences as opposed to somehow unconsciously doing everything a conscious being would do?
  • In the case of the hide-and-seek machine learning agents (see this post and this one), do the agents have some sort of model of the world?
  • How can you tell if something, whether it's a baby human, another animal or an AI, has object permanence, that is, the ability to know that an object exists somewhere that it can't directly sense?
  • In the film Blade Runner, is Deckard a replicant?
These are all questions about how things, whether biological or not, understand and experience the world (the story that Blade Runner is based on says this more clearly in its title, Do Androids Dream of Electric Sheep?).  They also have a common theme of what you can know about something internally based on what you can observe about it externally.  That was originally going to be the main topic, but the previous post on memory covered most of the points I really wanted to make, although from a different angle.

In any case, even though the questions seem similar, some differences appear when you dig in and try to answer them.

The question of whether something is having conscious experiences, or just looks like it, also known as the "philosophical zombie" problem, is different from the others in that it can't be answered objectively, because having conscious experiences is subjective by definition.  As to Deckard, well, isn't it obvious?

There are several ways to interpret the others, according to a distinction I've already made in a couple of other posts:
  • Does the maybe-understander experience the same things as we do when we feel we understand something (perhaps an "aha ... I get it now" sort of feeling)?  As with the philosophical zombie problem, this is in the realm of philosophy, or at least it's unavoidably subjective.  Call this the question of experience.
  • Does the maybe-understander do the same things we do when understanding something (in some abstract sense)?  For example, if we read a story that mentions "tears in rain", does the understander have something like memories of crying and of being in the rain that it combines into an understanding of "tears in rain"?  (There's a lot we don't know about how people understand things, but it's probably roughly along those lines.)  Call this the question of mechanism.
  • Does the maybe-understander behave similarly to how we do if we understand something?  For example, if we ask "What does it mean for something to be washed away like tears in rain?" can it give a sensible answer?  Call this the question of behavior.
The second interpretation may seem like the right one, but it has practical problems.  Rather than just knowing what something did, like how it answered a question, you have to be able to tell what internal machinery it has and how it uses it, which is difficult to do objectively (I go into this from a somewhat different direction in the previous post).

The third interpretation is much easier to answer rigorously and objectively, but, once you've decided on a set of test cases, what does a "yes" answer actually mean?  At the time of this writing, chatbots can give a decent answer to a question like the one about tears in rain, but it's also clear that they don't have any direct experience of tears, or rain.

Over the course of trying to understand AI in general, and the current generation in particular, I've at least been able to clarify my own thinking concerning experience, mechanism and behavior: It would be nice to be able to answer the question of experience, but that's not going to happen.  It's not even completely possible when it comes to other people, much less other animals or AIs, even if you take the commonsense position that other people do have the same sorts of experiences as you do.

You and I might look at the same image or read the same text and say similar things about it, but did you really experience understanding it the way I did?  How can I really know?  The best I can do is ask more questions, look for other external cues (did you wrinkle your forehead when I mentioned something that seemed very clear to me?) and try to form a conclusion as best I can.

Even understanding of individual words is subjective in this sense.  The classic question is whether I understand the word blue the same way you do.  Even if some sort of functional MRI can show that neurons are firing in the same general way in our brains when we encounter the word blue, what's to say I don't experience blueness in the same way you experience redness and vice versa?

The question of behavior is just the opposite.  It's knowable, but not necessarily satisfying.  The question of mechanism is somewhere in between.  It's somewhat knowable.  For example, the previous post talks about how memory in transformer-based models appears to be fundamentally different from our memory (and that of RNN-based models).  It's somewhat satisfying to know something more about how something works, in this case being able to say "transformers don't remember things the way we do".

Nonetheless, as I discussed in a different previous post, the problem of behavior is most relevant when it comes to figuring out the implications of having some particular form of AI in the real world.  There's a long history of attempts to reason "This AI doesn't have X, like we do, therefore it isn't generally intelligent like we are" or "If an AI has Y, like we do, it will be generally intelligent and there will be untold consequences", only to have an AI appear that people agree has Y but doesn't appear to be generally intelligent.  The latest Y appears to be "understanding of natural language".

But let's take a closer look at that understanding, from the point of view of behavior.  There are several levels of understanding natural language.  Some of them are:
  • Understanding of how words fit together in sentences.  This includes what's historically been called syntax or grammar, but also more subtle issues like how people say big, old, gray house rather than old, gray, big house.
  • Understanding the content of a text, for example being able to answer "yes" to Did the doctor go to the store? from a text like The doctor got up and had some breakfast.  Later, she went to the store.  Questions like these don't require any detailed understanding of what words actually mean. 
  • Understanding meaning that's not directly in a text.  If the text is The doctor went to the store, but the store was closed.  What day was it?  The doctor remembered that the regular Wednesday staff meeting was yesterday.  There was a sign on the door: Open Sun - Wed 10 to 6, Sat noon to 6, then correctly answering Did the doctor go to the store means saying something like Yes, but it was Thursday and the store was closed, rather than a simple yes without further explanation.
From a human point of view, the stories in the second and third bullet points may seem like the same story in different words, but from an AI point of view one is much harder than the other. But current chatbots can do all three of these, so from a behavioral point of view it's hard to argue that they don't understand text, even though they clearly don't use the same mechanisms.

This is a fairly recent development.  The earlier draft of this post noted that chatbots at the time might do fine for a prompt that required knowing that Thursday comes after Wednesday but completely fail on the same prompt using Sunday and Monday.  Current models do much better with this sort of thing, so in some sense they know more and understand better than the ones from 2019, even if it's not clear what the impact of this has been in the world at large.
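
If it helps to see what "from a behavioral point of view" might look like in practice, here's a sketch of how one could probe the three levels above.  The ask_model function is a hypothetical placeholder for whatever chatbot you have access to; nothing here is tied to a particular API.

    # Hypothetical probes for the three levels of understanding described above.
    # ask_model is a stand-in; swap in a real chatbot call to actually run the probes.
    def ask_model(prompt: str) -> str:
        return "(model's answer goes here)"

    STORY = ("The doctor went to the store, but the store was closed. "
             "The doctor remembered that the regular Wednesday staff meeting was yesterday. "
             "There was a sign on the door: Open Sun - Wed 10 to 6, Sat noon to 6.")

    probes = {
        "word order": "Which sounds more natural: 'big old gray house' or 'old gray big house'?",
        "content":    STORY + "\n\nDid the doctor go to the store? Answer yes or no.",
        "inference":  STORY + "\n\nDid the doctor go to the store? Was it open when the doctor got there?",
    }

    for level, prompt in probes.items():
        print(level, "->", ask_model(prompt))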

Chatbots don't have direct experience of the physical world or social conventions.  What they do have is the ability to process text about experiences in the physical world and social conventions.  One way of looking at a chatbot is as a simulation of "what would the internet say about this?" or, a bit more precisely, "based on the contents of the training text, what text would be generated in response to the prompt given?"  Since that text was written (largely) by people with experiences of the physical world and social conventions, a good simulation will produce results similar to those of a person.

From the point of view of behavior, this is interesting.  An LLM is capturing something about the training text that enables behavior that we would attribute to understanding.

It might be interesting to combine a text-based chatbot that can access textual information about the real world with a robot actually embedded in the physical world, and I think there have been experiments along those lines.  A robot understands the physical world in the sense of being able to perceive things and interact with them physically.  In what sense would the combined chatbot/robot system understand the physical world?

From the point of view of mechanism, there are obvious objections to the idea that chatbots understand the text they're processing.  In my view, these are valid, but how relevant they are depends on your perspective.  Let's look at a couple of possible objections.

It's just manipulating text.  This hearkens back to early programs like ELIZA, which manipulated text in very obvious ways, like responding to I feel happy with Why do you feel happy? because the program will respond to I feel X with Why do you feel X? regardless of what X is.  While the author of ELIZA never pretended it was understanding anything, it very much gave the appearance of understanding if you were willing to believe it could understand to begin with, something many people, including the author, found deeply unsettling.
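
Here's the kind of rule being described, reconstructed in a few lines of Python.  It's a caricature of the technique, not Weizenbaum's actual program.

    import re

    # One ELIZA-style rule: a fixed pattern, no understanding required.
    def eliza_reply(line: str) -> str:
        match = re.match(r"I feel (.+)", line)
        if match:
            return "Why do you feel " + match.group(1) + "?"
        return "Tell me more."

    print(eliza_reply("I feel happy"))         # Why do you feel happy?
    print(eliza_reply("I feel like a fraud"))  # same rule, whatever X happens to be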

On the one hand, it's literally true that an LLM-based chatbot is just manipulating text.  On the other hand, it's doing so in a far from obvious way.  Unlike ELIZA, an LLM is able to encode, one way or another, something about how language is structured, facts like "Thursday comes after Wednesday" and implications like "if a store's hours say it's open on some days, then it's closed on the others" (an example of "the exception proves the rule" in the original sense -- sorry, couldn't help it).

As the processing becomes more sophisticated, the just in It's just manipulating text  does more and more work.  At the present state of the art, a more accurate statement might be It's manipulating text in a way that captures something meaningful about its contents.

It's just doing calculations: Again, this is literally true.  At the core of a current LLM is a whole lot of tensor-smashing, basically multiplying and adding numbers according to a small set of well-defined rules, quadrillions of times (the basic unit of computing power for the chips that are used is the teraflop, or a trillion floating-point arithmetic operations per second; single chips can do hundreds of teraflops, and there may be many such chips involved in answering a particular query).
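
As a rough back-of-the-envelope check on "quadrillions" (the specific numbers here are illustrative guesses, not measurements):

    # Illustrative numbers only: assume 300 teraflops per chip, 8 chips,
    # and a couple of seconds of compute per query.
    ops_per_teraflop_second = 10**12
    chip_teraflops = 300
    chips = 8
    seconds_per_query = 2

    total_ops = ops_per_teraflop_second * chip_teraflops * chips * seconds_per_query
    print(f"{total_ops:.1e} operations")  # about 4.8e15, i.e. a few quadrillion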

But again, that just is doing an awful lot of work.  Fundamentally, computers do two things:
  • They perform basic calculations, such as addition, multiplication and various logical operations, on blocks of bits
  • They copy data from one location to another, based on the contents of blocks of bits
That second bullet point includes both conditional logic (since the instruction pointer is one place to put data) and the "pointer chasing" that together underlie a large swath of current software and were particularly important in early AI efforts.  While neural net models do a bit of that, the vast bulk of what they do is brute calculation.  If anything, they're the least computer science-y and most calculation-heavy AIs.
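
In code, the two kinds of operation look something like this (a deliberately cartoonish sketch, not a model of any real instruction set):

    # Basic calculations on blocks of bits.
    a, b = 0b1100, 0b1010
    calculations = (a + b, a & b, a ^ b)

    # Copying data around based on the contents of other data: this is where
    # conditional behavior and pointer chasing come from.
    table = {"left": "go left", "right": "go right"}
    signal = "left"
    action = table[signal]          # which data gets copied depends on other data

    node = {"value": 1, "next": {"value": 2, "next": None}}
    while node:                     # pointer chasing: follow links stored in the data itself
        print(node["value"])
        node = node["next"]

    print(calculations, action)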

Nonetheless, all that calculation is driving something much more subtle, namely simulating the behavior of a network of idealized neurons, which collectively behave in a way we only partly understand.  If an app for, say, calculating the price of replacing a deck or patio does a calculation, we can follow along with it and convince ourselves that the calculation is correct.  When a pile of GPUs cranks out the result of running a particular stream of input through a transformer-based model, we can make educated guesses as to what it's doing, but in many contexts the best description is "it does what it does".

In other words, it's just doing calculations may look the same as it's just doing something simple, but that's not really right.  It's doing lots and lots and lots of simple things on far too much data for a human brain to understand directly.

All of this is just another way to say that while the question of mechanism is interesting, and we might even learn interesting things about our own mental mechanisms by studying it, it's not particularly helpful in figuring out what to actually do regarding the current generation of AIs.

Tuesday, September 10, 2024

Tying up a few loose ends about models and memory

Most of the time when I write a post, I finish it up before going on to the next one.  Sometimes I'll keep a draft around if something else comes up before I have something that feels ready to publish, and sometimes weeks or even months can pass between then and actually publishing, but I still prefer to publish the current post before starting a new one.

However, a while ago ... nearly five years ago, it looks like ... I ran across an article on a demo by OpenAI in which agents played games of hide-and-seek in virtual environments.  Over the course of hundreds of millions of games, the hiders and seekers developed strategies and counter-strategies, including some that the authors of the article called "tool use".

I've put that in "scare quotes" ("scare quotes" around "scare quotes" because what's scary about noting that it was someone else who said something?), but I don't really have a problem with calling something like moving objects around in a world, real or virtual, to get an advantage "tool use" (those are use/mention quotes, if anyone's keeping score).

As usual, though, I'm more interested in the implications than the terminology, and this seemed like another example of trying to extrapolate from "sure, we can use terms like tool use and planning here with a straight face" to "AI systems are about to develop whatever it is we think is special about our intelligence, which means they might be about to take over the world."

Writing that brought a thought to mind that I'm not sure I've really articulated before: To whatever extent we've taken over the world, it's taken us on the order of 70,000 years to get here, depending on how you count.  In that light, it seems a bit odd to conclude that anything else with intelligence similar to ours will be running the place overnight, especially if we know they're coming.

But I'm digressing from what was already a digression.  In the process of putting together several posts prompted by that article, and still being in that process when ChatGPT happened, I ended up pondering some questions that didn't quite make it into other posts, at least not in the form that they originally occurred to me.

So here we are:

First, I was most intrigued by the idea that the hide-and-seek agents seemed to have object permanence, that is, the ability to remember that something exists even when you can't see it or otherwise perceive it directly.

This is famously a milestone in human development.  As with many if not most cognitive abilities, understanding of object permanence has evolved over time, and there is no singular point at which babies normally "acquire object permanence" (call those whatever kind of quotes you like).

Newborn babies do not appear to have any kind of object permanence, but in their first year or two they pass several milestones, including what the Wikipedia article I linked to calls "Coordination of secondary circular reactions", which among other things means "the child is now able to retrieve an object when its concealment is observed" (straight-up "this is what the article said" quotes there, and I think I'll stop this game now).

The hide-and-seek agents seem to have similar abilities, particularly being able to return to the site of an object they've discovered or to track how many objects have been moved out of sight to the left versus to the right.  There are two interesting questions here:

  • Do the hide-and-seek agents have the same object permanence capabilities as humans?
  • Do the hide-and-seek agents have object permanence in the same way as humans?
I'm making the same distinction here that I have in previous posts.  The first question can be answered directly: Put together an experiment that requires remembering where objects were or which way they've gone and see if the agents perform similarly to humans.

The second is more difficult to answer, because it can't be answered directly.  Instead, we have to form a theory about how humans are able to track the existence of unseen objects, and then test whether that theory is consistent with what humans actually do, and then, once there is a way of testing whether someone or something has that particular mechanism, try the same tests on the hide-and-seek agents.  Assuming that all goes well, you still don't have an airtight case, but you have reason to believe that the agents are doing similar things to what humans do when demonstrating object permanence (in some particular set of senses).

There's actually a third question: Are the hide-and-seek agents experiencing objects and events in their world the same way we experience objects and events in our world?  I would call that a philosophical question, probably unknowable in some fundamental sense.  That's not to say that there's no point in exploring it, or exploring whether or not such things are knowable, just that at this point we're far outside the realm of verifiable experiments -- unless some clever philosopher is able to devise an experiment that will give us a meaningful answer.

The interesting part here is that we have a pretty good idea how agents such as the hide-and-seek agents are able to have capabilities like object permanence.  In broad strokes, a hide-and-seek agent is consuming a stream of inputs analogous to our own sensory inputs such as sight and sound.  In particular (quoting from the OpenAI blog post):
  • The agents can see objects in their line of sight and within a frontal cone.
  • The agents can sense distance to objects, walls, and other agents around them using a lidar-like sensor.
At any given time step, the agents are given a summary of what is visible at what distance at that time (rather than, say, getting an image and having to deduce from the pixels what objects are where), or at least I believe this is what the blog post means by "Agents use an entity-centric state-based representation of the world".  From this, each agent produces a stream of actions: move, grab an object, or lock an object (which prevents other agents from moving it).

In between the stream of inputs and the actions taken at a particular timestep is a neural network which is trained to extract the important parts from the input stream and turn them into actions.  This neural network is trained based on the results of millions of simulated games of hide-and-seek, but it's static for any particular game.  In some sense, it's encoding a memory of what happened in all the games it's been trained on -- producing this particular stream of actions in response to this particular stream of input resulted in success or failure, times many millions -- but it's not encoding anything about the current game.

Just going by the blog post, I can't tell exactly what sort of memory the agents do have, but from the context of how transformer-based models work, it is a memory of the input stream, either from the beginning of the current game or over a certain window.  That is, at any particular timestep, the agent can not only use what it can sense at that time step, but also what it has sensed at previous time steps.

This makes object permanence a little less mysterious.  If an agent sensed a box dead ahead and ten units away, then it turned 90 degrees to the right and went three units forward, it's not too surprising for it to act as though there is now a box three units behind it and ten units to the left, given that it remembers both of those things happening.

The key here is "act as though".  In the same situation, a person would have some sort of mental image of a box in a particular location.  The only thing that the hide-and-seek agent is explicitly remembering about the current game is what it has sensed so far.

Presumably, there is something in the neural net that turns "I saw a box at this distance" followed by "I moved in such-and-such a way" into a signal deeper in the net that in some sense means "there is a box at this location", in some sort of robust and general way so that it can encode box locations in general, not just any particular example.  Even deeper layers can then use this representation of the world to work out what kinds of actions will have the greatest chance of success.  This is probably not exactly what's going on but ... something along those lines.
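
To make the box example concrete, here's the geometry spelled out as a toy dead-reckoning calculation.  It isn't a claim about what the actual agents compute, just an explicit version of the inference described above.

    import math

    # Track a remembered point in the agent's egocentric (forward, left) frame.
    def turn_right(point, degrees):
        fwd, left = point
        rad = math.radians(degrees)
        # When the agent turns right, remembered points rotate the other way in its frame.
        return (fwd * math.cos(rad) - left * math.sin(rad),
                fwd * math.sin(rad) + left * math.cos(rad))

    def move_forward(point, distance):
        fwd, left = point
        # Moving forward shifts remembered points backward in the agent's frame.
        return (fwd - distance, left)

    box = (10.0, 0.0)           # box dead ahead, ten units away
    box = turn_right(box, 90)   # now ten units to the left
    box = move_forward(box, 3)  # now three units behind, ten to the left
    print(box)                  # roughly (-3.0, 10.0)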

Is it possible that humans do something similar when remembering locations of objects?  It's possible, but people don't always seem to have sequences of events in mind when remembering where objects are.  It can be helpful to remember things like "I came downstairs with my keys and then I was talking to you and I think I left the keys on the table", but it doesn't seem to be necessary.  If I tell you that I left the keys on the table in a room of a house you've never been to, you can still find the keys.  If all I remember is that I left the keys on the table, but I'm not exactly sure how that came to be, I can still find them.

In other words, we seem to form mental images of places and the objects in them.  While one way to form such an image is by experiencing moving through a place and observing objects in it, it's not the only way, and we can still access our mental map of places and things in them even after the original sequence of experiences is long forgotten.

We appear to remember things after doing significant processing and throwing away the input that led to the memories (or at least separating our memory of what happened from the memory of what's where).  The way that transformer-based models handle sequences of events is not only different from what we appear to do, it's deliberately different.

Bear in mind that I'm not an expert here.  I've done a bit of training on the basics of neural net-based ML and I've read up a bit on transformers and related architectures, so I think what follows is basically accurate, but I'm sure an actual expert would have some notes and corrections. 

One definition before we dive in: token is the general term for an item in a stream of input, representing something on the order of a word of text or the description of what an agent senses, after it's been boiled down to a vector of numbers by a procedure that varies depending on the particular kind of input.

The problem of attention -- how heavily to weight different tokens in a stream of input -- has been the subject of active research for decades.  Transformers handle this differently from other types of models.  The previous generation of models used Recurrent Neural Networks (RNNs) that did something more like maintaining short-term memory of what's going on.  Each input token is processed by a net to produce two sets of signals: output signals that say what to do at that particular point, and hidden state signals, that are fed back as inputs when processing the next input token.

In some sense, the hidden state signals represent the state of the model's memory at that point.  Giving a token extra attention means boosting its signal in the hidden state that will be used in processing the next token, and indirectly in processing the tokens after that.

This has two problems: First, because the inputs to the net depend on the hidden state outputs from previous tokens, you have to compute one token at a time, which means you can't just throw more hardware at processing more tokens.  More hardware might make each individual step faster, but only up to the limits of current hardware.  It's going to take 10,000 steps to process 10,000 tokens, no matter what.

Second, since essentially everything that's come before is boiled down into a set of hidden state signals, the longer ago an input token was processed, the less influence it can have on the final result (this is closely related to the "vanishing gradient problem").  Even if a token has a large influence on the hidden state when it's processed, that influence will get washed out as more tokens are processed.
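
A bare-bones caricature of this kind of memory, with the details invented for illustration (a real RNN uses learned weight matrices, not a single scaling factor):

    # One hidden value, updated token by token.  The feedback loop forces
    # sequential processing, and scaling down the old state at each step is
    # roughly why early tokens fade.
    def rnn_step(hidden, token_value):
        new_hidden = 0.5 * hidden + 0.5 * token_value
        output = new_hidden        # in a real RNN, output and hidden state differ
        return new_hidden, output

    hidden = 0.0
    tokens = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # the "important" token comes first
    for t in tokens:                          # one step at a time, no way to parallelize
        hidden, output = rnn_step(hidden, t)
    print(hidden)                             # about 0.016: the first token has mostly washed out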

Unfortunately, events that happened long ago can be more important than ones that happened more recently.  Imagine someone saying "I don't think that ..." followed by a long, overly-detailed explanation of what they don't think.  The "not" in "don't" may well be more important than the fourth bullet point in the middle.

Even though an RNN works roughly the same way that our brains work, receiving inputs one at a time and maintaining some sort of memory of what's happened, models based purely on hidden state don't perform very well, probably because our own memories do more than just maintain a set of feedback signals.  There have been attempts to use more sophisticated forms of memory in RNNs, particularly "Long Short-Term Memory" (LSTM).  This works better than just using hidden state, and it was the state of the art before transformers came along.

Transformers take a completely different approach.  At each step, they take as input the entire stream of tokens so far.  At timestep 1, the model's output is based on what's happening then.  At timestep 2, it's based on what happened at timestep 1 and what's happening at timestep 2, and so on.  If you only give the model "this happened at timestep 1 and this happened at timestep 2", it should produce the same results whether or not it was ever asked to produce a result for timestep 1.

Processing an input stream at one timestep does not affect how it will process an input stream at any other timestep.  The only remembering going on is remembering the whole of the input stream.  This means that any token in the input stream can be given as much importance as any other.

A transformer consists of two parts.  The first digests the entire input stream and picks out the important parts.  It can do this in multiple ways.  One "head" in a language-processing model might weight based on what words are next to each other.  Another might pay attention to verbs and their objects.  Input tokens are tagged with their position in the stream, so a transformer trained to work on text could weight "I don't think that ..." in early positions as being important, or look for some types of words close to other types of words.

Whatever actually comes out of that stage goes into another network that decides what output to actually produce (this network actually consists of multiple stages, and the whole attention-and-other-processing setup can be repeated and stacked up, but that's the basic structure).

A transformer-based model does this at every timestep, which means that the first input token is processed at every timestep, the second one is processed at every timestep but the first, and so forth.  This means that handling twice as long a stream of input will require approximately four times as much processing, three times as much will require nine times as much, and so on.  Technically, the amount of processing required grows quadratically with the size of the input.

For similar reasons, the network that handles attention grows quadratically in the size of the input, at least without some sort of optimization.  In this sense, a transformer is less efficient than an RNN, since it will use more computing resources.
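
For the curious, here's a minimal sketch of the attention computation itself, in numpy.  It's the generic scaled dot-product form, not any particular production model (real models learn separate query, key and value projections and stack many heads and layers); the n-by-n table of scores is where the quadratic growth comes from.

    import numpy as np

    def attention(tokens):
        # tokens: an (n, d) array, one embedding per input position.
        n, d = tokens.shape
        # Every position is scored against every other: an n-by-n table,
        # hence quadratic cost in the length of the input stream.
        scores = tokens @ tokens.T / np.sqrt(d)
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)   # softmax over each row
        return weights @ tokens                         # each output mixes the whole stream

    stream = np.random.default_rng(0).normal(size=(6, 4))  # six tokens, four dimensions each
    print(attention(stream).shape)                          # (6, 4): one output per position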

Crucially, though, this can all be done by "feed-forward" networks, that is, networks that don't have feedback loops.  If you want to be able to process a longer stream of input tokens, you'll need a larger network for the attention stage, and probably more for the later stages as well since there will probably be more output from the attention stage, but you can make both of those bigger by throwing more hardware at them.  

Processing twice as big an input stream requires more hardware, but it doesn't take twice as much "wall time" (time on the clock on the wall), even if it takes four times as much CPU time (total time spent by all the processors).  Being able to handle a long stream of input quickly is what enables networks to incorporate what happened in the whole history of a stream when deciding what to output.


Transformer-based models, which currently give the best results, don't process events in the world the same way we do.  They don't remember anything from input token to input token (that is, timestep to timestep).  Instead, they remember everything that has happened up to the current time, and figure out what to do based on that.  

This produces the same kind of effects as our memories do, including the effect of object permanence.  In our case, if we see a ball roll behind a wall, we remember that there's a ball behind the wall (assuming nothing else happens).  In a transformer-based hide-and-seek model, an agent will likely behave differently for an input stream that includes a ball moving behind a wall than for one that doesn't, so the model acts as though it remembers that there's a ball behind the wall.

It looks like humans are doing something the hide-and-seek agents don't do when dealing with a world of objects, namely maintaining a mental map of the world, even though the agents can produce similar results to what we can.  Again, this shouldn't be too surprising.  Chess engines are capable of "positional play" and other behaviors that were once thought to be unique to humans even though they clearly use different mechanisms.  Chatbots can produce descriptions of seeing, smelling and tasting things that they've clearly never seen, smelled or tasted, and so forth.

Are we "safe" (definitely scare quotes) since these agents aren't forming mental images in the same way we appear to?  Wouldn't that mean that they lack the "true understanding" that we have, or some other quality unique to us, and therefore they won't be able to outsmart us?  I would say don't bet on it.  Chess engines may not have the same sense of positional factors as humans, but they still play much stronger chess.

So are we doomed, then?  I wouldn't bet on that either, for reasons I go into in this post and elsewhere.

The one thing that seems clear is that human memory of the world doesn't work the same way as it does for the hide-and-seek agents, or for AIs built on similar principles.  In both cases there appears to be some sort of processing of a stream of sense input into a model of what's where.  The difference seems to be more that the memory part is happening at a different stage and has a completely different structure.