Sunday, June 18, 2023

AI seems to be back. What is it now?

In one of the earliest posts here, several years ago, I mused "What was AI?"  At the time, the term AI seemed to have fallen out of favor, at least in the public consciousness, even though there were applications, like phones with speech recognition, that were very much considered AI when they were in the research phase.  My conclusion was that some of this was fashion, but a lot of it was because the popular conception of AI was machines acting like humans.  After all, the examples we saw in pop culture, like, say, C-3PO in Star Wars, were written by humans and portrayed by humans.

There's a somewhat subtle distinction here: A phone with speech recognition is doing something a human can do, but it's not acting particularly like a human.  It's doing the same job as a human stenographer, whether well or badly, but most people aren't stenographers, and even stenographers don't spend most of their time taking dictation (or at least they shouldn't).

Recently, of course, there's been a new wave of interest in AI, and talk of things like "Artificial General Intelligence", which hadn't exactly been on most people's lips before ChatGPT-4 came out.  To avoid focusing too much on one particular example, I'll call things like ChatGPT-4 "LLM chatbots", LLM for Large Language Model.

By many measures, an LLM chatbot isn't a major advance.  As the "-4" part says, ChatGPT-4 is one in a series, and other LLM chatbots were developed incrementally as well.  Under the hood, an LLM chatbot is a particular application of neural net-based machine learning, which was a significant advance, to the particular problem of generating plausible-sounding text in response to a prompt.
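To put "generating plausible-sounding text in response to a prompt" in concrete terms: these models repeatedly predict a likely next token given everything so far, append it, and repeat.  Here's a deliberately tiny sketch of that autoregressive loop; the hand-written table of word pairs is a stand-in for the trained neural network, and everything in it is made up purely for illustration.

    import random

    # Toy stand-in for a language model: possible next words given only
    # the previous word.  A real LLM conditions on a long context window
    # and uses a learned neural network, not a tiny lookup table.
    next_words = {
        "<start>": ["water", "rain"],
        "water": ["is", "flows"],
        "rain": ["falls", "is"],
        "is": ["wet"],
        "falls": ["from"],
        "from": ["clouds"],
        "flows": ["downhill"],
    }

    def generate(max_words=8):
        word, output = "<start>", []
        while len(output) < max_words and word in next_words:
            word = random.choice(next_words[word])  # sample the next token
            output.append(word)
        return " ".join(output)

    print(generate())  # e.g. "rain falls from clouds" or "water is wet"

Scale that up by many orders of magnitude, with the lookup table replaced by a deep network trained on a large chunk of the internet, and that same loop is what produces a chatbot's responses.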

But goodness, do they produce plausible-sounding text.

A response from an LLM chatbot may contain completely made-up "facts", it may well break down on closer examination, whether through followup questions or changes to the particulars of the prompt, and it may have a disturbing tendency to echo widely-held preconceptions whether they're accurate or not.  But if you just read through the response and give it the benefit of the doubt on anything you're not directly familiar with (something people are strongly inclined to do), then it sounds like the response of someone who knows what they're talking about.  The grammar is good, words are used the way people would use them, the people and things mentioned are generally real and familiar, and so on.

In other words, when it comes to generating text, an LLM chatbot does a very good job of acting like a human.  If acting like a human is the standard for AI, then an LLM chatbot is definitely an AI, in a way that a speech-transcribing phone app or a model that can pick out supernovae from a mass of telescope images just isn't.

But our perception of whether something is acting intelligent in a human way is heavily tilted toward language use.  All kinds of animals can recognize images and many can respond to speech, but only we can produce large volumes of text in human languages in response to a prompt.  Until now, that is.

Since LLM chatbots are an obvious major advance in acting-like-a-human, it's natural to assume that they represent a major advance in understanding what intelligence is generally, but those are two very different things.  As far as I can tell, we're not really any closer to understanding what "general intelligence" might mean, or how to produce it, than we were before.

To be sure, LLMs have shown some interesting behaviors that may offer hints as to what intelligence might be.  Once the model gets big enough, it seems to get better at learning from small samples.  For example, if you train a model on a huge amount of English text and a little bit of Italian and a little bit of Python code, it seems that it can do a pretty good job of producing plausible Italian or plausible-looking Python.  The Python code might well not do anything useful, unless it's close to one of the examples that the model was trained on, and it might not even run, but it looks like Python someone wrote.  There's some question as to whether this ability arises suddenly once the model hits a certain size or actually develops gradually, but it's definitely a thing.

It seems likely that, after enough training data and with enough depth to the underlying network, an LLM develops connections that in some way represent something general about the structure of languages, at least ones like Italian and Python that are reasonably close in structure to the English that makes up the bulk of the training data.  It would be interesting to explore that, because it may capture something about language structure that current theories of grammar miss.


It seems reasonable that a suitably deep model trained on larger amounts of language would be able to capture something useful about the structure of language.  Beyond that, though, is it possible that an LLM can capture something about the world that the body of text is talking about?  If the training data contains millions of words about water, and rivers, and thirst, and drinking water, and clouds, rain and so forth, will it in some sense know that water is wet and that rain falls from clouds and some of it drains into rivers that flow into the sea, and that living things need water and that if a person is thirsty they can drink water and not be thirsty any more?

From interacting with an LLM chatbot, it's certainly easy to conclude at first that the answer is yes.  For example, I just asked a chatbot "If I'm thirsty and I drink water, will I still be thirsty?" and got back

    Generally, if you are thirsty and you drink water, you will no longer be thirsty. This is because thirst is a signal from your body that it needs water. When you drink water, your body's water levels will increase and the thirst signal will be turned off.

    However, there are some cases where you might still be thirsty after drinking water ...

and so on.  That seems like case closed.  One good way to find out if someone knows something is to ask them questions about it and see if the answers make sense.  I did, and the answer made sense.
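As an aside, that kind of probing is mechanical enough to script: ask a base question, then perturbed variants, and compare the answers for consistency.  Here's a minimal sketch, where ask is a placeholder for whatever chatbot interface happens to be at hand (the function and the canned reply are my own illustration, not any particular chatbot's API):

    def probe(ask, base_question, variants):
        """Ask a base question and then perturbations of it, so the
        answers can be compared for consistency."""
        for prompt in [base_question] + variants:
            print(f"Q: {prompt}")
            print(f"A: {ask(prompt)}\n")

    # Stub chatbot so the sketch runs on its own; swap in a real one.
    probe(lambda prompt: "Generally, drinking water relieves thirst.",
          "If I'm thirsty and I drink water, will I still be thirsty?",
          ["If I'm thirsty and I drink ambrosia, will I still be thirsty?"])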

The problem is, LLM chatbots have a tendency to go down rabbit holes once you get into details.  Later, I asked "If I'm thirsty and I drink ambrosia, will I still be thirsty?"  The answer starts off reasonably:

    In Greek mythology, ambrosia is the food of the gods. It is said to be a delicious and intoxicating beverage that grants immortality. If you were to drink ambrosia, you would no longer feel thirst, as it would quench your thirst permanently.

So far, so good.  Going on ...

    However, ambrosia is not a real substance, so it is impossible to say for sure what would happen if you drank it.

This is a bit ... off.  If ambrosia is not a real substance, then you can't drink it, period.  It's not that we can't say for sure what would happen; there's no "what would happen" to say anything about.  The answer goes on to conclude:

    However, it is an interesting question to ponder, and it is one that has been explored in many works of fiction.

OK, I'll bite.  "What are some works of fiction that explore the drinking of ambrosia?"  By "explore", I'd expect something like a storyline that features a person, or multiple people, drinking ambrosia in various amounts or circumstances and discusses the consequences of the various possibilities.  I wouldn't want something that just "features" or "includes" the drinking of ambrosia.

Nitpicking?  I mean, isn't it close enough just to mention some stories that include ambrosia and count that as exploring?

No.  If the chatbot has a true understanding of the concepts behind the words, it will catch this distinction.  If it has merely picked up from its training data that "explores" is a word often used when discussing fiction, it won't.  In my experience, LLM chatbots miss this sort of distinction over and over again.

So what works did it come up with?

  • The Odyssey by Homer: In this epic poem, the hero Odysseus drinks ambrosia and nectar while he is a guest of the gods on Mount Olympus. This gives him the strength and vitality to continue his journey home.
The first time through, I took this at face value.  It's been a while since I read the Odyssey.  But it doesn't look like this scene actually occurs.  Ambrosia is mentioned, briefly, in other contexts.  Even if this scene were in the poem, would it really be exploring the drinking of ambrosia?  Not really.  It would just be recording the common understanding, in Homer's time and place, of what ambrosia was and what it did.
  • The Bacchae by Euripides: In this tragedy, the god Dionysus gives ambrosia to his followers, which makes them immortal. However, the ambrosia also makes them insane, and they go on a rampage, killing people and destroying property.
That would count as an exploration in my book.  One problem, though: That's not the plot of The Bacchae.  From what I can make out, without carefully reading through the whole play, there's no mention at all of ambrosia in The Bacchae.
  • The Tempest by William Shakespeare: In this play, the magician Prospero uses ambrosia to create a magical elixir that can grant youth, beauty, and immortality.
Nope.  The word "ambrosia" is not even in the play.
  • The Last Olympian by Rick Riordan: In this book in the Percy Jackson series, the Greek gods give ambrosia to the demigod heroes to heal their wounds and give them strength.
That's probably about right -- I'm not going to take time to look, given the rest of the entries -- but again, not really exploring the question of what it would be like to drink ambrosia.
  • The Mortal Instruments by Cassandra Clare: In this series of fantasy novels, the main characters drink ambrosia to heal their wounds and give them strength.
As far as I can tell, this is not so.

OK, so maybe the chatbot understands water perfectly well, but it's just not good with literature and finer points like what actually happens in a given work?  Let's try something more direct.

"If I have a bell, a book, a candle and some water, how can I stack them?"

(I picked stacking things because I ran across a pretty impressive example in a news item of a chatbot suggesting a solution the author hadn't thought of.  I have no doubt this happened, but I strongly suspect that the author took the impressive example as "proof" and didn't press on by, say, asking variants of the original question.)

  • There are a few ways to stack a bell, a book, a candle, and some water. Here are a few examples:
    • Bell, book, candle, water: This is the most straightforward way to stack the items. The bell would be on the bottom, followed by the book [somehow balanced on top of the bell ...], the candle, and the water.
    • Water, bell, book, candle: This stack is a bit more challenging, as the water would need to be supported by the bell and the book [wait, what?]. However, it is possible to do this by placing the water in a small container, such as a cup or a bowl [so in the first stack, the water isn't in a container?].
    • ...
I think the easiest explanation for this one is that the chatbot doesn't really understand what physical objects and substances are and how they interact.

In some meaningful sense, the model understands something interesting about the structure of languages, or at least some languages, because it was trained on a mass of text.  It can also model associations among words, for example that "water" tends to be associated with words like "wet" and "drink" and "flow".
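The association part, at least, is easy to picture mechanically.  Here's a toy sketch of the idea, counting which words appear near "water" in a tiny made-up corpus; real models learn dense vector representations from vastly more text, rather than literal counts like this:

    from collections import Counter

    corpus = ("we drink water when thirsty and water is wet "
              "rain water flows into rivers and we drink water").split()

    # Count the words that appear within two positions of "water"
    window = 2
    neighbors = Counter()
    for i, word in enumerate(corpus):
        if word == "water":
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    neighbors[corpus[j]] += 1

    print(neighbors.most_common(4))  # words like "drink" and "wet" rank high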

However, since its training had nothing to do with actual water, or anything else in the physical world, it's no surprise that it doesn't show any real understanding of how physical things interact.  Since the training text doubtless included reviews and discussions of various works of fiction, it can imitate someone talking about fiction, but it misses details like the meaning of "explore", or even that if you say a story explores something, that thing should actually appear in the story.


So, after that fairly long digression, how does all this fit together?
  • Except among people closely associated with the research, "AI" generally means "acting like a human" and "doing things that are particularly human", like using language
  • LLM chatbots are pretty good at acting human in that sense ...
  • ... including making up plausible-looking responses when they don't really know the answer
  • But language is just one part of the picture.
  • "General Intelligence" is not at all well-defined, but if it includes some sort of general understanding of the world and how to solve problems in it, then there's no real reason to think LLM chatbots have it, or are even close to acquiring it ...
  • ... even if they're sometimes good at looking that way

Saturday, June 17, 2023

Where did I put my car keys, and when did civilization begin?

Some mysteries, like "Where did I put my car keys?", can be solved by discovering new information.  Some of the more interesting ones, though, may be resolved by realizing you were asking the wrong question in the first place.

For example, physicists spent a long time trying to understand the medium that light waves propagated in.  Just like ocean waves propagate in water and sound waves propagate in all kinds of material -- but not in a vacuum -- it seemed that light waves must propagate in some sort of medium.  "Luminiferous aether", they called it.

But that brought up questions of what happens to light if you're moving with respect to that medium.  Sound in air is higher-pitched if you're moving through still air toward the source, or if the wind is blowing and you're downwind, and so on (examples of the Doppler effect).  There didn't seem to be a "downwind" with light.  The Earth orbits the Sun at about 0.01% of the speed of light, not much, but enough that a careful measurement should detect a difference in the measured speed of light depending on which direction it's moving and where the Earth is in its orbit.  But no such difference was found, and people spent a lot of time trying to figure out what was happening with the aether until Einstein put forth a theory (special relativity) that started from the idea that there was no aether.
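A back-of-the-envelope check on that figure, using the standard values for the two speeds:

    # Earth's mean orbital speed as a fraction of the speed of light
    v_earth = 29.8e3         # meters per second
    c = 299_792_458          # meters per second
    print(f"v/c = {v_earth / c:.2e}")  # 9.94e-05, about one part in ten thousand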

I just got done scanning through the older posts on this blog to see whether I'd discussed a question that comes up from time to time, in various forms, when discussing human prehistory: "What happened a few thousand years ago in human evolution, that enabled us to move from hunter-gatherer societies to full-blown civilization?"  The closest I could find was a comment at the end of a post on change in human technology:

How did civilization and technology develop in several branches of the human family tree independently, but not to any significant extent in others?

This is not quite the same question, but it's still not a great question, because it's loaded with similar assumptions.  All societies have technology and rules for living together, so we're really talking about who has "advanced technology" or "higher forms of social organization" or whatever, which are not exactly the most objective designations.  But even taking those at face value, I think this is another "wrong question" like "What happens if you're moving with respect to the aether?"

Even if you try to stick to mostly objective criteria like whether or not there are cities (civilization ultimately derives from the same roots as Latin civitas -- city -- and civis -- citizen), or whether a particular group of people could smelt iron, there's a lot we don't know about what happened where and when once you go back a few thousand years, and even where we think we do know, the definitions are still a bit fuzzy.  How big does a settlement have to be to be considered a city?  How much iron do you have to smelt before you're in the "iron age"?  Any amount? Enough to make a sword?  Enough to manufacture swords by the hundred?

Wikipedia (at this writing) defines a civilization as "any complex society characterized by the development of a state, social stratification, urbanization, and symbolic systems of communication beyond natural spoken language (namely, a writing system)" with eight separate supporting citations.  I didn't check the page history, but one gets the impression that this definition evolved over time and much discussion.

By this definition, civilizations started appearing as soon as writing appeared.  In other words, writing is the limiting factor from the list above.  The first known examples (so far) of writing, Sumerian cuneiform and Egyptian hieroglyphs, are about 5,400 years old.  By that time there had been cities for thousands of years.  Terms like "state" and "social stratification" are harder to pin down from hard archeological evidence, or even to define objectively in a way people can agree on, but it's pretty clear that, however you slice it, they came well before cuneiform and hieroglyphs.

It may be hard to pin down exactly what a state is, but it's not hard to find examples that people will agree are states.  Most of the world's population now lives in places that most people agree are states, even though there are disagreements about which people are subject to the rules of which state or whether a particular nation's government is effectively functioning as a state.  Nonetheless, if you asked most political scientists whether, say, New Zealand, Laos or Saint Lucia is a state, you'd get a pretty resounding "yes".  Likewise, most people familiar with the subjects would agree that, say, Ancient Rome or the Shang Dynasty or the Inca Empire were states.

The problems come when you try to extract a set of criteria from the examples.  While Wikipedia defines a state as "a centralized political organization that imposes and enforces rules over a population within a territory" it goes on in the very next sentence to say "There is no undisputed definition of a state" (with two supporting references). Wikipedia does not claim to be an authoritative source on its own and I suppose it's possible that the page editors missed the One True Definition of "state", but it seems unlikely.  More likely there really isn't one.

Going with the "centralized political organization ..." definition for the moment, things get slippery when you try to pin down what it means to "impose and enforce rules".  For one thing, except (probably) in the smallest city-states, say Singapore or the Vatican, there is always a tension among various levels of government.

In the US, for example, the federal government is supreme over state and local governments, but in practice it's local laws that mostly determine where you can build a house, how fast you can drive your car on which streets and any of a number of other things that have more visibility in most people's day-to-day life than, say, federal standards for paraffin wax (I checked, there are several).  Certainly the supremacy clause of the Constitution means something, and few would disagree that the federal government imposes and enforces rules throughout the US, or that the US is a state, but on the other hand we also call 50 constituent parts of the US "states" and they impose and enforce their own rules within their boundaries.  Is the State of Wyoming a "state", then, in the sense given above?  If so, is the city of Cheyenne?

This may seem like splitting hairs over definitions, but consider something like the Roman Empire.  It could take weeks or months to get a message from the center of government to the far-flung provinces, the people in those provinces often didn't speak the official language and largely practiced their local religions and customs, and the local power structure was largely still in place, though with some sort of governor, who may or may not have been Roman, nominally in charge.  In a case like that, it's a legitimate question what it might mean to be "part of the Roman Empire", or in what exact territory the imperial state could actually impose and enforce rules at any particular time.

If all you have to go on is excavated ruins without any written records, it's harder still to say what might or might not be a state.  There are monumental constructions going back at least 10,000 years that would have required cooperation among fairly large numbers of people over years or decades, but that doesn't necessarily mean there was (or wasn't) a centralized government.  So far, no one has found any strong indication that there was.  It's possible that ancient monuments were built at the command of a centralized leadership, but again, there doesn't seem to be any strong evidence to support that, as there definitely is for, say, the Egyptian pyramids.

Likewise for cities.  It's hard to tell by looking at the ruins of a city whether there was a centralized government.  One of the earliest cities known, Çatalhöyük, shows no obvious signs of, say, a City Hall or anything other than a collection of mud-brick houses packed together, though the houses themselves have their own fascinating details.  But then again, neither would any number of large villages / small towns today show obvious signs of a central government.  There may have been some sort of centralized government, somewhere, imposing and enforcing rules on Çatalhöyük, but there could very well not have been.  Current thinking seems to be there wasn't.

Empires like the Mongol or Macedonian ones built cities, but most cities in these empires already existed and were brought into the empire by conquest.  If we didn't have extensive written records, it would be much harder to determine that, say, present-day Uch Sharīf, Pakistan, was (possibly) founded by Alexander as part of the Macedonian Empire and was later (definitely) invaded by the Mongols.  While it's a fairly small city of around 20,000 people, it contains a variety of tombs, monuments and places of worship.  If it were suddenly deserted and all writing removed from it, and everything else in the surrounding area were covered in dirt, an archeologist who didn't know the history of the surrounding regions would have a lot of work to do to figure out just what went on when.

Present-day archeologists trying to understand human culture from 10,000 or more years ago are up against a similar situation.  What sites have been discovered are often isolated and what survives has a lot more to do with what sorts of things, like stonework and pottery, are likely to endure for millennia than what was actually there.

In addition, while there were cities thousands of years before Mesopotamian civilization, it's pretty clear most people didn't live in them, but in the surrounding areas, whether nomadically or in villages, and whatever traces they left behind are going to be much harder to find, if they can be found at all.  There's probably at least some selection bias, in that until perhaps recently, there has been more focus on finding signs of civilization, that is, cities, than looking for signs of villages or nomadic peoples.

The result is that we really just don't know that much about how Neolithic people organized themselves.  There are some interesting clues, like the existence of "culture regions" where the same technologies and motifs turn up over and over again across large areas, but it's hard to say whether that's the result of a central government or just large-scale trade and diffusion of ideas (current thinking seems to be that it's probably trade and diffusion).

One of the basic assumptions in talking about civilizations is that civilization requires stable and abundant food supplies so that people can remain in one place over the course of years and at least some people have time to do things besides procuring food.  The converse isn't true, though.  You can have stable and abundant food supplies, and at least the opportunity for people to develop specialized roles, without civilization developing, and that seems to be what actually happened.

Rice was domesticated somewhere between 8,000 and 14,000 years ago, and wheat somewhere in the same range.  Permanent settlements (more technically, sedentism) are at least as old, and there were cultures, such as the Natufian, that settled down thousands of years before showing signs of deliberate agriculture.  Overall, there is good evidence of

  • Permanent settlements without signs of agriculture over periods of millennia (Natufian culture)
  • Large-scale organization without signs of agriculture or permanent settlements (monuments at Göbekli Tepe about 10,000 years old, not to mention later examples such as Stonehenge)
  • Cities without writing, or signs of centralized government (Çatalhöyük, about 9,000 years ago at its peak)
  • Agriculture without large-scale cities, over periods of millennia (domestication of rice and wheat)
  • Food surpluses without grain farming
  • Large-scale trade without evidence of states

Putting this all together

  • There's not really a widely-accepted single definition of what civilization is, particularly since there's no widely-accepted single definition of what concepts like "state" and "social stratification" mean
  • It's hard to say for sure how people organized themselves 10,000 years ago because there's no written record and the physical evidence is scattered and incomplete
  • There are clear signs, particularly monumental structures, that they did organize themselves, at least some of the time
  • There are clear signs that they interacted with each other, whether directly or indirectly, over large areas
  • The various elements of what we now call civilization, particularly agriculture and permanent settlements, didn't arise all at once in one place, but appeared in various combinations over large areas and long periods of time
In other words, there was no particular time and place that civilization began, and questions like the ones I gave at the beginning aren't really meaningful.

Human knowledge has continually evolved and diffused over time.  People have been busy figuring out the world around them for as long as there have been people, and as far as we can tell, people's cognitive abilities haven't changed significantly over the past few dozens of millennia.

Overall, we've become more capable, because, overall, knowledge tends to accumulate over time.  The ability to create what we now call civilization has been part of that, but there was no particular technological change, and certainly no genetic change, that brought about the shift from foraging societies to civilization, because it's not even accurate to talk about "the shift".  There wasn't some pivotal change.  There have been continual changes over large areas and long periods of time that have affected different groups of people in different ways.  We can choose to draw lines around those now, but the results may say more about how we draw lines than about how people lived.

None of this is to say that terms like "civilization" or "state" are meaningless, or that civilizations and states are inherently bad (or good).  Rather, it seems more useful to talk about particular behaviors of particular groups of people and less useful to argue over which groups had "advanced technology" or were "civilized", or to try to say when some group of people crossed some magical boundary between "uncivilized" and "civilized" or when some collection of settlements "became a state".

Among other things, this helps avoid a certain kind of circular reasoning, such as asserting that the people who built Stonehenge must have had an advanced society because only an advanced society could build something like Stonehenge.  What's an advanced society?  It's something that can build monuments like Stonehenge.  I don't think this really represents the current thinking of people who study such things, but such arguments have been made, nearly as baldly.  Better, though, to try to understand how Stonehenge was built and how the people who built it lived and then try to see what led to what.

This also helps avoid a particular kind of narrative that comes up quite a bit, that there is a linear progression from "early, primitive" humanity to "modern, advanced societies".  In the beginning, people lived in a state of nature.  Then agriculture was discovered, and now that people had food surpluses, they could settle down.  Once enough people settled down, they developed the administrative structures that became the modern nation-state as we know it, and so forth.

None of those assertions is exactly false, leaving aside what exactly a "state of nature" might be.  Agriculture did develop, over periods of time and in several places.  Eventually, it enabled higher population densities and larger centers of population, and, in practice, that has involved more elaborate administrative structures.

But that isn't all that happened.  People raised domesticated plants, and eventually animals, and otherwise modified their environments to their advantage, for hundreds or thousands of years at a stretch without building large cities.  Cities arose, but for almost all of human history, as in prehistory, most people didn't live in them -- that's a very recent development.

One problem with this kind of linear narrative is that it can give the impression that there was a sort of dark age, before civilization happened, where people weren't doing much of anything.  If we put the origins of modern humans at, say, 70,000 years ago -- again, at least to some extent this is a matter of where we choose to draw lines, but it couldn't have been much later than that -- then why did it take so long to get from early origins to civilization?  As far as anyone knows, that's a span of over 60,000 years.  What were we doing all that time?

If you require a sharp dividing line between "nothing much going on" and "civilization", this seems like a mystery.  If you don't need such a line, the answer seems pretty mundane, because we were doing pretty much the same thing all the way through:  steadily developing culture, including technology and art.  Eventually, at various times and places, what we now call civilization becomes possible, and some time after that, at some smaller number of times and places, it happens.


One note: This post draws fairly extensively from points made in The Dawn of Everything.  Along with discussing human history, that book explores what implications deep human history might have for how present-day societies could be structured.  I'm not trying to promote or refute any of that here.  Here, I'm more interested in deep human history itself, the stories we tend to build around what we know about it, and how the two can differ.