Sunday, August 11, 2024

Metacognition, metaphor and AGI

In the recent post on abstract thought, I mentioned a couple of meta concepts: metacognition and metaphor.

  • Metacognition is the ability to think about thinking.  I've discussed it before, particularly in this post and these two posts.
  • Metaphor is a bit harder to define, though there is no shortage of definitions, but the core of it involves using the understanding of one thing to understand a different thing.  I've also discussed this before, particularly in this post and this one.
When I was writing the post on abstract thought, I had it in mind that these two abilities have more to do with what we would call "general intelligence" (artificial or not) than abstraction itself does, so I wanted to try to get into that here, without knowing exactly where I'll end up.

In that earlier post, I identified two kinds of abstraction:
  • Defining things in terms of properties, for example, a house is a building that people live in.  I concluded that this isn't essential to general intelligence.  At this point, I'd say it's more a by-product of how we think, particularly how we think about words.
  • Identifying discrete objects (in some general sense) out of the stream of sensory input we encounter, for example, being able to say "that sound was a dog barking".  I concluded that this is basic equipment for dealing with the world.  At this point, I'd say it's worth noting that LLMs don't do this at all. They have it done for them by the humans that produce the words they're trained on and receive as prompts.  On the other hand, specialized AIs, like speech recognizers, do exactly this.
It was the first kind of abstraction that led me back to thinking about metaphor.

Like the second kind of abstraction, metaphor is everywhere, to the point that we don't even recognize it until we think to look.  For example:
  • the core of it (a concept has a solid center, with other, softer parts around it)
  • I had it in mind (the mind is a container of ideas)
  • I wanted to try to get into that (a puzzle is a space to explore; you know more about it when inside it than outside)
  • without knowing exactly where I'll end up (writing a post is going on a journey, destination unknown)
  • at this point (again, writing a post is a journey)
  • this is basic equipment (mental abilities are tools and equipment)
  • led me back to thinking (a chain of thought is a path one can follow)
  • to the point (likewise)
While there's room for discussion as to the details, in each of those cases I'm talking about something in the mind (concepts, the process of writing a blog post ...) in terms of something tangible (a soft object with a core, a journey in the physical world ...).

Metaphor is certainly an important part of intelligence as we experience it.  It's quite possible, and I would personally say likely, that the mental tools we use for dealing with the physical world are also used in dealing with less tangible things.  For example, the mental circuitry involved in trying to follow what someone is saying probably overlaps with the mental circuitry involved in trying to follow someone moving in the physical world.

This would include not only focusing one's attention on the other person, but also building a mental model of the other person's goals so as to anticipate what they will do next, and also recording what the person has already said in a similar way to recording where one has already been along a path of motion.  If some of the same mental machinery is involved in both processes -- listening to someone speak, and physically following them -- then on some level we probably experience the two similarly.  If so, it should be no surprise that we use some of the same words in talking about the two.

The overlap is not exact, or else we actually would be talking about the same things, but the overlap is there nonetheless.  This can happen in more than one way at the same time.  If you're speaking aggressively to me, I might experience that in a similar way to being physically menaced, and I might say things like Back off or Don't attack me, even while I might also say I'm not following you if I can't quite understand what you're saying, but I still feel like it's meant aggressively.

It's interesting that these examples of metaphor, about processing what someone is saying, also involve metacognition, thinking about what the other person is thinking.  That's not always the case (consider this day is just rushing by me or it looks like we're out of danger).  Rather, we use metaphor when thinking about thinking because we use metaphor generally when thinking about things.


If you buy that metaphor is a key part of what we think of as our own intelligence, is it a key part of what we would call "general intelligence" in an AI?  As usual, that seems more like a matter of definition.  I've argued previously that the important consideration with artificial general intelligence is its effect.  For example, we worry about trying to control a rogue AI that can learn to adapt to our attempts to control it.  This ability to adapt might or might not involve metaphor.  It might well involve metacognition -- modeling what we're thinking as we try to control it, but maybe not.

Consider chess engines.  As noted elsewhere, it's clear that chess engines aren't generally intelligent, but it's also clear that they are superhuman in their abilities.  Human chess players clearly use metaphor in thinking about chess, not just attack and defense, but space, time, strength, weakness, walls, gaps, energy and many others.  Classic alpha-beta (AB) chess engines (bash out huge numbers of possible continuations and evaluate them using an explicit formula) clearly don't use metaphor.

The situation with neural network (NN) engines (bash out fewer possible continuations and evaluate them using a neural net) is slightly muddier, since in some sense the evaluation function is looking for similarities with other chess positions, but that's the key point: the NN is comparing chess positions to other chess positions, not to physical-world concepts like space, strength and weakness.  You could plausibly say that NNs use analogy, but metaphor involves understanding one thing in terms of a distinct other thing.
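
To make the contrast concrete, here's a minimal sketch of the classic approach -- not a real engine; the position representation, piece values and move-generation hooks are all invented for illustration:

```python
# A toy alpha-beta search.  Position is assumed to be a dict of piece counts;
# the piece values and the moves/apply_move hooks are placeholders.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_eval(position):
    """Classic-style evaluation: an explicit, human-written formula over named features."""
    white = sum(PIECE_VALUES[p] * n for p, n in position["white"].items())
    black = sum(PIECE_VALUES[p] * n for p, n in position["black"].items())
    return white - black

def alphabeta(position, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Bash out continuations to a fixed depth, scoring the leaves with `evaluate`."""
    candidates = moves(position)
    if depth == 0 or not candidates:
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for m in candidates:
            best = max(best, alphabeta(apply_move(position, m), depth - 1,
                                       alpha, beta, False, evaluate, moves, apply_move))
            alpha = max(alpha, best)
            if beta <= alpha:
                break  # prune: the opponent would never allow this line anyway
        return best
    best = float("inf")
    for m in candidates:
        best = min(best, alphabeta(apply_move(position, m), depth - 1,
                                   alpha, beta, True, evaluate, moves, apply_move))
        beta = min(beta, best)
        if beta <= alpha:
            break
    return best

# Intended use (move generation not shown):
#   alphabeta(start_position, depth=6, alpha=float("-inf"), beta=float("inf"),
#             maximizing=True, evaluate=material_eval, moves=legal_moves, apply_move=make_move)
```

An NN engine keeps some sort of search but swaps in a learned evaluation -- conceptually, evaluate=lambda pos: net(encode(pos)), with net trained on enormous numbers of positions.  Nothing in those weights corresponds to space, strength or weakness; the net only relates chess positions to other positions, as described above.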

Likewise, neither sort of chess engine builds a model of what its opponent is thinking, only of the possible courses of action that the opponent might take, regardless of how it decides to take them.  By contrast, human chess players very frequently think about what their opponent might be thinking (my opponent isn't comfortable with closed positions, so I'm going to try to lock up the pawn structure).  Human chess players, being human, do this because we humans do this kind of thing constantly when dealing with other people anyway.


On the one hand, metaphors only become visible when we use words to describe things.  On the other hand, metaphor (I claim here) comes out of using the mental machinery for dealing with one thing to deal with another thing (and in particular, re-using the machinery for dealing with the physical world to deal with something non-physical).  More than that, it comes out of using the same mental machinery and, in some sense, being aware of doing it, if only in experiencing some of the same feelings in each case (there's a subtle distinction here between being aware and being consciously aware, which might be interesting to explore, but not here).

If we define an AGI as something of our making that is difficult to control because it can learn and adapt to our attempts to control it, then we shouldn't assume that it does so in the same ways that we do.  Meta-thought like explicitly creating a model of what someone (or something) else is thinking, and using metaphor to understand one thing in terms of another may be key parts of our intelligence, but I don't see any reason to think they're necessarily part of being an AGI in the sense I just gave.

The other half of this is that chains of reasoning like "If this AI can do X, which is key to our intelligence, then it must be generally intelligent like we consider ourselves to be" rest on whether abilities like metacognition and metaphorical reasoning are sufficient for AGI.

That may or may not be the case (and it would help if we had a better understanding of AGI and intelligence in general), but so far there's a pretty long track record of things, for example being able to deal with natural language fluently, turning out not to necessarily lead to AGI.

Saturday, August 10, 2024

On myths and theories

 Generally when people say something is a "myth", they mean it's not true:

"Are all bats blind?"

"No, that's just a myth."

There's nothing wrong with that, of course, but there's a richer, older, meaning of myth: A story we tell to explain something in the world.  In that sense, a myth is a story of the form "This is the way it is because so-and-so did thus-and-such" (many constellations have stories like this associated with them) or "So-and-so did this so that thus-and-such" (the story of Prometheus bringing fire to humanity is a famous example).

The word theory is also used in two senses.  Generally, people use it to mean something that might be true but isn't proven.

"I personally think that the Loch Ness monster is actually an unusually large catfish, but that's just a theory."

In science, though, a theory is a coherent explanation of some set of phenomena, which can be tested experimentally.  There are a couple of related senses of theory, for example the mathematical sense, as in group theory, meaning a comprehensive framework that brings together a set of results and sets the direction for future research.  While there's no element of experimental evidence, the goal is still to understand and explain.

For example, Newton's theory of universal gravitation explains a wide variety of phenomena, including apples falling from trees, the daily tides of the sea and the motion of the planets in their orbits, by positing that any two massive bodies exert an attractive force on each other, and that this force depends only on the masses of the bodies and the distance between their centers of gravity (more precisely, it's the product of the two masses, divided by the square of the distance, times a constant that's the same everywhere in the universe).
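
In symbols, that parenthetical is the familiar inverse-square law, with G as the constant that's the same everywhere in the universe:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```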

Newton's theory is actually incorrect, since it gives measurably incorrect results once you start measuring the right things carefully enough.  For example, it gets Mercury's orbit wrong by a little bit, even after you account for the effects of the other planets (particularly Jupiter), and it doesn't explain gravitational lensing (an image will be distorted by the presence of mass between the observer and what is seen). 

Newtonian gravity is still taught anyway, since effects like these don't matter in most cases and it's much easier to multiply masses and divide by distance squared than to deal with the tensor calculus that General Relativity requires.

My point here is that, as with myths, the ability to explain is more important than some notion of objective truth.  As far as we currently understand it, Einstein's theory of gravity, General Relativity, is "true", while Newtonian gravity is "false", but Newton's version is still in wide use because it works just as well as an explanation, since in most cases it gives the same results for all intents and purposes.

Myths and theories both aim to explain, but there are a couple of key differences.  First, myths are stories.  Theories, even though they're sometimes referred to as stories, aren't stories in the usual sense.  There is no protagonist, or antagonist, or any characters at all.  Neither Newton's nor Einstein's theory of gravity starts out "Long ago, Gravity was looking at the sun in empty space, and thought 'I should make the planets go around it'" or anything like that.

Second, and perhaps more important, theories are not just explanations of things we already know, but the basis for predictions about things we don't know yet.  In the famous photographic experiments during the eclipse of 1919, general relativity predicted that, because the Sun's gravity distorts space, stars near the Sun would appear in a different position in the photographs than the Newtonian version predicted (namely, the same place they'd be seen when the Sun wasn't between them and the Earth).  There's some dispute as to whether the actual photographs could be measured precisely enough to demonstrate that, but there's no dispute that the effect is real, thanks to plenty of other examples.

Myths make no claim of prediction.  If a particular myth says that a particular constellation is there because of some particular actions by some particular characters, it says nothing about what other constellations there might be.  The story of Prometheus bringing fire to humanity doesn't predict steam engines or cell phones.

It's exactly this power of prediction that gives scientific theories their value.  It's beside the point to say that some particular scientific theory is "just a theory".  Either it gives testable predictions that are borne out by actual measurements, or it doesn't.

Friday, August 9, 2024

Wicked gravity

Every once in a while in my news feed I run across an article about colonizing other planets, Mars in particular.  The most recent one was about an idea that might make it possible to raise the surface temperature by 10C (18F) in a matter of months.  That would be enough to melt water ice in some places, which would be important to those of us who need liquid water to drink and to irrigate crops.

All you have to do is mine the right raw materials and synthesize about two Empire State Buildings worth of a particular form of aerosol particle, and blast it into the atmosphere.  You'd have to keep doing this, at some rate, indefinitely since the particles will eventually settle out.

The authors of this idea don't claim that this would make Mars inhabitable, only that it would be a first step.  This is fortunate, since there are a few other practical obstacles, even if the particle-blasting part could be made to work:

  • The mean surface temperature of Mars is -47C (-53F) as opposed to 14C (57F) for Earth.  The resulting -37C (-35F) would not exactly be balmy.
  • Atmospheric pressure at the lowest point on Mars is around 14 mbar, compared to about 310 mbar at the top of Mount Everest.  Even if the atmosphere of Mars were 100% oxygen, the partial pressure would still be around 20% of what it is atop Everest, and there's a reason they call that the Death Zone.  In practice, you'd at least want some water vapor in the mix (there's a quick check of these numbers after the list).
  • But of course, the atmosphere on Mars is not 100% oxygen (and even if it were, it wouldn't be for long, since oxygen is highly reactive -- exactly why we need it to breathe).  It's actually 0.1% oxygen.  There is oxygen in the atmosphere, but it's locked up in carbon dioxide, which makes up about 95% of the atmosphere.
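
A quick back-of-the-envelope check of those numbers (the 21% is the oxygen fraction of Earth's air; everything else is from the bullets above):

```python
def c_to_f(c):
    return c * 9 / 5 + 32

print(c_to_f(-47), c_to_f(14), c_to_f(-37))          # about -53F, 57F and -35F

everest_pressure_mbar = 310                          # total pressure at the summit
o2_partial_everest = 0.21 * everest_pressure_mbar    # ~65 mbar of oxygen

mars_lowest_pressure_mbar = 14                       # roughly, at the lowest point
print(mars_lowest_pressure_mbar / o2_partial_everest)  # ~0.22, i.e. about 20% of Everest
```
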
It's at least technically feasible to build small, sealed outposts on the surface of Mars with adequate oxygen and liquid water, at a temperature where people could walk around comfortably, using local materials.  Terraforming the whole planet is Not ... Going ... To ... Happen.

But let's assume it does.  Somehow, we figure out how to crack oxygen out of surface rocks (there's plenty of iron oxide around; again, there's carbon dioxide in the atmosphere, but nowhere near enough of it) and pump it into the atmosphere at a truly massive scale, far beyond any industrial process that's ever happened on Earth.  Mars's atmosphere has a mass of about 2.5×10^16 kg, and that would need to increase by a factor of at least five, essentially all of it oxygen, for even the deepest point on Mars to have the same breathability as the peak of Everest.

By comparison, total emissions of carbon dioxide since 1850 are around 2.4×10^15 kg and current emissions are around 4×10^13 kg per year.  In other words, if we could pump oxygen into Mars's atmosphere at the same rate we're pumping carbon dioxide into Earth's atmosphere, it would take roughly 2,500 years before the lowest point on Mars had breathable air -- assuming all that oxygen stayed put instead of, say, recombining with the iron (or whatever) it had been split off from or escaping into space.
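
Here's that arithmetic, using only the figures above (every input is a rough approximation):

```python
mars_atmosphere_kg = 2.5e16
target_factor = 5                    # the atmosphere needs to end up ~5x as massive
oxygen_to_add_kg = mars_atmosphere_kg * (target_factor - 1)   # ~1e17 kg to add

earth_co2_rate_kg_per_year = 4e13    # roughly our current CO2 output
print(oxygen_to_add_kg / earth_co2_rate_kg_per_year)          # ~2,500 years
```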

This is just scratching the surface of the practical difficulties involved in trying to terraform a planet.  Planets are big, yo[citation needed].

But then, not always big enough.  Broadly speaking, there's a reason that there's lots of hydrogen in Jupiter's atmosphere (about 85%, another 14% helium), while Mars's is mostly carbon dioxide and the Moon has essentially no atmosphere.  Jupiter's gravity is strong enough to keep light molecules like molecular hydrogen from escaping on their own or being carried away by the solar wind.  Mars's isn't.  It can hold onto heavier molecules like carbon dioxide OK, though still with some loss over time, but lighter molecules aren't going to stick around.
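
A rough way to see the difference is to compare each body's escape velocity with the typical thermal speed of a gas molecule; a common rule of thumb is that a gas leaks away over geological time once its thermal speed is more than about a sixth of the escape velocity.  The masses, radii and the 210 K temperature in this sketch are round numbers assumed for illustration:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23    # Boltzmann constant, J/K
AMU = 1.66e-27     # atomic mass unit, kg

def escape_velocity(mass_kg, radius_m):
    return math.sqrt(2 * G * mass_kg / radius_m)

def thermal_speed(molecular_mass_kg, temperature_k):
    return math.sqrt(3 * K_B * temperature_k / molecular_mass_kg)

v_esc_mars = escape_velocity(6.4e23, 3.4e6)      # ~5 km/s
v_esc_jupiter = escape_velocity(1.9e27, 7.0e7)   # ~60 km/s

T = 210                              # a round upper-atmosphere temperature, K
v_h2 = thermal_speed(2 * AMU, T)     # ~1.6 km/s -- well over a sixth of Mars's 5 km/s
v_co2 = thermal_speed(44 * AMU, T)   # ~0.35 km/s -- Mars can mostly hang onto this

print(v_esc_mars, v_esc_jupiter, v_h2, v_co2)
```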

Earth is somewhere in the middle.  We don't have any loose hydrogen to speak of because it reacts with oxygen (because life), but we also don't have much helium because it escapes.

Blasting oxygen into Mars's atmosphere would work for a while.  Probably for a long while, in human terms (to be fair, atmospheric escape on Mars is measured in kg per second, or tens of thousands of tons per year, much smaller than the in-blasting rate would be).  In the end, though, trying to terraform Mars means taking oxygen out of surface minerals and sending it into space, with a stopover in the atmosphere.

But there's another wildcard when it comes to establishing a long-term presence on a planet like Mars.  Let's put aside the idea of terraforming the atmosphere and stick to enclosed, radiation-shielded, heated spaces with artificially dense air.

The surface gravity of Mars is about 40% of that on Earth.  What does that mean?  We have no idea.  We have some idea of how microgravity (also known as zero-g) affects people.  Though fewer than a thousand people have ever been to space, some have spent enough time there for the effects to be studied.  They're not great.  They include loss of muscle and bone, a weakened immune system, decreased production of red blood cells and lots of other, less serious issues.

Obviously, none of this is fatal, there are ways to mitigate most of the effects, and some of them, like decreased muscle mass, may not matter if you're going to spend your whole life in space rather than coming back to earth after a few months (no one has ever spent more than about 14 months in space).  But then, that's a problem, too.  No one has spent years in microgravity.  No one has ever been born in microgravity or grown up in it.  We can guess what might happen, but it's a guess.

No one, ever, has spent any significant time in 40% of Earth gravity.  The closest is that two dozen people have been to the Moon (16% of Earth gravity), staying at most just over three days.  We know even less about the effects of Mars gravity on humans than we do about microgravity, which is only a little bit.

Maybe people would be just fine.  Maybe 40% is enough to trigger the same responses as happen normally under full Earth gravity.  Maybe it leads to a slow, miserable death as organ systems gradually shut down.  Maybe babies can be born and grow to adulthood just as well with 40% gravity as 100%.  Leaving aside the ethics of finding that out, maybe it just won't work.  Maybe a child raised under 40% gravity is subject to a host of barely-manageable ailments.  Maybe they do just great and enjoy a childhood of truly epic dunks at the 4-meter basketball hoop on the dome's playground.

Whatever the answer is, there's absolutely nothing a hypothetical Mars colony could do about it.  You can corral a bit of atmosphere into a sealed space and adjust it to be breathable.  You can heat a small corner of the new world to human-friendly temperatures.  You can separate usable soil out of the salty, toxic surfaces and grow food in the reduced light (the Sun is about 43% as bright on Mars).  You can project scenes of a lush, green landscape on the walls.

No matter what you do, the gravity is going to be what it is, and whoever's living there will have to live with it however they can.

Thursday, August 8, 2024

OK, then, what is "abstract thought" (and how does it relate to AGI)?


With the renewed interest in AI*, and the possible prospect of AGI (artificial general intelligence), has come discussion of whether current AIs are capable of "abstract thought".  But what is abstract thought?  

From what I can tell:

  • Humans have the ability to think abstractly
  • Other animals might have it to some extent, but not in the way we do
  • Current AIs may or may not have it
  • It's essential to AGI: If an AI can't think abstractly, it can't be an AGI
There doesn't seem to be a consensus on whether abstract thought is sufficient for AGI (if it can think abstractly, it's an AGI) or just necessary (it has to be able to think abstractly to be an AGI, but that alone might not be enough).  This isn't surprising, I think, because there's not a strong consensus on what either of those terms means.

As I've argued previously, I personally don't think intelligence is any one thing, but a combination of different abilities, most of which can be present to greater or lesser degrees, as opposed to being binary "you have it or you don't" properties.  To the extent we know what abstract thought is, it's one of many things that make us intelligent, and it's probably not an all-or-nothing proposition either.

I've also argued that "AGI" itself is a nebulous term that means different things to different people, and that what people are (rightly) really interested in is whether a particular AI, or a particular kind of AI, has the capacity to radically disrupt our lives.  I've particularly argued against chains of reasoning like "This new AI can do X.  Being able to do X means it's an AGI.  That means it will radically disrupt our lives."  

My personal view is that the important part is the disruption.  Whether we choose to call a particular set of capabilities "AGI" is more a matter of terminology.  So, leaving aside the question of AGI, what is abstract thought, and, if we can answer that, how would it (or does it) affect what impact AIs have on our lives?

People have been thinking about this question, in various forms, for a long time.  In fact, if we consider the ability to consider questions like "What is abstract thought?" an essential part of what makes us human, people have been pondering questions of this kind for as long as there have been people, by definition.

If I can slice it a bit finer, it's even possible that such questions were pondered since before there were people.  That is, it's possible that some of our ancestors (or, for that matter, some group of dinosaur philosophers in the Jurassic) were able to ask themselves questions like this, but lacked other qualities that we consider essentially human.

I'm not sure what those other qualities would be, but it's not a logical impossibility, assuming we take the ability to ponder such questions as a defining quality of humanity, but not the defining quality.  That seems like the safer bet, since we don't know whether there are, or were, other living things on Earth with the ability to ponder the nature of thought.

The ability to think about thought is a form of metacognition, that is, thinking about thinking.  It's generally accepted that metacognition is a form of abstract thought, but it's not the only kind.  In fact, it's not a particularly relevant example, but untangling why that's so may take a bit of work.

Already -- and we're just getting started -- we have a small web of concepts, including:
  • intelligence
  • AI
  • AGI
  • abstract thought
  • metacognition
and interrelations, including:
  • An AI is something artificially constructed that has some form of intelligence
  • An AGI is an AI that has all known forms of intelligence (and maybe some we haven't thought of)
  • Abstract thought is one form of intelligence, and human intelligence in particular.
  • Therefore, an AGI must be capable of it, since an AGI is supposed to be capable of (at least) anything humans can do.
  • Metacognition is one form of abstract thought
  • Therefore an AGI must be capable of it in particular
and so on.

What does abstraction mean, then?  Literally, it means "pulling from", as in pulling out some set of properties of something and leaving out everything else.  For example, suppose some particular bird with distinctive markings likes to feed at your bird feeder.  You happen to know that that bird is a member of some particular species -- it's in some particular size range, its feathers are a particular color or colors, its beak is a particular shape, it sings a particular repertoire of songs, and so forth.

The species is an abstraction.  Instead of considering a particular bird, you consider some set of properties of that bird -- size, plumage, beak shape, song, etc.  Anything with those particular features is a member of that species.  In addition to these distinctive properties, this bird has other properties in common with other birds -- it has wings and feathers, for example, and with other vertebrates  -- it has a spine, and so on up to living things in general -- it can grow and reproduce.

In other words, there can be (and often are) multiple levels of abstraction.  In this example the levels I've given are: particular species, bird, vertebrate, living thing.  Each level has all the properties of the levels above it.  A bird of the particular species has wings and feathers, like birds in general, a spine, like vertebrates in general, and the capacity to grow and reproduce, like living things in general.
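
If it helps, here's the same idea as a toy class hierarchy -- the names are invented; the point is just that each more specific level inherits the properties of the more general ones:

```python
class LivingThing:
    def can_grow_and_reproduce(self):
        return True

class Vertebrate(LivingThing):
    def has_spine(self):
        return True

class Bird(Vertebrate):
    def has_wings_and_feathers(self):
        return True

class FeederVisitor(Bird):
    """The particular species: a particular size range, plumage, beak shape, songs."""
    plumage = "distinctive markings"

# The individual bird at the feeder would be an instance; every level above it
# abstracts away some of its particulars and keeps others.
that_bird = FeederVisitor()
print(that_bird.has_wings_and_feathers(), that_bird.has_spine(), that_bird.can_grow_and_reproduce())
```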

If abstraction is pulling out particular properties and disregarding others, then it seems reasonable that anything that can do this can think abstractly.  There's a case to be made that AIs can already do this.  A spam filter can classify emails as spam or not spam, and spamminess is pretty clearly an abstract property, or a collection of them.  A chatbot can answer questions like "What do an apple, an orange, a banana and a pear all have in common?" (answer from the  one I asked: "They are all fruit").

Except ... that's not exactly what I said.  A spam filter is just determining whether a message is similar to the examples of spam it's been trained on.  It can't necessarily tell you what properties of the email led to that conclusion.  Early spam filters could do just that -- this email contains these keywords, it contains links to these known-bad sites (and, likely, the sites themselves have been classified as spammy because of their properties), and so forth.

A current spam filter is explicitly not drawing out some set of properties and rating spamminess based on them.  You probably can find something similar in the model, some signal at some level that's more or less activated if, say, some particular keywords are present, but there's nothing special about that particular signal, and there could be many like it, each feeding into subsequent layers in different ways.

In other words, a current-generation spam filter, which is generally regarded as smarter than one that explicitly examines properties of an email, is not doing any abstraction, at least not as I've defined it above, while the older-generation one is.
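
As a cartoon of that difference (both filters are invented for illustration; model stands in for some trained classifier with a scikit-learn-style predict_proba method, an assumption rather than a reference to any particular system):

```python
# Old style: explicit properties, each of them nameable and reportable.
BAD_KEYWORDS = {"free money", "act now", "wire transfer"}
BAD_DOMAINS = {"totally-legit-pharma.example"}

def rule_based_is_spam(email_text, links):
    reasons = []
    if any(k in email_text.lower() for k in BAD_KEYWORDS):
        reasons.append("contains known spam keywords")
    if any(domain in link for link in links for domain in BAD_DOMAINS):
        reasons.append("links to a known-bad site")
    return bool(reasons), reasons          # it can tell you *why*

# Current style: one learned score over thousands of opaque features.
def learned_is_spam(email_text, model, featurize):
    score = model.predict_proba([featurize(email_text)])[0][1]
    return score > 0.5, []                 # no human-readable properties to report
```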

What about the chatbot?  It said that the apple, orange, banana and pear were all fruit.  "Fruit" is certainly an abstract category, and you could rephrase this as "they all have the property of being a fruit", or similar.  Let's dig a bit deeper.  Here's the original question:
What properties do an apple, an orange, a banana and a pear all have in common?

  • They are all fruits.
  • They are all edible.
  • They are all a good source of vitamins and minerals.
  • They all contain natural sugars. 
  • They are all perishable. 

Not too shabby, but fiddling with the prompts a bit casts some doubt as to whether the chatbot is reasoning from abstract properties.  After a few iterations, we end up with this:

What properties do rhubarb, an apple, a Jerusalem cherry and a Cavendish banana have in common?

While rhubarb might seem out of place in this group, it does share some botanical characteristics with the other three:

  • They are all plants.
  • They can all be used for culinary purposes.
  • They all grow from seeds or vegetative propagation.

It's important to note that while rhubarb is often treated as a fruit in culinary contexts, it is technically a vegetable due to its high oxalic acid content. 

I flagged this (and one other interaction) as unsafe since, and I'll boldface this just so there's no misunderstanding: Jerusalem cherries are toxic and cannot be used for culinary purposes. I feel like it would have been slightly more important to note that, rather than rhubarb being "technically" a vegetable, but maybe that's just me.

Leaving that aside, there's the usual LLM-driven confusion.  Fruits are not themselves plants, which also means that they don't themselves grow from seeds or vegetative propagation.  That's a property of plants as a whole, not their fruits.  Rhubarb may have a lot of oxalic acid, but that's not what makes it technically a vegetable.  In my experience, the longer you interact with an LLM, the further they go off the rails with errors like this.

"Technically a vegetable" is a bit imprecise for that matter.  If you're a botanist, it's a vegetable.  A baker, even knowing that the rhubarb in a pie is from the stem of a plant, would generally consider it a fruit, since a rhubarb pie is a lot like a cherry or apple pie and not so much like a savory pot pie of root vegetables flavored with herbs.  Neither is technically right or wrong.  Different properties matter in different contexts.

There's no reason to believe that LLM-driven chatbots are doing any kind of abstraction of properties, not just because they're not good at it, but more importantly there's no reason to believe they're ascribing properties to things to begin with.  If you ask what properties a thing has, they can tell you what correlates with that thing and with "property" and related terms in the training set, but when you try to elaborate on that, things go wonky.

While it's fun and generally pretty easy to get LLM-driven chatbots to say things that don't make sense, this all obscures a more basic point: Abstraction, as I've described it, doesn't really work.

Plato, so the story goes, defined a human as a "featherless biped". Diogenes, so the story continues, plucked a chicken and brought it to Plato's academe, saying "here's your human".  Even though Plato wasn't presenting a serious definition of human and the incident may or may not have happened at all, it's a good example of the difficulties of trying to pin down a set of properties that define something.

Let's try to define something simple and ordinary, say a house.  My laptop's dictionary gives "a building for human habitation", that is, a building that people live in.  Seems reasonable.  Building is a good example of an abstraction.  It pulls out the properties -- being built, not being movable, being meant for people to be in -- common to things like houses, office towers, stadiums, garden sheds and so on.  Likewise, human is an abstraction of whatever all of us people have in common.  Let's suppose we already have good definitions of those, based on their own properties (buildings being built by people, people walking on two legs and not having feathers, or whatever).

There's another abstraction in the definition that's maybe not as obvious: habitation.  An office tower isn't a house because people don't generally live there.  Habitation is an abstraction representing a set of behaviors, such as habitually eating and sleeping in a particular place.

The house I live in is clearly a house (no great surprise there).  It's a building, and people, including myself, live in it.  What about an abandoned house or one that's never been lived in?  That's fine.  The key point is that it was built for human habitation.

What about the US White House?  It does serve as a residence for the President and family members, but it's primarily an office building.  Nonetheless, "house" is right there in the name.  What about the US House of Representatives, or any of a number of Houses of Parliament throughout the world?  The US House is not a building (the building it meets in is the US Capitol).  People belong to it but don't live in it (though the spouse of a representative might dispute that).  But we still refer to the US House of Representatives as a "house".  In a similar way, fashion designers can have houses (House of Dior), aristocratic dynasties are called houses (House of Windsor), and so on.

You could argue that "house" has several meanings, each defined by its own properties, and that's fine, so let's stick to human habitation.  Can a tent be a house?  A yurt is generally considered a type of tent, and it's generally not considered a house because yurts are mobile, so they don't count as buildings.  Nevertheless, the Wikipedia article on them includes a picture of "An American yurt with a deck. Permanently located in Kelleys Island State Park".  The author of the caption clearly considered it a yurt.  It's something built for human habitation, permanently located in a particular location.  Is it a building or a tent (or both)?  If it's a building, is it a building under a different sense of the word?

What about a trailer home?  In theory, a trailer is mobile.  In practice, most present-day trailers are brought to their site and remain there indefinitely, often without ever moving again.  Though they're often referred to specifically as "trailers", I doubt it would be hard to find examples of someone saying "I was at so-and-so's house" referring to a trailer.

What about caves?  I had no trouble digging up a travel blog's listing of "12 cave houses", though several of those appear to be hotels.  Hotels are buildings for people to stay in, but not live in, even though some do.  A hotel is also subdivided into many rooms, typically occupied by people who don't know each other.  Apartments are generally not considered houses either, though a duplex or townhome (known in the UK as a "terraced house") generally is.  In any case, if someone adds some walls, a door and interior design to a cave, does that make it a house?  Looking at abstract properties, does this make it a building?

Is a kid's tree house a house?  Is a doll house?  What about a dog house or a bird house?

In a previous post, I explored the senses of the word out and argued that there wasn't any crisp definition by properties, or even a set of definitions for different senses, that covered all and only the ways we actually use the word out.  I used house as an example here because I hadn't already thought about its senses and didn't know exactly where I'd end up.

Honestly, the "building for human habitation" definition held up better than I expected, but it still wasn't hard to find examples that pushed at the boundaries.  In my experience, whatever concept you start with, you end up having to add more and more clauses to explain why a particular example is or isn't a house, and if you try to cover all the possibilities you no longer have a clear definition by a particular set of properties.

More likely, we have a core concept of "house", a detached building that one family lives in, and extend that concept based on similarities (a cave house is a place people live in, parts of it are built and it's not going anywhere) and metaphors (the house stands in for the family living in it, an example of metonymy).

As far as I can tell, this is just how language works, and language works this way because our minds work this way.  Our minds are constantly taking in a stream of sensory input and identifying objects from it, even when those objects are ill-defined, like clouds (literally nebulous) or aren't even there, like the deer I thought I saw through the snow crossing the road in hour 18 or so of a drive from California to Idaho.  We classify those objects in relation to other objects, or, more accurately, other experiences from which we've identified objects.


Identifying objects is itself an exercise in abstraction, deciding that a particular set of impulses in the optic nerve is a friend's face, or that a particular set of auditory inputs is a voice, or a dog barking, or a tree falling or whatever.  Recent generations of AIs which can recognize faces in photos or words in recordings of speech (much harder than it might seem) are doing the same thing.  We generally think that faces and words are too specific to be abstract, but is this abstract thinking?  If it is, how does it relate to examples like the ones I gave above, such as defining a species of animal?

When other animals do things like this, like a dog in the next room hearing kibble being poured into a dish or vervets responding to specific calls by acting to protect themselves from particular predators, we tend to think of it as literal thinking, not higher-level abstract thinking like we can do.  Any number of experiments in the 20th century studied stimulus/response behavior and considered "the bell was rung" as a simple concrete stimulus rather than an abstraction of a large universe of possible sounds, and likewise for a behavior like pressing a button to receive a treat.

I've described two related but distinct notions of abstraction here:
  • Defining concepts in terms of abstract properties like size, shape, color, how something came to be, what it's meant to be used for and so on (this species of bird is around this size with plumage of these colors, a house is a building for human habitation)
  • Identifying discrete objects (in a broad sense that includes things like sounds and motions) from a continuous stream of sensory input.
The first is the usual sense of abstraction.  It's something we do consciously as part of what we call reasoning.  Current AIs don't do it particularly well, or in many cases at all.  On the other hand, it's not clear how important it is in interacting with the world.  You don't have to be able to abstractly define house in order to build one or live in it.  You don't have to have a well-developed abstract theory in order to develop a new invention.  The invention just has to work.  Often, the theory comes along later.

Theories can be very helpful to people developing new technologies or making scientific discoveries, but they're not essential.  When AlphaFold discovers how a new protein will fold, it's not using a theory of protein folding.  In fact, that's its advantage, that it's not bound by any particular concept of how proteins should fold.

The second sort of abstraction is everywhere, once you think to look, so common as to be invisible.  It's crucial to dealing with the real world, and it's an important part of AI, for example in turning speech into text or identifying an obstacle for a robot to go around.  Since it's not conscious, we don't consider it abstraction, even if it may be a better fit for the concept of pulling out properties.  Since current AIs already do this kind of abstraction, and we don't consider an AI that recognizes faces in photos to be an AGI, this sort of abstraction clearly isn't enough to make something an AGI.

There may be some better definition of abstract thought that I'm missing, but neither of the two candidates above looks like the missing piece for AGI.  The first doesn't seem essential to the kind of disruption we assume an AGI would be capable of, and the second seems like basic infrastructure for anything that has to deal with the real world, AGI or not.


*That "renewed" is getting a little out of date.  Sometimes considerable time passes between starting a post and actually posting it.