Monday, September 14, 2020

How real are real numbers?

There is always one more counting number.

That is, no matter how high you count, you can always count one higher.  Or at least in principle.  In practice you'll eventually get tired and give up.  If you build a machine to do the counting for you, eventually the machine will break down or it will run out of capacity to say what number it's currently on.  And so forth.  Nevertheless, there is nothing inherent in the idea of "counting number" to stop you from counting higher.

In one brief sentence -- a sentence that, after untold work by mathematicians over the centuries, we can now state completely rigorously in several ways -- we've described something that exceeds the capacity of the entire observable universe as measured in the smallest units we believe to be measurable.  The counting numbers (more formally, the natural numbers) are infinite, but they can be defined not only by finite means, but fairly concisely.

There are levels of infinity beyond the natural numbers.  Infinitely many, in fact.  Again, there are several ways to define these larger infinities, but one way to define the most prominent of them, based on the real numbers, involves the concept of continuity or, more precisely, completeness in the sense that the real numbers contain any number that you can get arbitrarily close to.

For example, you can list fractions that get arbitrarily close to the square root of two: 1.4 (14/10) is fairly close, 1.41 (141/100) is even closer, 1.414 (1414/1000) is closer still, and if I asked for a fraction within one one-millionth, or trillionth, or within 1/googol, that is, one divided by ten to the hundredth power, no problem.  Any number of libraries you can download off the web can do that for you.
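For instance, here's a small Python sketch (my own illustration, not anything from a particular library) that produces those truncated-decimal approximations to as many digits as you like:

```python
from fractions import Fraction
import math

# Rational approximations to the square root of two: for each number
# of digits, take the largest fraction n/10^d whose square is at most 2.
for digits in range(1, 8):
    denom = 10 ** digits
    num = math.isqrt(2 * denom * denom)  # floor(sqrt(2) * denom), exactly
    approx = Fraction(num, denom)
    print(approx, float(approx) ** 2)    # 7/5 (=1.4), then 141/100, 707/500, ...
```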

Nonetheless, the square root of two is not itself the ratio of two natural numbers, that is, it is not a rational number (more or less what most people would call a fraction, but with a little more math in the definition).  The earliest widely-recognized recorded proof of this goes back to the Pythagoreans.  It's not clear exactly who else figured it out, or when, but the idea is certainly ancient.  No matter how closely you approach the square root of two with fractions, you'll never find a fraction whose square is exactly two.
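For reference, here's a compact version of the classic argument (a sketch of the usual proof by contradiction, not necessarily the Pythagoreans' original form):

```latex
% Suppose \sqrt{2} = p/q with p, q natural numbers in lowest terms.  Then:
p^2 = 2q^2
  \implies p^2 \text{ is even}
  \implies p \text{ is even, say } p = 2k
  \implies 4k^2 = 2q^2
  \implies q^2 = 2k^2
  \implies q \text{ is even}
% ... but p and q can't both be even if p/q is in lowest terms.
```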

OK, but why shouldn't the square root of two be a number?  If you draw a right triangle with legs one meter long, the hypotenuse certainly has some length, and by the Pythagorean theorem, that length squared is two.  Surely that length is a number?

Over time, there were some attempts to sweep the matter under the rug by asserting that, no, only rational numbers are really numbers and there just isn't a number that squares to two.  That triangle? Dunno, maybe its legs weren't exactly one meter long, or it's not quite a right triangle?

This is not necessarily as misguided as it might sound.  In real life, there is always uncertainty, and we only know the angles and the lengths of the sides approximately.  We can slice fractions as finely as we like, so is it really so bad to say that all numbers are rational, and therefore you can't ever actually construct a right triangle with both legs exactly the same length, even if you can get as close as you like?

Be that as it may, modern mathematics takes the view that there are more numbers than just the rationals and that if you can get arbitrarily close to some quantity, well, that's a number too.  Modern mathematics also says there's a number that squares to negative one, which has its own interesting consequences, but that's for some imaginary other post (yep, sorry, couldn't help myself).

The result of adding all these numbers-you-can-get-arbitrarily-close-to to the original rational numbers (every rational number is already arbitrarily close to itself) is called the real numbers.  It turns out that (math-speak for "I'm not going to tell you why", but see the post on counting for an outline) in defining the real numbers you bring in not only infinitely many more numbers, but so infinitely many more numbers that the original rational numbers "form a set of measure zero", meaning that the chances of any particular real number being rational are zero (as usual, the actual machinery that allows you to apply probabilities here is a bit more involved).

To recap, we started with the infinitely many rational numbers -- countably infinite since it turns out that you can match them up one-for-one with the natural numbers* -- and now we have an uncountably infinite set of numbers, infinitely too big to match up with the naturals.

But again we did this with a finite amount of machinery.  We started with the rule "There is always one more counting number", snuck in some rules about fractions and division, and then added "if you can get arbitrarily close to something with rational numbers, then that something is a number, too".  More concisely, limits always exist (with a few stipulations, since this is math).

One might ask at this point how real any of this is.  In the real world we can only measure uncertainly, and as a result we can generally get by with only a small portion of even the rational numbers, say just those with a hundred decimal digits or fewer, and for most purposes probably those with just a few digits (a while ago I discussed just how tiny a set like this is).  By definition anything we, or all of the civilizations in the observable universe, can do is literally as nothing compared to infinity, so are we really dealing with an infinity of numbers, or just a finite set of rules for talking about them?


One possible reply comes from the world of quantum mechanics, a bit ironic since the whole point of quantum mechanics is that the world, or at least important aspects of it, is quantized, meaning that a given system can only take on a specific set of discrete states (though, to be fair, there are generally a countable infinity of such states, most of them vanishingly unlikely).  An atom is made of a discrete set of particles, each with an electric charge that's either 1, 0 or -1 times the charge of the electron, the particles of an atom can only have a discrete set of energies, and so forth (not everything is necessarily quantized, but that's a discussion well beyond my depth).

All of this stems from the Schrödinger equation.  The discrete nature of quantum systems comes from there being only a discrete set of solutions to that equation for a particular set of boundary conditions.  This is actually a fairly common phenomenon.  It's the same reason that you can only get a certain set of tones by blowing over the opening of a bottle (at least in theory).

The equation itself is a partial differential equation defined over the complex numbers, which have the same completeness property as the real numbers (in fact, a complex number can be expressed as a pair of real numbers).  This is not an incidental feature, but a fundamental part of the definition in at least two ways: Differential equations, including the Schrödinger equation, are defined in terms of limits, and this only works for numbers like the reals or the complex numbers where the limits in question are guaranteed to exist.  Also, the equation includes π (by way of the reduced Planck constant ħ = h/2π), which is not just irrational, but transcendental, which more or less means it can only be defined as a limit of an infinite sequence.
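For concreteness, here's the time-dependent form of the equation for a single non-relativistic particle (one standard textbook presentation):

```latex
i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r},t)
  = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r},t) \right] \Psi(\mathbf{r},t),
\qquad \hbar = \frac{h}{2\pi}
```

Note the i on the left-hand side: the wave function Ψ is complex-valued from the start.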

In other words, the discrete world of quantum mechanics, our best attempt so far at describing the behavior of the world under most conditions, depends critically on the kind of continuous mathematics in which infinities, both countable and uncountable, are a fundamental part of the landscape.  If you can't describe the real world without such infinities, then they must, in some sense, be real.


Of course, it's not actually that simple.

When I said "differential equations are defined in terms of limits", I should have said "differential equations can be defined in terms of limits."  One facet of modern mathematics is the tendency to find multiple ways of expressing the same concept.  There are, for example, several different but equivalent ways of expressing the completeness of the real numbers, and several different ways of defining differential equations.

One common technique in modern mathematics (a technique is a trick you use more than once) is to start with one way of defining a concept, find some interesting properties, and then switch perspective and say that those interesting properties are the actual definition.

For example, if you start with the usual definition of the natural numbers: zero and an "add one" operation to give you the next number, you can define addition in terms of adding one repeatedly -- adding three is the same as adding one three times, because three is the result of adding one to zero three times.  You can then prove that addition gives the same result no matter what order you add numbers in (the commutative property).  You can also prove that adding two numbers and then adding a third one is the same as adding the first number to the sum of the other two (the associative property).
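Here's a minimal sketch of that construction in Python (using plain machine integers as stand-ins for the formal construction):

```python
def succ(n):
    return n + 1  # stand-in for the abstract "add one" operation

def add(a, b):
    # adding b is the same as adding one, b times
    return a if b == 0 else succ(add(a, b - 1))

# commutative: order doesn't matter
assert add(2, 3) == add(3, 2) == 5
# associative: grouping doesn't matter
assert add(add(1, 2), 3) == add(1, add(2, 3)) == 6
```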

Then you can turn around and say "Addition is an operation that's commutative and associative, with a special number 0 such that adding 0 to a number always gives you that number back."  Suddenly you have a more powerful definition of addition that can apply not just to natural numbers, but to the reals, the complex numbers, the finite set of numbers on a clock face, rotations of a two-dimensional object, orderings of a (finite or infinite) list of items and all sorts of other things.  The original objects that were used to define addition -- the natural numbers 0, 1, 2 ... -- are no longer needed.  The new definition works for them, too, of course, but they're no longer essential to the definition.
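Here's the clock-face case in the same vein, again just an illustrative sketch: addition modulo 12 satisfies the same properties, with no reference to how the natural numbers were built, so the abstract definition covers it too.

```python
def clock_add(a, b):
    # "clock face" addition: twelve numbers, wrapping around
    return (a + b) % 12

assert clock_add(7, 8) == clock_add(8, 7) == 3  # commutative
assert clock_add(clock_add(5, 9), 11) == clock_add(5, clock_add(9, 11))  # associative
assert clock_add(0, 10) == 10  # 0 is the identity
```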

You can do the same thing with a system like quantum mechanics.  Instead of saying that the behavior of particles is defined by the Schrödinger equation, you can say that quantum particles behave according to such-and-such rules, which are compatible with the Schrödinger equation the same way the more abstract definition of addition in terms of properties is compatible with the natural numbers.

This has been done, or at least attempted, in a few different ways (of course).  The catch is that these more abstract systems depend on the notion of a Hilbert space, which has even more and hairier infinities in it than the real numbers as described above.


How did we get from "there is always one more number" to "more and hairier infinities"?

The question that got us here was "Are we really dealing with an infinity of numbers, or just a finite set of rules for talking about them?"  In some sense, it has to be the latter -- as finite beings, we can only deal with a finite set of rules and try to figure out their consequences.  But that doesn't tell us anything one way or another about what the world is "really" like.

So then the question becomes something more like "Is the behavior of the real world best described by rules that imply things like infinities and limits?"  The best guess right now is "yes", but maybe the jury is still out.  Maybe we can define a more abstract version of quantum physics that doesn't require infinities, in the same way that defining addition doesn't require defining the natural numbers.  Then the question is whether that version is in some way "better" than the usual definition.

It's also possible that, as well-tested as quantum field theory is, there's some discrepancy between it and the real world that's best explained by assuming that the world isn't continuous and therefore the equations to describe it should be based on a discrete number system.  I haven't the foggiest idea how that could happen, but I don't see any fundamental logical reason to rule it out.

For now, however, it looks like the world is best described by differential equations like the Schrödinger equation, which is built on the complex numbers, which in turn are derived from the reals, with all their limits and infinities.  The (provisional) verdict then: the real numbers are real.


* One crude way to see that the rational numbers are countable is to note that there are no more rational numbers than there are pairs of numerator and denominator, each a natural number.  If you can count the pairs of natural numbers, you can count the rational numbers, by leaving out the pairs that have zero as the denominator and the pairs that aren't in lowest terms.  There will still be infinitely many rational numbers, even though you're leaving out an infinite number of (numerator, denominator) pairs, which is just a fun fact of dealing in infinities.  One way to count the pairs of natural numbers is to put them in a grid and count along the diagonals: (0,0), (1,0), (0,1), (2,0), (1,1), (0,2), (3,0), (2,1), (1,2), (0,3) ... This gets every pair exactly once.
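As a quick sanity check, here's that diagonal walk as a Python generator (my own sketch):

```python
from itertools import islice

def pairs():
    # walk the diagonals a + b = 0, 1, 2, ... of the grid of naturals
    s = 0
    while True:
        for b in range(s + 1):
            yield (s - b, b)
        s += 1

print(list(islice(pairs(), 10)))
# [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2), (3, 0), (2, 1), (1, 2), (0, 3)]
```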

All of this is ignoring negative rational numbers like -5/42 or whatever, but if you like you can weave all those into the list by inserting a pair with a negative numerator after any pair with a non-zero numerator: (0,0), (1,0), (-1,0), (0,1), (2,0), (-2,0), (1,1), (-1,1), (0,2), (3,0), (-3,0), (2,1), (-2,1), (1,2), (-1,2), (0,3) ... Putting it all together, leaving out the zero denominators and the pairs not in lowest terms, you get (0,1), (1,1), (-1,1), (2,1), (-2,1), (1,2), (-1,2), (3,1), (-3,1), (1,3), (-1,3) ...
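And the same walk with the filtering and negative-weaving applied (repeating the pairs() generator from the sketch above so this snippet stands alone):

```python
from math import gcd
from itertools import islice

def pairs():
    s = 0
    while True:
        for b in range(s + 1):
            yield (s - b, b)
        s += 1

def rationals():
    # keep only lowest-terms pairs with nonzero denominator,
    # weaving in a negative twin after each pair with nonzero numerator
    for a, b in pairs():
        if b != 0 and gcd(a, b) == 1:
            yield (a, b)
            if a != 0:
                yield (-a, b)

print(list(islice(rationals(), 11)))
# [(0, 1), (1, 1), (-1, 1), (2, 1), (-2, 1), (1, 2), (-1, 2), (3, 1), (-3, 1), (1, 3), (-1, 3)]
```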

Another, much more interesting way of counting the rational numbers is via the Farey Sequence.

Sunday, September 13, 2020

Entropy and time's arrow

When contemplating the mysteries of time ... what is it, why is it how it is, why do we remember the past but not the future ... it's seldom long before the second law of thermodynamics comes up.

In technical terms, the second law of thermodynamics states that the entropy of a closed system increases over time.  I've previously discussed what entropy is and isn't.  The short version is that entropy is a measure of uncertainty about the internal details of a system.  This is often shorthanded as "disorder", and that's not totally wrong, but it probably leads to more confusion than understanding.  This may be in part because uncertainty and disorder are both related to the more technical concept of symmetry, which may not mean what you might expect.  At least, I found some of this surprising when I first went over it.

Consider an ice cube melting.  Is a puddle of water more disordered than an ice cube?  One would think.  In an ice cube, each atom is locked into a crystal matrix, each atom in its place.  An atom in the liquid water is bouncing around, bumping into other atoms, held in place enough to keep from flying off into the air but otherwise free to move.

But which of the two is more symmetrical?  If your answer is "the ice cube", you're not alone.  That was my reflexive answer as well, and I expect that it would be for most people.  Actually, it's the water.  Why?  Symmetry is a measure of what you can do to something and still have it look the same.  The actual mathematical definition is, of course, a bit more technical, but that'll do for now.

An irregular lump of coal looks different if you turn it one way or another, so we call it asymmetrical.  A cube looks the same if you turn it 90 degrees in any of six directions, or 180 degrees in any of three directions, so we say it has "rotational symmetry" (and "reflective symmetry" as well).  A perfect sphere looks the same no matter which way you turn it, including, but not limited to, all the ways you can turn a cube and have the cube still look the same.  The sphere is more symmetrical than the cube, which is more symmetrical than the lump of coal.  So far so good.

A mass of water molecules bouncing around in a drop of water looks the same no matter which way you turn it.  It's symmetrical the same way a sphere is.  The crystal matrix of an ice cube only looks the same if you turn it in particular ways.  That is, liquid water is more symmetrical, at the microscopic level, than frozen water.  This is the same as saying we know less about the locations and motions of the individual molecules in liquid water than those in frozen water.  More uncertainty is the same as more entropy.

Geometrical symmetry is not the only thing going on here.  Ice at -100°C has lower entropy than ice at -1°C, because molecules in the colder ice have less kinetic energy and a narrower distribution of possible kinetic energies (loosely, they're not vibrating as quickly within the crystal matrix and there's less uncertainty about how quickly they're vibrating).  However, if you do see an increase in geometrical symmetry, you are also seeing an increase in uncertainty, which is to say entropy.  The difference between cold ice and near-melting ice can also be expressed in terms of symmetry, but a more subtle kind of symmetry.  We'll get to that.


As with the previous post, I've spent more time on a sidebar than I meant to, so I'll try to get to the point by going off on another sidebar, but one more closely related to the real point.

Suppose you have a box with, say, 25 little bins in it arranged in a square grid.  There are five marbles in the box, one in each bin on the diagonal from upper left to lower right.  This arrangement has "180-degree rotational symmetry".  That is, you can rotate it 180 degrees and it will look the same.  If you rotate it 90 degrees, however, it will look clearly different.

Now put a lid on the box, give it a good shake and remove the lid.  The five marbles will have settled into some random assortment of bins (each bin can only hold one marble).  If you look closely, this random arrangement is very likely to be asymmetrical in the same way a lump of coal is: If you turn it 90 degrees, or 180, or reflect it in a mirror, the individual marbles will be in different positions than if you didn't rotate or reflect the box.

However, if you were to take a quick glimpse at the box from a distance, then have someone flip a coin and turn the box 90 degrees if the coin came up heads, then take another quick glimpse, you'd have trouble telling if the box had been turned or not.  You'd have no trouble with the marbles in their original arrangement on the diagonal.  In that sense, the random arrangement is more symmetrical than the original arrangement, just like the microscopic structure of liquid water is more symmetrical than that of ice.
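To make the rotation talk concrete, here's a little sketch treating an arrangement as a set of (row, column) positions:

```python
def rot180(marbles, n=5):
    # rotating the box 180 degrees sends (r, c) to (n-1-r, n-1-c)
    return {(n - 1 - r, n - 1 - c) for r, c in marbles}

def rot90(marbles, n=5):
    # rotating 90 degrees sends (r, c) to (c, n-1-r)
    return {(c, n - 1 - r) for r, c in marbles}

diagonal = {(i, i) for i in range(5)}
print(rot180(diagonal) == diagonal)  # True: the diagonal has 180-degree symmetry
print(rot90(diagonal) == diagonal)   # False: a 90-degree turn gives the anti-diagonal
```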

[I went looking for some kind of textbook exposition along the lines of what follows but came up empty, so I'm not really sure where I got it from.  On the one hand, I think it's on solid ground in that there really is an invariant in here, so the math degree has no objections, though I did replace "statistically symmetrical" with "symmetrical" until I figure out what the right term, if any, actually is.

On the other hand, I'm not a physicist, or particularly close to being one, so this may be complete gibberish from a physicist's point of view.  At the very least, any symmetries involved have more to do with things like phase spaces, and "marbles in bins" is something more like "particles in quantum states".]

The magic word to make this all rigorous is "statistical".  That is, if you have a big enough grid and enough marbles and you just measure large-scale statistical properties, looking at distributions of values rather than the actual values, then an arrangement of marbles is more symmetrical if these rough measures don't change when you rotate the box (or reflect it, or shuffle the rows or columns, or whatever -- for brevity I'll stick to "rotate" here).

For example, if you count the number of marbles on each diagonal line (wrapping around so that each line has five bins), then for the original all-on-one-diagonal arrangement, there will be a sharp peak: five marbles on the main diagonal, one on each of the diagonals that cross that main diagonal, and zero on the others.  Rotate the box, and that peak moves.  For a random arrangement, the counts will all be more or less the same, both before and after you rotate the box.  A random arrangement is more symmetrical, in this statistical sense.

The important thing here is that there are many more symmetrical arrangements than not.  For example, there are ten wrap-around diagonals in a 5x5 grid (five in each direction) so there are ten ways to put five marbles in that kind of arrangement.  There are 53,130 total ways to put 5 marbles in 25 bins, so there are approximately 5,000 times as many more-symmetrical, that is, higher-entropy, arrangements.  Granted, some of these are still fairly unevenly distributed, for example four marbles on one diagonal and one off it, but even taking that into account, there are many more arrangements that look more or less the same if you rotate the box than there are that look significantly different.
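The counting is easy to check (the numbers below are the ones quoted above):

```python
import math

total = math.comb(25, 5)  # ways to place 5 marbles in 25 bins: 53130
diagonals = 10            # 5 wrap-around diagonals in each direction
print(total, total // diagonals)  # 53130 5313 -- roughly 5,000 to 1
```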

This is a toy example.  If you scale up to, say, the number of molecules in a balloon at room temperature, "many more" becomes "practically all".  Even if the box has 2500 bins in a 50x50 grid, still ridiculously small compared to the trillions of trillions of molecules in a typical system like a balloon, or a vase, or a refrigerator or whatever, the odds that all of the marbles line up on a diagonal are less than one in a googol (that's ten to the hundredth power, not the search engine company).  You can imagine all the molecules in a balloon crowding into one particular region, but for practical purposes it's not going to happen, at least not by chance in a balloon at room temperature.
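That too is easy to check (assuming the shaken marbles land uniformly at random):

```python
import math

total = math.comb(2500, 50)    # ways to place 50 marbles in 2500 bins
print(math.log10(total))       # about 105 -- already past a googol
print(100 / total < 10**-100)  # True: odds of filling one of the
                               # 100 wrap-around diagonals
```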

If you start with the box of marbles in a not-very-symmetrical state and shake it up, you'll almost certainly end up with a more symmetrical state, simply because there are many more ways for that to happen.  Even if you only change one part of the system, say by taking out one marble and putting it back in a random empty bin adjacent to its original position, there are still more cases than not in which the new arrangement is more symmetrical than the old one.

If you continue making more random changes, whether large or small, the state of the box will get more symmetrical over time.  Strictly speaking, this is not an absolute certainty, but for anything we encounter in daily life the numbers are so big that the chances of anything else happening are essentially zero.  This will continue until the system reaches its maximum entropy, at which point large or small random changes will (essentially certainly) leave the system in a state just as symmetrical as it was before.
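Here's a toy simulation of that (my own sketch; "more symmetrical" is stood in for by the flatness of the diagonal counts from earlier):

```python
import random

def diag_peak(marbles, n=5):
    # height of the tallest count along the five wrap-around diagonals
    # in one direction; 5 means "all on one diagonal", 1 means spread out
    counts = [0] * n
    for r, c in marbles:
        counts[(c - r) % n] += 1
    return max(counts)

random.seed(1)
marbles = {(i, i) for i in range(5)}  # start on the main diagonal
for step in range(201):
    if step % 50 == 0:
        print(step, diag_peak(marbles))
    old = random.choice(sorted(marbles))
    new = (random.randrange(5), random.randrange(5))
    if new not in marbles:            # each bin holds at most one marble
        marbles.remove(old)
        marbles.add(new)
```

Run it a few times: the peak drops quickly from 5 and then hovers near the bottom, which is the marble-box version of entropy climbing to its maximum and staying there.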

That's the second law -- as a closed system evolves, its entropy will essentially never decrease, and if it starts in a state of less than maximum entropy, its entropy will essentially always increase until it reaches maximum entropy.


And now to the point.

The second law gives a rigorous way to tell that time is passing.  In a classic example, if you watch a film of a vase falling off a table and shattering on the floor, you can tell instantly if the film is running forward or backward: if you see the pieces of a shattered vase assembling themselves into an intact vase, which then rises up and lands neatly on the table, you know the film is running backwards.  Thus it is said that the second law of thermodynamics gives time its direction.

As compelling as that may seem, there are a couple of problems with this view.  I didn't come up with any of these, of course, but I do find them convincing:

  • The argument is only compelling for part of the film.  In the time between the vase leaving the table and it making contact with the floor, the film looks fine either way.  You either see a vase falling, or you see it rising, presumably having been launched by some mechanism.  Either one is perfectly plausible, while the vase assembling itself from its many pieces is totally implausible.  But the lack of any obvious cue like pottery shards improbably assembling themselves doesn't stop time from passing.
  • If your recording process captured enough data, beyond just the visual image of the vase, you could in principle detect that the entropy of the contents of the room increases slightly if you run the film in one direction and decreases in the other, but that doesn't actually help because entropy can decrease locally without violating the second law.  For example, you can freeze water in a freezer or by leaving it out in the cold.  Its entropy decreases, but that's fine because entropy overall is still increasing, one way or another (for example, a refrigerator produces more entropy by dumping heat into the surrounding environment than it removes in cooling its contents).  If you watch a film of ice melting, there may not be any clear cues to tell you that you're not actually watching a film of ice freezing, running backward.  But time passes regardless of whether entropy is increasing or decreasing in the local environment.
  • Most importantly, though, in an example like a film running, we're only able to say "That film of a vase shattering is running backward" because we ourselves perceive time passing.  We can only say the film is running backward because it's running at all.  By "backward", we really mean "in the other direction from our perception of time".  Likewise, if we measure the entropy of a refrigerator and its contents, we can only say that entropy is increasing as time as we perceive it increases.
In other words, entropy increasing is a way that we can tell time is passing, but it's not the cause of time passing, any more than a mile marker on a road makes your car move.  In the example of the box of marbles, we can only say that the box went from a less symmetrical to more symmetrical state because we can say it was in one state before it was in the other.

If you printed a diagram of each arrangement of marbles on opposite sides of a piece of paper, you'd have two diagrams on a piece of paper.  You couldn't say one was before the other, or that time progressed from one to the other.  You can only say that if the state of the system undergoes random changes over time, then the system will get more symmetrical over time, and in particular the less symmetrical arrangement (almost certainly) won't happen after the more symmetrical one.  That is, entropy will increase.

You could even restate the second law as something like "As a system evolves over time, all state changes allowed by its current state are equally likely" and derive increasing entropy from that (strictly speaking you may have to distinguish identical-looking potential states in order to make "equally likely" work correctly -- the rigorous version of this is the ergodic hypothesis).  This in turn depends on the assumptions that systems have state, and that state changes over time.  Time is a fundamental assumption here, not a by-product.

In short, while you can use the second law to demonstrate that time is passing, you can't appeal to the second law to answer questions like "Why do we remember the past and not the future?"  It just doesn't apply.

Saturday, September 12, 2020

What part of consciousness is social?

I think a lot of questions about consciousness fall into one of two categories:

  • What is it, that is, what features does it have, what states of consciousness are there, what are reasonable tests of whether something is conscious or not (given that we can't directly experience any consciousness but our own)?
  • How does it happen, that is, what causes things (like us, for example) to have conscious experiences?
Reading that over, I'm not sure it really captures the distinction I want to make.  The first item deals in experiments people know how to do right now, and there has been quite a lot of exciting work on the first type of question, falling under rubrics like "cognitive science" and "neural correlates of consciousness".

I mean for the second item to represent "the hard problem of consciousness", the "Why does anyone experience anything at all?" kind of question.  It's not clear whether one can conduct experiments about questions like this at all and, as far as I know, no one has an answer that isn't ultimately circular.

For example, "We have consciousness because we have a soul" by itself doesn't answer "What is a soul?" and "How does it give us consciousness?" or clearly suggest an experiment that could confirm or refute it.  Instead, it states a defining property (typically among others): A soul is something which gives us consciousness.  The discussion doesn't necessarily end there, but if there's an answer to How does consciousness happen in it, it's not in the mere assertion that souls give us consciousness.

Similarly, if we substitute more mechanistic terms like "quantum indeterminacy" or "chaos of non-linear systems" or whatever else for "soul" in "We have consciousness because ...", we haven't explained why that leads to the subjective experience of consciousness or provided a way to test the assertion.  We may well be able to demonstrate that some aspect or another of consciousness is associated with some structure -- some collection of neurons, one might expect -- where quantum indeterminacy or chaos plays a significant role, but that doesn't explain why that structure correlates with consciousness rather than being just another structure along with the gall bladder, earlobe or whatever else.

If we were able to pinpoint some complex of neural circuits that fire exactly when a person is conscious, or perhaps more realistically, in a particular state of waking consciousness, or consciousness of a particular experience, it would be tempting, then, to say "Aha! We've found the neural circuits that cause consciousness," but that's not really accurate, for a couple of reasons.

First, correlation doesn't imply cause, which is why we speak of neural correlates of consciousness, not causes.  Second, even if there's a good case that the neural pattern we locate really is a cause -- for example, maybe it can be demonstrated that if the pattern is disrupted the person loses consciousness, as opposed to the other way around -- we still don't know what is causing a person to have the subjective experience of consciousness.  We can talk with some confidence about patterns of neurons firing, or even of subjects reporting particular experiences, but we can't speak with confidence about people actually experiencing things.

If we didn't already know that subjective experiences existed (or, at least, I know my subjective experiences exist), there's nothing about the experiment that would tell us that they did, much less why.  All we know is that if neurons are firing in such-and-such a state, the subject reports conscious experiences.

Since we do experience consciousness, it's blindingly obvious to us that the subject must be as well, but again that just shifts the problem back a level: We're convinced that we have found something that causes the subject to experience what we experience, but that doesn't explain why we experience anything to begin with.  If we were all "philosophical zombies" that exhibited all the outward signs of consciousness without actually experiencing it, the experiment would run exactly the same -- except that no one would actually experience it happening.


That's more than I meant to say about the second bullet point.  I actually meant to explore the first one, so let's try that.

Suppose you're hanging out in your hammock on a pleasant afternoon (note to self: how did I let the summer go by without that?).  You hear the wind in the trees, maybe birds chirping or dogs barking or kids playing, or cars going by, or whatever.  You are alone with your own thoughts, but for a while even those die down and you're just ... being.  Are you conscious?  Unless you've actually drifted off to sleep, I think most people would answer yes.  If someone taps you on your shoulder or shouts your name, you'll probably respond, though you might be a bit slow to come back up to speed.  If it starts to rain, you'll feel it.  If something makes a loud noise and you manage to regain your meditative state, you're still liable to remember the noise.

On the other hand, it's something of a different state of consciousness than much of our usual existence.  There's nothing verbal going on.  There's no interaction with other people, none of the constant evaluation (much of which we're generally not aware of) concerning what people might be thinking, or whether they heard or understood you, or whether you're understanding them, or what their motives might be, or their opinions of you or others around, or what they might be aware of or unaware of.  You're not having an inner conversation with yourself or that jerk who cut you off at the intersection, and there's little to no self-consciousness, if you're only focusing on the sensory experience of the moment (indeed, this is a major reason people actively seek such a meditative state).

I've become more and more convinced over time that we often underestimate how conscious other beings are.  I don't subscribe to the sort of literal panpsychism that holds that a brick has a consciousness, that "It is something (to a brick) to be a brick".  I doubt this is a particularly widely held position anyway, so much as the anchor at one end of a spectrum between it and "nothing is actually conscious at all".  However, I am open to the idea that anything with a certain minimum complement of capabilities which can be measured fairly objectively, including particularly senses and memory, has some sort of consciousness, and, as a corollary, that there are many different kinds or components of consciousness that different things have at different times.

For example, a hawk circling over a field waiting for a mouse to pop out of its burrow likely has some sort of experience of doing this, and if it spots a mouse, it has some sort of awareness of there now being prey to pursue with the goal of eating it or, if there are no mice, an awareness of being hungry.  This wouldn't be awareness on the verbal, reflective level we experience when we notice we are hungry and tell someone about it, but something more akin to that "I'm relaxing in a hammock and things are just happening" kind of awareness.  I also wouldn't claim that this awareness is serving any particular purpose.  Rather, it's a side effect of having the sort of mental circuitry a hawk has and being embodied in a universe where time exists -- another mystery that may well be deeply connected to the hard problem of consciousness.

I think this is in some sense the simplest hypothesis, given that we have the same general kind of neural machinery as hawks and that we can experience things happening.  It still presupposes that there's some sort of structural difference between things with at least some subjective experiences and things with no such experiences at all, but that "something" becomes a fairly general and widely-shared capacity for sensing the world and retaining some memory of it rather than a specialized facility unique to us.  The difference between us and a hawk is not that we're conscious and hawks aren't, but that we have a different set of experiences from hawks.  For the most part this would be a larger set of experiences, but, if you buy the premise of hawks having experiences at all, there are almost certainly some that they have but we don't.


Which leads me back to the title of this post.

I suspect that if you polled a bunch of people about consciousness in other animals, you'd see more "yes" answers to "is a chimpanzee conscious" or "is a dog conscious" than to "is a hawk conscious" or "is a salmon conscious".  Some of this is probably due to our concept of intelligence in other animals.  Most people probably think that chimps and dogs are "smart animals", while hawks and salmon are "just regular animals".

However, I think our judgment of that is strongly colored by chimps and dogs being more social animals than hawks or fish (even fish that school are probably not social in the same way we are -- I'd go into why I think that, but this post is already running a bit long).  It doesn't take much observation of chimps and dogs interacting with their own species and with humans to conclude that they have some awareness of individual identities and social structure, the ability to persuade others to do what they want (or at least try), and other aspects of behavior that are geared specifically toward interaction with those around them.  Other animals do interact with each other, but social animals like chimps, dogs and humans normally do so on a daily basis as a central part of life.

This social orientation produces its own set of experiences beyond "things are happening in the physical world" experiences like hunger and an awareness that some potential food just popped out of a burrow.  I think it's this particular kind of experience that we tend to gravitate toward when we think of conscious experience.  More specifically, self-awareness is often held out as the hallmark of "true consciousness", and I think there's a good case that self-awareness is closely connected to the sort of "what is that one over there thinking and what do they want" calculation that comes of living as a social animal.

To some extent this is a matter of definition.  If you define consciousness as self-awareness, then it's probably relatively rare, even if several species are able to pass tests like the mirror test (Can the subject tell that the animal in the mirror is itself?).  However, if you define consciousness as the ability to have subjective experiences, then I think it's hard to argue that it's not widespread.  In that formulation, self-awareness is a particular kind of subjective experience limited to relatively few kinds of being, but only one kind of experience among many.