Friday, September 18, 2015

Counting on Freeway Rick

First, a disclaimer:  There are obviously a number of hot-button issues relating to the story of former crack cocaine distributor Ricky "Freeway Rick" Ross.  This blog is not about such issues.  I'm not going to touch on them here and (not that I expect this to be an issue) I'll summarily delete comments if necessary.  Hey, it's my blog.  If you're wondering "What hot-button issues?", feel free to chase the Wikipedia link above.

With that out of the way ...

In a post on counting (see section 3), I talked about the problem of counting how many crimes a person had committed.  This might seem like a fairly academic question, but the case of Ricky Ross illustrates how it can be anything but.  Ross was originally sentenced to life without parole for buying 100 kilos of cocaine from an informant.  Since Ross had previously been convicted of two drug-related felonies, this ran him afoul of the federal "three strikes" law, triggering its mandatory sentencing.

Except ...

That sentencing was later overturned on the basis that Ross's previous two convictions -- in different states -- should have been counted as a single conviction as they related to a single conspiracy (or "continuous criminal spree").  That meant that the conviction for selling to the informant was only Ross's second strike, and the sentence was accordingly reduced to 20 years.  Ross was released in 2009, with time off for good behavior.

But there's more.

After his release, Ross became embroiled in a lawsuit against William "Rick Ross" Roberts, a prison guard turned gangsta rapper who had sold millions of records under the name "Rick Ross".  Freeway Rick lost the suit, but one of the legal issues that came up was whether each of Roberts' "Rick Ross" albums should count as a separate claim, or whether the "single publication" rule applied.

Whatever else one might think about Rick Ross's legal history, it's clear from it that what might seem like a simple matter of counting can be tricky and serious business.

Sunday, September 13, 2015

More on the invisible oceans, and their roundness

Previously, in speculating about what it would take for an inhabitant of one of the several subsurface oceans thought to exist in our solar system to discover that their world was round, I said
Figuring out that the world is round would be a significant accomplishment.  The major cues the Greeks used -- ships sinking below the horizon, lunar eclipses, the position of the noontime sun at different latitudes -- would not be available.  The most obvious route left is to actually circumnavigate the world.  And figure out that you did it.
Fortunately, I was smart enough to leave myself some wiggle room: "I'm very reluctant to say 'such and such would be impossible because ...'"  After all, humanity is in a similar situation in trying to figure out the shape of our universe, though in our case circumnavigating doesn't seem to be even remotely close to an option.  Even so, we've had some apparent success.

On further reflection, there are at least two ways an intelligent species living in a world like Ganymede's subsurface ocean could figure out that the world is round.

First, and perhaps most likely, it's not necessary for a Ganymedean to circumnavigate the world, if something can.  In this case that something would be sound.  Under the right conditions, sound can travel thousands of kilometers underwater.  The circumference of Ganymede is around 16,500km, which may be too far.  Europa's is around 9,800km, which as I understand it is on the edge of what experiments in Earth's oceans have been able to detect (pretty impressive, that).

We've already postulated that sound would be one of the more important senses in such a world.  It doesn't seem impossible that someone would notice that especially loud noises tended to be followed a few hours later by similar noises from all directions.  This assumes that the oceans are unobstructed, but if they're not, you have landmarks to measure by, which would make the original "circumnavigate and tell that you did it" less of a challenge.
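To put rough numbers on that timing, here's a back-of-the-envelope sketch.  The sound speed is an assumption on my part: ~1500 m/s is the value for Earth seawater, and the real value would depend on composition, pressure and temperature.  The radii are the published mean radii of the moons themselves; the ocean shell would be somewhat smaller, so these are upper bounds.

```python
import math

# Back-of-the-envelope timing for sound circling a subsurface ocean.
# SOUND_SPEED_M_S is an assumption: ~1500 m/s is typical for Earth
# seawater; the real value depends on composition, pressure and
# temperature.
SOUND_SPEED_M_S = 1500.0

MEAN_RADIUS_KM = {"Ganymede": 2634, "Europa": 1561}  # published mean radii

def circumnavigation_hours(radius_km, speed_m_s=SOUND_SPEED_M_S):
    """Hours for sound to travel once around a great circle."""
    circumference_m = 2 * math.pi * radius_km * 1000
    return circumference_m / speed_m_s / 3600

for moon, r in MEAN_RADIUS_KM.items():
    print(f"{moon}: circumference ~{2 * math.pi * r:,.0f} km, "
          f"round trip ~{circumnavigation_hours(r):.1f} hours")
```

For Ganymede this comes out to roughly three hours, and a bit under two for Europa, so "loud noise, then echoes from everywhere some hours later" is at least the right order of magnitude.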

Second, if some sort of light-production and light-detection evolve, then one of the cues the Greeks used is indeed available, at least in theory: one can see things disappear over the horizon.  To actually make use of this one would need to be able to see far enough to tell that the object was disappearing due to curvature and not behind some obstacle, or simply because it was too far away to see.  The exact details depend on how smooth the inner surface is.

Humans noticed the effect with ships because a calm sea is quite flat, that is to say, quite close to perfectly round.  If the inner surface of the ocean is rough, one might have to float at a considerable distance from it, and thus wait for the object to recede a considerable distance, to be sure of the effect.  On the other hand, floating a considerable distance from the inner surface would be much easier than floating the same distance from the surface of the earth.  For that matter, it would also be possible to note what's visible and what's not at different distances from the inner surface.

A little back-of-the-envelope estimation suggests that one would have to be able to see objects kilometers or tens of kilometers away.  By way of comparison, Earth's oceans are quite dark at a depth of one kilometer, so this seems like a longshot.  Nor does it help that it's possible to hear long distances, since sound doesn't necessarily propagate in a straight line (neither does light, but that's a different can of worms).
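For the curious, here's the estimation itself, using the standard horizon approximation d ≈ √(2Rh) for an observer floating h above a smooth sphere of radius R.  The inner-surface radius is a guess on my part (somewhat smaller than Ganymede's 2634 km mean radius); anything in the right ballpark gives the same qualitative answer.

```python
import math

# Horizon distance above a smooth spherical inner surface: d ≈ sqrt(2*R*h).
# R, the radius of the ocean floor, is not well constrained; the figure
# below is a placeholder somewhat smaller than Ganymede's mean radius.
FLOOR_RADIUS_M = 2_000_000  # assumed, ~2000 km

def horizon_km(height_m, radius_m=FLOOR_RADIUS_M):
    """Approximate distance at which an object on the surface drops below
    the horizon, for an observer floating height_m above that surface."""
    return math.sqrt(2 * radius_m * height_m) / 1000

for h in (1, 10, 100, 1000):
    print(f"floating {h:>4} m up: horizon ~{horizon_km(h):.0f} km")
```

Floating a meter up gives a horizon a couple of kilometers off; floating a kilometer up pushes it past sixty.  Hence "kilometers or tens of kilometers".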


As I originally disclaimed, it's not a good idea to rule something out as impossible just because you can't think of a way to do it.  The inhabitants of a subsurface ocean would have thousands, if not millions, of years to figure things out, even if they wouldn't have the advantage of already knowing approximately what their world looks like.

Wednesday, August 26, 2015

Grammars ... not so fast

In the previous post, I made the rather bold statement that the effort to describe natural languages with formal grammars had been "without great success".  I probably could have put that better.

I was referring at the time to the effort to define a "universal grammar" that would capture the essence of linguistic competence, that is, people's knowledge of language, independent of performance, that is, what people actually say.  A grammar, in that sense, is a set of rules that will produce all, and only, the sentences that speakers of a given language know how to produce.

In that sense, the search for grammars for natural languages has not had great success.


However, there is another line of inquiry that has had notable success, namely the effort to parse natural language, that is, to take something that someone actually said or wrote and give an account of how the words relate to each other that matches well with what actual people would say about the same sentence.

For example, in Colorless green ideas sleep furiously, we would say that colorless and green are describing ideas, ideas are what's sleeping, and furiously modifies sleep.  And so would any number of parsers that have been written over the years.

This is an important step towards actually understanding language, whatever that may mean, and it indicates that, despite what I said previously, there may be such a thing as syntax independent of semantics.  We can generally handle a sentence like Colorless green ideas sleep furiously the same as Big roan horses run quickly, without worrying about what it might mean for an idea to be both green and colorless or for it to sleep.


So what gives?  There are at least two major differences between the search for a universal grammar and the work on parsers.

First, the two efforts have significantly different goals.  It's not entirely clear just what a universal grammar might be, which was one of the main points of the previous post, not to mention many more knowledgeable critiques.  If universal grammar is a theory, what facts is it being asked to explain?

As far as I understand it, the idea is to explain why people say and comprehend what they do, by producing a grammar that enumerates what sentences they are and aren't competent to say, keeping in mind that people may make mistakes.

However, narrowing this down to "all and only" the "right" sentences for a given language has proved difficult, particularly since it's hard to say where the line between "competence" and "performance" lies.  If someone says or understands something that a proposed universal grammar doesn't generate, that is, something outside their supposed "competence", what does that mean?  If competence is our knowledge of language, then is the person somehow saying something they somehow don't know how to say?

The work on parsing has very well-defined goals.  Typically, algorithms are evaluated by running a large corpus of text through them and comparing the results to what people came up with.  As always, one can argue with the methodology, and in particular there is a danger of "teaching to the test", but at least it's very clear what a parser is supposed to do and whether a particular parser is doing it.  Research in parsing is not trying to characterize "knowledge", but rather to replicate particular behavior, in this case how people will mark up a given set of sentences.

Second, the two efforts tend to use different tools.  Much work on universal grammar has focused on phrase-structure grammars, transformational grammars and in general (to the small extent I understand the "minimalist program") on nested structures: a sentence consists of a verb part and noun parts, a verb part includes a main verb and modifying clauses, which may in turn consist of smaller parts, etc.

While there are certainly parsing approaches based on this work, including some fairly successful ones, several successful natural language parsers are based on dependency grammars, which focus on the relations among words rather than a nesting of sentence parts.  Where a phrase-structure grammar would say that colorless green ideas is a noun phrase and so forth, a dependency grammar would say that colorless and green depend on ideas (in the role of adjectives).  Dependency grammars can trace their roots back to some of the earliest known work in grammar hundreds of years ago, but for whatever reason they seemed to fall out of favor for much of the late 20th century.

Leaving aside some interesting questions about issues like "non-projective dependencies" (where the lines from the dependent words to the words they depend on cross when the sentence is laid out in its written or spoken order), it's easy to build parsers for dependency grammars using basically the same technique (shift-reduce parsing) as compilers for computer languages use.  These parsers tend to be quite fast, and about as accurate as parsers based on phrase-structure grammars.
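To give a feel for how simple the machinery is, here's a sketch of the shift-reduce transitions (the "arc-standard" variety) in Python.  In a real parser a trained classifier picks each action; here I've supplied the action sequence by hand for one sentence, so this shows the mechanism, not the hard part.

```python
# A sketch of the shift-reduce (arc-standard) transition system used by
# dependency parsers. Real parsers choose each action with a trained
# classifier; here the actions are supplied by hand for one sentence.

def shift_reduce_parse(words, actions):
    """Apply SHIFT / LEFT / RIGHT actions; return (dependent, head) arcs."""
    stack, buffer, arcs = [], list(words), []
    for act in actions:
        if act == "SHIFT":       # move the next word onto the stack
            stack.append(buffer.pop(0))
        elif act == "LEFT":      # second-from-top depends on the top
            dep = stack.pop(-2)
            arcs.append((dep, stack[-1]))
        elif act == "RIGHT":     # top depends on second-from-top
            dep = stack.pop()
            arcs.append((dep, stack[-1]))
    # whatever remains on the stack at the end is the root of the sentence
    return arcs

words = ["colorless", "green", "ideas", "sleep", "furiously"]
actions = ["SHIFT", "SHIFT", "SHIFT",   # stack: colorless green ideas
           "LEFT", "LEFT",              # both adjectives attach to "ideas"
           "SHIFT", "LEFT",             # "ideas" attaches to "sleep"
           "SHIFT", "RIGHT"]            # "furiously" attaches to "sleep"

for dep, head in shift_reduce_parse(words, actions):
    print(f"{dep} -> {head}")
```

The output is exactly the dependency analysis described earlier: colorless and green depend on ideas, ideas depends on sleep, furiously depends on sleep, and sleep is left on the stack as the root.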


In short, there is a lot of interesting and successful work going on concerning the syntax of natural languages, just not in the direction (universal grammar and such) that I was referring to in the previous post.

Tuesday, August 18, 2015

What, if anything, is a grammar?

This is going to be one of the more technical posts.  As always, I hope it will be worth wading through the details.

There are hundreds, if not thousands, of computer languages.  You may have heard of ones like Java, C#, C, C++, ECMAScript (aka JavaScript, sort of), XML, HTML (and various other *ML, not to mention ML, which is totally different), Scheme, Python, Ruby, LISP, COBOL, FORTRAN etc., but there are many, many others.  They all have one thing in common: they all have precise grammars in a certain technical sense: A set of rules for generating all, and only, the legal programs in the given language.

This is true even if no one ever wrote the grammar down.  If you can actually use a language, there is an implementation that takes code and tells a computer to follow its instructions.  That implementation will accept some set of inputs and reject some other set of inputs, meaning it encodes the grammar of the language.

Fine print, because I can't help myself: I claim that if you don't have at least a formal grammar or an implementation, you don't really have a language.  You could define a language that, say, accepts all possible inputs, even if it doesn't do anything useful with them.  I'd go math major on you then and claim that the set of "other code" it rejects is empty, but a set nonetheless.  I'd also argue that if two different compilers are nominally for the same language but behave differently, not that that would ever happen, you have two variants of the same language, or technically, two different grammars.

Formal grammars are great in the world of programming languages.  They tell implementers what to implement.  They tell programmers what's legal and what's not, so you don't get into (as many) fights with the compiler.  Writing a formal grammar helps designers flush out issues that they might not have thought of.  There are dissertations written on programming language grammars.  There are even tools that can take a grammar and produce a "parser", which is code that will tell you if a particular piece of text is legal, and if it is, how it breaks down into the parts the grammar specifies.

Here's a classic example of a formal grammar:
  • expr ::= term | term '+' term
  • term ::= factor | factor '*' factor
  • factor ::= variable | number | '(' expr ')'
In English, this would be
  • an expression is either a term by itself or a term, a plus sign, and another term
  • a term is either a factor by itself or a factor, a multiplication sign ('*') and another factor
  • a factor is either a variable, a number or an expression in parentheses
Strictly speaking, you'd have to give precise definitions of variable and number, but you get the idea.

Let's look at the expression (x + 4)*(6*y + 3).  Applying the grammar, we get
  • the whole thing is a term
  • that term is the product of two factors, namely (x + 4) and (6*y + 3)
  • the first factor is an expression, namely x+4, in parentheses
  • that expression is the sum of two terms, namely x and 4
  • x is a factor by itself, namely the variable x
  • 4 is a factor by itself, namely the number 4
  • 6*y + 3 is the sum of two terms, namely 6*y and 3
  • and so forth
One thing worth noting is that, because terms are defined as the product of factors, 6*y + 3 is automatically interpreted the way it should be, with the multiplication happening first.  If the first two rules had been switched, it would have been interpreted as having the addition happening first.  This sort of thing makes grammars particularly useful for designing computer languages, as they give precise rules for resolving ambiguities.  For the same reason they also make it slightly tricky to make sure they're defining what you think they're defining, but there are known tools and techniques for dealing with that, at least when it comes to computer languages.
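To make this concrete, here's a minimal recursive-descent parser for the toy grammar, in Python.  Like the grammar as written, it allows at most one '+' or '*' at each level, and the precedence of '*' over '+' falls out of the rules with no extra machinery: a term is built from factors, so the product groups first.

```python
import re

# A minimal recursive-descent parser for the expr/term/factor grammar
# above. It returns nested tuples; note how the rules make '*' group
# more tightly than '+'.

TOKEN = re.compile(r"\s*([A-Za-z]\w*|\d+|[+*()])")

def tokenize(s):
    tokens, pos = [], 0
    while pos < len(s):
        m = TOKEN.match(s, pos)
        if not m:
            raise SyntaxError(f"bad input at {s[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

def parse(s):
    tree, rest = parse_expr(tokenize(s))
    if rest:
        raise SyntaxError(f"trailing input: {rest}")
    return tree

def parse_expr(toks):          # expr ::= term | term '+' term
    left, toks = parse_term(toks)
    if toks and toks[0] == "+":
        right, toks = parse_term(toks[1:])
        return ("+", left, right), toks
    return left, toks

def parse_term(toks):          # term ::= factor | factor '*' factor
    left, toks = parse_factor(toks)
    if toks and toks[0] == "*":
        right, toks = parse_factor(toks[1:])
        return ("*", left, right), toks
    return left, toks

def parse_factor(toks):        # factor ::= variable | number | '(' expr ')'
    if toks[0] == "(":
        inner, toks = parse_expr(toks[1:])
        if not toks or toks[0] != ")":
            raise SyntaxError("missing ')'")
        return inner, toks[1:]
    return toks[0], toks[1:]

# 6*y + 3 comes out as ('+', ('*', '6', 'y'), '3'): the product first.
print(parse("6*y + 3"))
print(parse("(x + 4)*(6*y + 3)"))
```

Each parse_* function is a direct transcription of the corresponding grammar rule, which is precisely why formal grammars are so handy for implementers.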


The grammar I gave above is technically a particular kind of grammar, namely a context-free grammar, which is a particular kind of phrase structure grammar.  There are several other kinds.  One way to break them down is the Chomsky hierarchy, which defines a series of types of grammar, each more powerful than the last in the sense that it can recognize more kinds of legal input.  A context-free grammar is somewhere in the middle.  It can define most of the rules for real programming languages, but typically you'll need to add rules like "you have to say what kind of variable x is before you can use it" that won't fit in the grammar itself.

At the top of the Chomsky hierarchy is a kind of grammar that is as powerful as a Turing machine, meaning that if you can't specify a language with such a grammar, no computer can compute whether a given sentence is legal according to the grammar.  From a computing/AI point of view (and even from a theoretical point of view), it sure would be nice if real languages could be defined by grammars somewhere in the Chomsky hierarchy.  As a result, decades of research have been put into finding such grammars for natural languages.

Without great success.

At first glance, it seems like a context-free grammar, or something like it, would be great for the job.  The formal grammars in the Chomsky hierarchy are basically more rigorous versions of the grammars that were taught, literally, in grammar schools for centuries.  Here's a sketch of a simple formal grammar for English:
  • S ::= NP VP (a sentence is a noun phrase followed by a verb phrase)
  • NP ::= N | Adj NP (a noun phrase is a noun or an adjective followed by a noun phrase)
  • VP ::= V | VP Adv (a verb phrase is a verb or a verb phrase followed by an adverb)
Put this together with a list of nouns, verbs, adjectives and adverbs, and voila: English sentences.  For example, if ideas is a noun, colorless and green are adjectives, sleep is a verb and furiously is an adverb, then we can analyze Colorless green ideas sleep furiously with this grammar just like we could analyze (x + 4)*(6*y + 3) above. 

Granted, this isn't a very good grammar as it stands.  It doesn't know how to say A colorless green idea sleeps furiously or I think that colorless green ideas sleep furiously or any of a number of other perfectly good sentences, but it's not hard to imagine adding rules to cover cases like these.

Likewise, there are already well-studied notions of subject-verb agreement, dependent clauses and such.  Translating them into formal rules doesn't seem like too big a step.  If we can just add all the rules, we should be able to build a grammar that will describe "all and only" the grammatical sentences in English, or whatever other language we're analyzing.
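In fact, the three rules above are easy to run directly.  Here's a minimal Python recognizer; the word lists are a hypothetical toy lexicon, just big enough for the examples.  NP ::= N | Adj NP unrolls to "any number of adjectives, then a noun", and VP ::= V | VP Adv to "a verb, then any number of adverbs".

```python
# A direct implementation of the sketch grammar above, with a tiny
# hypothetical lexicon.

LEXICON = {  # assumed toy word lists
    "noun": {"ideas", "horses"},
    "adj": {"colorless", "green", "big", "roan"},
    "verb": {"sleep", "run"},
    "adv": {"furiously", "quickly"},
}

def is_sentence(text):
    words = text.lower().split()
    i = 0
    while i < len(words) and words[i] in LEXICON["adj"]:
        i += 1                                   # Adj* ...
    if i >= len(words) or words[i] not in LEXICON["noun"]:
        return False                             # ... N   (the NP)
    i += 1
    if i >= len(words) or words[i] not in LEXICON["verb"]:
        return False                             # V ...
    while i < len(words) - 1 and words[i + 1] in LEXICON["adv"]:
        i += 1                                   # ... Adv* (the VP)
    return i == len(words) - 1

print(is_sentence("Colorless green ideas sleep furiously"))   # True
print(is_sentence("Furiously sleep ideas green colorless"))   # False
```

As the grammar predicts, the famous nonsense sentence is accepted and its reversal is rejected, with no appeal to meaning at all.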

Except, what do we mean by "grammatical"?

In the world of formal grammar, "grammatical" has a clear meaning: If the rules will generate a sentence, it's grammatical.  Otherwise it's not.

When it comes to people actually using language, however, things aren't quite so clear.  What's OK and what's not depends on context, whom you're talking to, who's talking and all manner of other factors.  Since people use language to communicate in the real world, over noisy channels and often with some uncertainty over what the speaker is trying to say and particularly over how the listener will interpret it, it's not surprising that we allow a certain amount of leeway.

You may have been taught that constructs like ain't gonna and it don't are ungrammatical, and you may not say them yourself, but if someone says them to you, you'll still know what they mean.  If a foreign-born speaker says something that makes more sense in their language than yours, something like I will now wash myself the hands, you can still be pretty certain what they're trying to say.  Even if someone says something that seems like nonsense, say That's a really call three, but there's a lot of noise, you'll probably assume they meant something else, maybe That's a really tall tree, if they're pointing at a tall tree.

In short, there probably isn't any such thing as "grammatical" vs. "ungrammatical" in practical use, particularly once you get away from the notion that "grammatical" means "like I was taught in school" -- and as far as that goes, different schools teach different rules and there is plenty of room for interpretation in any particular set of rules.  To capture such uncertainty, linguistics recognizes a distinction between competence (one's knowledge of language) and performance (how people actually use language), though not all linguists recognize a sharp distinction between the two.

When there is variation among individuals and over time, science tends to take a statistical view.  It's often possible to say very precisely what's going on statistically even if it's impossible to tell what will happen in any particular instance.  When there is communication in the presence of noise and uncertainty, information theory can be a useful tool.  It doesn't seem outrageous that a grammar aimed at describing real language would take a statistical approach.  Whatever approach it takes, it should at least provide a general account of what the analysis meant in information theoretic terms.
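To illustrate what a statistical approach might look like, here's a toy bigram model in Python, "trained" on a deliberately silly made-up corpus.  The corpus and the model are both far too small to mean anything; the point is only that some word sequences come out far more probable than others.

```python
from collections import Counter

# A toy illustration of "the probability of a sentence": a bigram model
# over a tiny, made-up corpus. The numbers are meaningless; the ranking
# is the point.

corpus = "my hair is on fire . my hat is on fire . my hair is lovely .".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sentence_prob(sentence):
    """Product of P(word | previous word). Words are assumed to occur in
    the corpus; unseen bigrams give probability zero."""
    words = sentence.split()
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigrams[(prev, word)] / unigrams[prev]
    return p

print(sentence_prob("my hair is on fire"))   # high, under this corpus
print(sentence_prob("fire on is hair my"))   # zero
```

Real language models smooth away the zeros and use vastly more context, but even this caricature assigns "my hair is on fire" a respectable probability and its reversal none at all.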

[I didn't elaborate on this in the original post, but as I understand it, Chomsky has strongly disputed both of these notions.  Peter Norvig quotes Chomsky as saying "But it must be recognized that the notion of probability of a sentence is an entirely useless one, under any known interpretation of this term."  This is certainly forceful, but not convincing, even after following the link to the original context of the quote, which contains such interesting assertions as "On empirical grounds, the probability of my producing some given sentence of English [...] is indistinguishable from the probability of my producing some given sentence of Japanese.  Introduction of the notion of "probability relative to a situation" changes nothing."  This must be missing something, somewhere.  If I'm a native English speaker and my hair is on fire, it's much more likely that I'm about to say "My hair is on fire!" than "閑けさや 岩にしみいる 蝉の声" or even "My, what lovely weather we're having".

The entire article by Norvig is well worth reading.  But I digress.]

Even if you buy into the idea of precise, hard-and-fast rules describing a language, there are several well-known reasons to think that there is more to the picture than what formal grammars describe.  For example, consider these two sentences:
  • The horse raced past the barn fell.
  • The letter sent to the president disappeared.
Formally, these sentences are essentially identical, but chances are you had more trouble understanding the first one, because raced can be either transitive, as in I raced the horse past the barn (in the sense of "I was riding the horse in a race"), or intransitive, as in The horse raced past the barn (the horse was moving quickly as it passed the barn -- there's also "I was in a race against the horse", but never mind).  On reading The horse raced, it's natural to think of the second sense, but the whole sentence only makes sense if you use the first sense of raced, but in passive voice.

Sentences like The horse raced past the barn fell are called "garden path" sentences, as they "lead you down the garden path" to an incorrect analysis and you then have to backtrack to figure them out.

A purely formal grammar describes only the structure of sentences, not any mechanism for parsing them, that is, recovering the structure of the sentence from its words.  There's good experimental data, though, to suggest that we parse The horse raced past the barn fell differently from The letter sent to the president disappeared.

A purely formal grammar doesn't place any practical limits on the sentence that it generates.  From a formal point of view,  The man whom the woman whom the dog that the cat that walked by the fence that was painted blue scratched barked at waved to was tall is a perfectly good sentence, and would still be a perfectly good sentence if we replaced the fence that was painted blue with the fence that went around the yard that surrounded the house that was painted blue, giving The man whom the woman whom the dog that the cat that walked by the fence that went around the yard that surrounded the house that was painted blue scratched barked at waved to was tall.

You could indeed diagram such a sentence, breaking it down into its parts, but I doubt very many people would say they understood it if it were spoken to them at a normal rate.  If you asked "Was it the house or the fence that was painted blue?", you might not get an answer better than guessing (I'm sure such experiments have been done, but I don't have any data handy).  Again, the mechanism used for understanding the sentence places limits on what people will say in real life.

In fact, while most languages, if not all, allow for dependent clauses like that was painted blue, they're used relatively rarely.  Nested clauses like the previous example are even rarer.  I recall recently standing in a fast food restaurant looking at the menu, signs, ads, warnings and so forth and realizing that a large portion of what I saw wasn't even composed of complete sentences, much less complex sentences with dependent clauses: Free side of fries with milkshake ... no climbing outside of play structure ... now hiring.  Only when I looked at the fine-print signs on the drink machine and elsewhere did I see mostly "grammatical" text, and even that was in legalese and clearly meant to be easily ignored.

In short, there are a number of practical difficulties in applying the concepts of formal grammar to actual language.  In the broadest sense, there isn't any theoretical difficulty.  Formal grammars can describe anything computable, and there's no reason to believe that natural language processing is uncomputable, even if we haven't had great success with this or that particular approach.  The problems come in when you try to apply that theory to actual language use.


Traditionally, the study of linguistics has been split into several sub-disciplines, including
  • phonology, the study of how we use sounds in language, for example why we variously use the sounds -s, -z and -ez to represent plurals and similar endings in English (e.g., cats, beds and dishes)
  • morphology, the study of the forms of words, for example how words is composed of word and a plural marker -s
  • syntax, the study of how words fit together in sentences; formal grammars are a tool for analyzing syntax
  • semantics, the study of how the meaning of a sentence can be derived from its syntax and morphology
  • pragmatics, the study of how language is actually used -- are we telling someone to do something? asking for information? announcing our presence? shouting curses at the universe?
It's not a given that actual language respects these boundaries.  Fundamentally, we use language pragmatically, sometimes to convey fine shades of meaning but sometimes in very basic ways.  We tolerate significant variation in syntax and morphology.  It's somewhat remarkable that we recognize a distinction between words and parts of words at all.

In fact, it's somewhat remarkable that we recognize discrete words.  On a sonogram of normal speech, it's not at all obvious where words start and end.  Clearly our speech-processing system knows something that our visual system -- which is itself capable of many remarkable things -- can't pick out of a sonogram.

Nonetheless, we can in fact recognize words and parts of words.  We know that walk, walks, walking and walked are all forms of the same word and that -s, -ing, and -ed are word-parts that we can use on other words.  If someone says to you I just merped, you can easily reply What do you mean, "merp"? What's merping?  Who merps?

This ability to break down flowing speech into words and words (spoken or written) into parts appears to be universal across languages, so it's reasonable to say "Morphology is a thing" and take it as a starting point.  Syntax, then, is the study of how we put morphological pieces together, and a grammar is a set of rules for describing that process.

The trickier question is where to stop.  Consider these two sentences:
  • I eat pasta with cheese
  • I eat pasta with a fork
Certainly these mean different things.  In the first sentence, with modifies pasta, as in Do they have pasta with cheese on the menu here? or It was the pasta with cheese that set the tone for the evening.  In the second sentence, with modifies eat, as in I know how to eat with a fork or Eating with a fork is a true sign of sophistication.  The only difference is what follows with.  The indefinite article a doesn't provide any real clue.  You can also say I eat pasta with a fine Chianti.

The real difference is semantic, and you don't know which case you have until the very last word of the sentence.  A fork is an instrument used for eating.  Cheese isn't.  If the job of syntax is to say what words relate to what, it would appear that it needs some knowledge of semantics to do its job.

There are several possible ways around problems like this:
  • Introduce new categories of word to capture whatever semantic information is needed in understanding the syntax.  Along with being a noun, fork can also be an instrument or even an eating instrument.  We can then add syntactic rules to the effect that with <instrument> modifies verbs while with <non-instrument> modifies nouns.  Unfortunately, people don't seem to respect these categories very well.  If someone has just described a method of folding a slice of cheese and using it to scoop pasta, then cheese is an eating instrument.  If we're describing someone with bizarre eating habits, a fork could be just one more topping for one's pasta.
  • Pick one or the other option as the syntactic structure, and leave it up to semantics to do with it as it will.  In this case:
    • Say that, syntactically, with modifies pasta, or more generally the noun, but that this analysis may be modified by the semantic interpretation -- if with is associated with an instrument, it's really modifying the verb.
    • Say that, syntactically, with modifies eat, or more generally the verb, and interpret eat with cheese as meaning something like "the pasta is covered with cheese when you eat it".  This may not be as weird as it sounds.  What else do you eat with cheese? is a perfectly good question, though you can argue that in this case the object of eat is what, and with cheese modifies what.
  • Make syntax indeterminate.  Instead of saying that with ... modifies eat or pasta, say that syntactically it modifies "eat or pasta, depending on the semantic interpretation".  This provides a somewhat cleaner separation between syntax and semantics, since the semantics only has to decide what's being modified without knowing the exact structure of the sentence it was in.
  • Say that with modifies eat pasta as a unit, and leave it up to semantics to slice things more finely.  This is much the same as the previous option.  Technically it gives a definite syntactic interpretation, but only by punting on the seemingly syntactic question of what modifies what.
  • Say that the distinction between syntax and semantics is artificial, and in real life we use both kinds of structure -- how words are arranged and what they mean -- simultaneously to make sense of language.  And while we're at it, the distinction between semantics and pragmatics may turn out not to be that sharp either.
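As a concrete sketch of the first of these options, here's what letting a semantic lexicon decide the attachment might look like.  The instrument list is entirely hypothetical, and as noted above, that's exactly the trouble: context can move cheese in and out of the instrument category.

```python
# A sketch of the "semantic categories" option: a (hypothetical) lexicon
# of instruments decides whether a "with" phrase attaches to the verb or
# the noun. The lexicon is the weak point, as discussed above.

INSTRUMENTS = {"fork", "spoon", "chopsticks"}  # assumed, context-dependent

def attach_with(verb, noun, with_object):
    """Return the word the 'with' phrase modifies under this scheme."""
    return verb if with_object in INSTRUMENTS else noun

print(attach_with("eat", "pasta", "fork"))    # attaches to the verb
print(attach_with("eat", "pasta", "cheese"))  # attaches to the noun
```

This gets the two example sentences right, but only by smuggling semantics into what was supposed to be a syntactic decision, which is rather the point of the whole list.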
There are two reasons one might want to separate syntax from semantics (and likewise for the other distinctions):
  • They're fundamentally separate.  Perhaps one portion of the brain's speech circuitry handles syntax and another semantics.
  • On the other hand, separating the two may just be a convenient way of describing the world.  One set of phenomena is more easily described with tools like grammars, while another set needs some other sort of tools.  Perhaps they're all aspects of the same thing deep down, but this is the best explanation we have so far.
As much as we've learned about language, it's striking, though not necessarily surprising, that fundamental questions like "What is the proper role of syntax as opposed to semantics?" remain open.  However, it seems hard to answer the question "What, if anything, is a grammar?" without answering the deeper question.

[If you're finding yourself asking "But what about all the work being done in parsing natural languages?" or "But what about dependency grammars?", please see this followup]

Thursday, July 2, 2015

Do androids trip on electric acid?

Have a look at some of these images and take note of whatever adjectives come to mind.  If other people's responses are anything to go by, there's a good chance they include some or all of "surreal", "disturbing", "dreamlike", "nightmarish" or "trippy".  Particularly "trippy".

These aren't the first computer-generated images to inspire such descriptions.  Notably, fractal images have been described in psychedelic terms at least since the Mandelbrot set came to general attention, and the newer, three-dimensional varieties seem particularly evocative.  The neural-network generated images, however, are in a different league.  What's going on?

Real neural systems appear to be rife with feedback loops.  In experiments with in vitro neuron cultures -- nerve cells growing in dishes with electrodes attached here and there -- a system with no other input to deal with will amplify and filter whatever random noise there is (and there is always something) into a clear signal.  This would be a "signal" in the information theory sense of something that's highly unlikely to occur by chance, not a signal in the sense of something conveying a particular meaning.

This distinction between the two senses of "signal" is important.  Typically a signal in the information theory sense is also meaningful in some way.  That's more or less why they call it "information theory".  There are plenty of counterexamples, though.  For example:
  • tinnitus (ringing of the ears), where the auditory system fills in a frequency that the ear itself is no longer able to detect
  • pareidolia, where one sees images of objects in random patterns, such as faces in clouds
  • the gambler's fallacy, where one expects a random process to remember what it has already done and compensate for it ("I've lost the last three hands.  I'm bound to get good cards now.")
and so forth.  The common thread is that part of the brain is expecting to perceive something -- a sound, a face, a balanced pattern of "good" and "bad" outcomes -- and selectively processes otherwise meaningless input to produce that perception.

In the generated images, a neural network is first trained to recognize a particular kind of image -- buildings, eyes, trees, whatever -- and the input image is adjusted bit by bit to strengthen the signal to the recognizer.  The code doing the adjustment knows nothing about what the recognizer expects.  It just tries something, and if the recognizer gives it a stronger signal as a result, it keeps the adjustment.  If you start with random noise, you end up with the kind of images you were looking for.  If you start with non-random input, you get a weird mashup of what you had and what you were looking for.
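To make the loop concrete, here's a toy sketch in Python (my own illustration, not the actual image-generation code): the "recognizer" is just a dot product against a fixed target pattern, standing in for a trained network, and the adjustment loop blindly keeps whatever perturbations strengthen its signal.

```python
import random

# Toy "recognizer": scores how strongly the input resembles a fixed
# target pattern.  A stand-in for a trained neural network's output.
TARGET = [1.0 if i % 3 == 0 else -1.0 for i in range(30)]

def recognizer_signal(image):
    return sum(a * b for a, b in zip(image, TARGET))

def strengthen(image, steps=2000, rng=random.Random(42)):
    """Blindly perturb the input, keeping any change that strengthens
    the recognizer's signal.  The loop knows nothing about TARGET."""
    image = list(image)
    best = recognizer_signal(image)
    for _ in range(steps):
        i = rng.randrange(len(image))
        old = image[i]
        image[i] = old + rng.uniform(-0.1, 0.1)
        new = recognizer_signal(image)
        if new > best:
            best = new          # keep the adjustment
        else:
            image[i] = old      # revert it
    return image, best

# Start from random noise and watch the signal climb
rng = random.Random(0)
noise = [rng.uniform(-1, 1) for _ in range(30)]
before = recognizer_signal(noise)
adjusted, after = strengthen(noise)
print(before, after)
```

Starting from non-random input instead of noise gives the same kind of result: the adjusted input drifts toward whatever the recognizer is tuned for, layered on top of what was already there.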

Our brains almost certainly have this sort of feedback loop built in.  Real input often provides noisy and ambiguous signals.  Is that a predator behind those bushes, or just a fallen branch?  Up to a point it's safer to provide a false positive ("predator" when it's really a branch) than a false negative ("branch", when it's really a predator), so if a predator-recognizer feeds "yeah, that might be a four-legged furry thing with teeth" back to the visual system in order to strengthen the signal, survival chances should be better than with a brain that doesn't do that.  A difference in survival chances is exactly what natural selection needs to do its work.

At some point, though, too many false positives mean wasting energy, and probably courting other dangers, by jumping at every shadow.  Where that point is will vary depending on all sorts of things.  In practice, there will be a sliding scale from "too complacent" to "too paranoid", with no preset "right" amount of caution.  Given that chemistry is a vital part of the nervous system's operation, it's not surprising that various chemicals could move such settings.  If the change is in a useful direction, we call such chemicals "medicine".  Otherwise we call them "drugs".

In other words -- and I'm no expert here -- it seems plausible that we call the images trippy because they are trippy, in the sense that the neural networks that produced them are hallucinating in a manner similar to an actual brain hallucinating.  Clearly, there's more going on than that, but this is an interesting result.


When testing software, it's important to look at more than just the "happy" path.  If you're testing code that divides numbers, you should see what it does when you ask it to divide by zero.  If you're testing code that handles addresses and phone numbers, you should see what it does when you give it something that's not a phone number.  Maybe you should feed it some random gibberish (noting exactly what that gibberish was, for future reference), and see what happens.

Testing models of perception (or of anything else) seems similar.  It's nice if your neural network for recognizing trees can say that a picture of a tree is a picture of a tree.  It's good, maybe good enough for the task at hand, if it's also good at not calling telephone poles or corn stalks trees.  But if you're not just trying to recognize pictures, and you're actually trying to model how brains work in general, it's very interesting if your model shows the same kind of failure modes as an actual brain.  A neural network that can hallucinate convincingly might just be on to something.
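As a minimal sketch of testing beyond the happy path (the `safe_divide` routine here is a made-up example, not from any real codebase):

```python
import random

def safe_divide(a, b):
    """Toy routine under test: divide, but report failure instead of
    crashing on a zero divisor."""
    if b == 0:
        return None
    return a / b

# Happy path
assert safe_divide(10, 4) == 2.5

# Unhappy path: division by zero is reported, not raised
assert safe_divide(10, 0) is None

# Random gibberish, seeded so we know exactly what it was
random.seed(12345)
gibberish = random.random()
assert safe_divide(gibberish, gibberish) == 1.0
```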

Saturday, May 2, 2015

The invisible oceans

Life as we know it needs water.  Two conclusions that don't follow from that premise: that all life needs water, and that where there is water, there must be life.  Nonetheless, in our present ignorance, the best bet for finding other life is to find water.  Liquid water, that is.

Over the past few years there has been a steady stream of discoveries of likely liquid water in our solar system.  Jupiter's moons Ganymede and Europa probably each have more than Earth does, even though both are considerably smaller than Earth.  Enceladus (a small moon of Saturn) probably contains significant amounts.  Ceres -- classified over the years as a planet, then an asteroid, and now a dwarf planet -- also shows possible signs of an ocean.  Pluto and Charon might also.  We should know more about them in a few months as I write this.  In the case of Pluto and the moons, tidal flexing generates the heat that keeps the water liquid.  The case of Ceres is less clear, so from here on I'll restrict discussion to the moons.

What these masses of water all have in common, of course, is that they lie deep beneath the surface.  Tens of kilometers, at least.  Some, at least, are also salty enough to conduct electricity well, as evidenced by their interaction with magnetic fields.  It's not clear what kind of pressure they are under, or what their range of temperatures is, except that they are likely sandwiched between layers of ice, or maybe between ice and rock, or maybe in alternating layers of various forms of ice (yes, there is more than one kind).  They will thus be literally ice cold in at least some regions, though they are probably considerably warmer in other places.  They certainly receive no sunlight.

Could life evolve under such conditions?  Who knows?  Personally I'd be reluctant to rule it out.  We've found life in all kinds of unlikely habitats on Earth, and in any case we just don't know that much about how life develops.  So suppose it did develop on one of these moons.  What would it be like, and what would be our chances of encountering it?


The first question is whether there is enough in these oceans to get microbial life going.  The short answer: no idea.  We don't know very much at all about the chemical composition of these oceans, though we can make some informed guesses, and again we don't know very much at all about how microbial life on Earth developed, though we can make some informed guesses about that, too.

It's pretty clear that there will be various impurities in the ocean water, so there might well be potential for some sort of self-replicating, information-carrying polymer, similar to RNA, to develop.  There is an outside source of heat from tidal flexing, and there will be temperature gradients as a result [perhaps as much as 40K over a few hundred kilometers], so the laws of thermodynamics don't rule anything out.  Let's assume that microbial life of some sort can develop.

The path from single-cell organisms to colonies of single-cell organisms to colonies of single-cell organisms with different forms (but the same genetics) to something we may as well call a multicellular organism is reasonably clear, although at least in the case of life on Earth it seems to have taken a good long time to develop.

Since we're totally speculating, let's assume there are multicellular organisms analogous to our own sea creatures, but possibly very different in form.  Again, in reality there might be nothing at all, or only single-celled organisms, or some sort of microbe without clearly defined cells at all, or who knows what.

What kinds of features might these critters have?
  • Some sort of chemical sense analogous to smell or taste seems like a good bet.  It's almost a defining property of life that it will respond to chemical stimuli in some way or another.  Microbes do.  Animals of all sizes do.  Even plants and fungi do.  I wouldn't necessarily call a slime mold constituent detecting another's chemical signal "taste" or "smell" but ... some sort of chemical sense.
  • For similar reasons, a temperature sense seems likely.
  • Food webs, predator/prey relationships, mutualism, parasitism, commensalism, amensalism and any number of other relationships among organisms are pretty much inevitable when there is more than one kind of organism.
  • Some way of shuffling genetic material as with sexual reproduction.  A source of variability beyond random mutation can be useful in surviving long-term in harsh, ever-changing environments.  Even microbes do this to at least some extent.
  • Various ways of physically manipulating objects ... tentacles, pincers, pseudopods, maybe even something resembling hands
  • Some sort of nervous system for communicating signals, including sensations from the senses, from one part of the body to another
  • Sociality in at least some species.  We're assuming there are multicellular organisms, which is not so different from sociality at the cell level.  This is the next level: sociality among multicellular organisms.  Sociality requires some means of communication between organisms.  Chemical signals and sound seem like plausible candidates.
So far we have a world comprising organisms of various sizes and forms, some herding/schooling together, some hunting others, with the ability to grab things and move them around ... much like life as we know it, except quite likely totally different.

But then, we assumed many important premises based on experience with life as we know it, particularly cells, and we assumed that evolution would follow essentially the same rules as here.  The second assumption, at least, seems pretty reasonable, but who knows?


It's natural to think such an ocean would be dark and all its inhabitants blind, but maybe not.  Plenty of deep-sea creatures find it useful to have eyes and even to produce light.  Light-producing molecules are not that complex.  There are ones that require only carbon, hydrogen and oxygen, so we don't need to assume phosphorus or sulfur, just some source of carbon in the water, which is probably inevitable.

The evolutionary pathway to light and sight is not so clear, though.  On Earth there is a source of light independent of life and it's not surprising that eyes would evolve in the sunlit portion of the ocean, at least.  In naturally pitch-dark oceans, there would have to be some source of light, as a side-effect of something else, before light-detecting cells and organs become useful.

So let's assume it's dark.  Being visual animals, we might assume that that's a big deal, but maybe not.  There are plenty of other ways to get a good picture of the world without seeing it.  In particular, hearing will be important.

Sound carries well in water, and given that the whole world is constantly flexing, there ought to be at least some natural sources of sound.  Unlike the case of vision, it's almost inevitable that organisms that are reasonably large and able to move and to move things will end up making some noise doing so.  All this suggests it would be useful to be able to hear.  If it's useful to hear, it becomes useful to be able to make sounds, both for communication and possibly for active sonar.

So we have a dark, noisy and probably fairly cold place teeming with organisms of various shapes and sizes, and schools of this and that all trying to eat and not be eaten.  Will we ever see it?  All we'd have to do to see it is fly a probe a billion kilometers or so and have it dig through kilometers of ice.  Keep in mind that at the surface temperatures we're looking at, ice acts more like any other mineral than like the softish stuff you can chew (to the horror of your dentist).


Since we're speculating, let's say that something we would recognize as an intelligent species develops.  Are they likely to come visit?

Humanity went into space driven by (among other things) the curiosity inspired by the sun, moon, stars and planets.  Even if you don't buy that, you have to admit that we could at least see the sun, moon, stars and planets without any technological help.  Our hypothetical ocean-dwellers could live indefinitely with no clue that there was a solar system out there, or anything else at all.  They would have no direct way of knowing that the parent planet, or even the surface of their own moon, existed.

Even the existence of gravity might be the subject of intense debate.  These moons are relatively small.  The surface gravity of Ganymede, for example, is about 0.1g.  If you're considerably beneath the surface the influence of gravity decreases since you also have matter above you, but that's probably not a major factor at the scales we're dealing with.  If our own oceans are any clue, most things will be near neutral buoyancy.

Put that together and you have a general tendency for most living things to float around in an ocean layer dozens of kilometers deep (thick?) and thousands of kilometers around.  Some inanimate things would tend to drift, very slowly, toward the inner surface (that is, sink).  Others would tend to drift, very slowly, toward the outer surface (that is, float).  This would not be nearly so easy to sort out as the general tendency of things to fall quickly to the ground on Earth.

Figuring out that the world is round would be a significant accomplishment.  The major cues the Greeks used -- ships sinking below the horizon, lunar eclipses, the position of the noontime sun at different latitudes -- would not be available.  The most obvious route left is to actually circumnavigate the world.  And figure out that you did it.

I'm very reluctant to say "such and such would be impossible because ...", but I think it's safe to say that a number of things we take for granted living on the solid surface of a planet with an atmosphere transparent in many wavelengths would be a lot harder in these worlds.  On the other hand, if an inhabitant ever did make it to space, they'd probably have a much better intuitive feel for it than we do.



There's one major factor I haven't mentioned here, that's been cited as a reason that no underwater species could ever develop technology.  Fire doesn't work.  Without fire, a host of things become much harder, if not impossible, including metal extraction and heat engines (steam, internal combustion, etc.).  Underwater rocketry also seems like a stretch, though jet propulsion is not a problem (the difference is that jet propulsion uses the medium one is traveling through, while a rocket creates its own exhaust).

From an earthbound perspective, knowing about our technologies, it's easy to say what familiar technologies probably wouldn't work in such a world, at least not the way they work here.  It's harder, though, to say what unfamiliar technologies could work.  This makes it tempting to say things like "There's no way that life in Ganymede's oceans could contact us.  They wouldn't even have fire."

But we've had, depending on how you count, at least thousands of years to figure technology out.  Depending on how things develop, particularly the transition from single-celled to multi-celled organisms, our hypothetical counterparts might have had tens of thousands, or millions of years.  Or no time at all since they haven't gotten far enough yet.

So what's at least possible in such a world?  Here are some wild, sketchily-informed guesses:

Mathematics: "Can mathematics develop?" seems very much the same as "Can intelligent life develop?", if only as a matter of definition.  If it can't handle at least some form of mathematical thought, can we really call it intelligent? Let's assume that these sea creatures have learned not only to count, but to "think abstractly", whatever that means.  While we're at it, let's assume something we would recognize as a language.

Writing: It's unlikely that anyone is going to whip out a ballpoint pen and write on the back of a napkin, so we need to define "writing" a bit more abstractly.  What we're really after is a permanent means of recording language via discrete symbols, that is, in digital form.  This doesn't require much -- just some sort of solid material that can be manipulated and will hold its form.  It doesn't seem unlikely, for example, that there could be something resembling rope, which would enable something like the quipu.

Physics: If our critters have mathematical ability and curiosity, they will begin to notice and codify basic facts about their physical world.  Fluid dynamics would be an obvious subject of study, along with some aspects of thermodynamics, particularly at water/ice boundaries.  Electromagnetism, or at least magnetism, is not out of the question.

On the other hand, Newtonian mechanics might take quite a while.  We can ignore wind resistance and get reasonable answers for many problems.  Water resistance is a whole different matter, so a clear notion of inertia might be slow to emerge.  Modern physics -- particle physics, quantum mechanics, relativity, plasma physics, low-temperature physics and so on -- requires something like modern technology, of which more in a bit.

Chemistry: This will certainly be tricky.  You can't just pour something in a beaker or dump a solid chemical into a liquid solvent -- at least not without producing the solid chemical in the first place.  But who knows?  If there is life, there are chemical reactions going on all the time naturally.  Perhaps someone learns that a particular gland-y thing from one creature does funny things when you stick a bone-y thing from another creature in it, or that you can use thus-and-such material to isolate a kind of water that acts unusually, and eventually this becomes a systematic body of knowledge.  Not out of the question, and not as much of a leap as astronomy would be, but still doesn't seem like it would be a strong suit.

Biology: Large parts of biology -- cell theory, for example -- require some sort of microscope, but other aspects just require careful observations of other living things.  Evolution is an interesting example.  It's not clear what kind of fossil record there might be, though non-living things would tend to float or sink, but evolution fundamentally requires figuring out that there's a family relationship among seemingly different living things, and realizing that the world is old.  That's certainly not out of the question.

Materials science: Metallurgy requires a supply of metal, which would likely be hard to come by, but one could learn a lot about the properties of the materials around -- tensile strength, hardness, elasticity and even, with more careful observation than we need in our environment, density, heat capacity and such.

Cities: There doesn't seem to be any fundamental reason there couldn't be something we might call  agriculture, and floating cities, at least, forming around it.  Suppose again that there's some sort of rope-like material, and suppose something edible likes to grow on it.  It might then be natural to build largish structures and garden them.  This in turn would provide a reason to stay close instead of wandering off, and along come civilization and its discontents.

Fire: At some point a technological species needs to figure out how to do things that don't happen easily in the natural world.  For us, you could argue it was extracting metals sometime in prehistory, or maybe harnessing steam, or maybe something in between.  For an undersea world, creating an environment where things could burn would be not only a significant achievement, but a gateway to what we might consider industrial technology.  If they can make fire, then it's hard to think what human technology would be out of reach, because to make fire, you need to recreate an environment not too different from ours.  At the very least you need a bubble of some sort of oxidizing atmosphere.  Once that happens, pretty much everything that seems implausible on the list above becomes possible.


So who knows?  There doesn't seem to be anything in principle keeping life on some extraterrestrial ocean from being able to get to us.  It just requires a long chain of unlikely events.  But then, life is full of those.  If you roll the dice billions of times and you get to keep lucky changes around for the next roll, the extremely unlikely can become quite plausible.  In this case, our chain of events looks something like
  • A self-replicating molecule develops
  • Microorganisms develop
  • Multicellular organisms develop
  • Some of them develop what we might call intelligence
  • These develop a body of mathematical and scientific knowledge
  • From that, they develop what we would recognize as technological tools, including
    • Ways of storing energy and converting it to work
    • Probably some way of creating sizable bubbles of gas, and ways of working inside them
  • One way or another they find their way to the surface (which is much farther away than humanity has ever managed to dig into the Earth's crust)
  • And then they get in contact with us, somehow.
On the other hand, we know we're coming their way (if they're there).  It's going to be quite some time before we can send a probe to their habitat, but there's one other way we could meet up.  One of the reasons we believe that Ganymede and other worlds have water is cryovolcanism, that is, liquids coming to the surface through cracks in the crust (the "cryo" part is there because these liquids are a lot colder than the magma we're familiar with).

If water is coming up from the liquid interior of one of these moons, it might well carry something along with it.  With luck, it might be something like a tardigrade that can survive being dried out and frozen.  Or there might just be a fossil bed of life forms that might not have even made it to the surface alive.

Or, maybe, just maybe (and by "just maybe" I mean "almost certainly not, given all the things that would have to go right"), it might be a hardy explorer, wearing some sort of protective suit, struggling against the surface gravity that we would consider negligible, and wondering at the bizarre new world, and ... what's this thing coming at me from the ... what do you even call it without knowing "stars" and "empty space"?

Saturday, March 21, 2015

Fermi and the revenge of the machines

Previously, I argued that the odds of our actually directly detecting an intelligent species from another star system are very low, partly because of the vast distances involved, but also because of timing.  However, there is another possibility besides detecting such a species on their home world: maybe they'll come to us.

"Or maybe we'll come to them", you might counter, and maybe, eventually.  I'm going to leave that aside for now because, while we have some interesting theoretical ideas of how we might reach another star system, at this point they're just that, and the more realistic ones involve moving maybe a ton of payload to a nearby star in somewhere around a human lifetime, and score around a seven on the scale of difficulty I gave earlier.

The amount of energy involved in doing even that is staggering.  By comparison, the New Horizons probe that's currently nearing the Pluto system as I write this was launched on one of the larger rockets ever produced, received a significant gravitational boost from Jupiter, and has been traveling for around ten years.  To get to Proxima Centauri in a small number of decades, you'd need to be traveling around 2000 times as fast.

The energy required goes as the square of the velocity (until you get to relativistic speeds, where it gets much, much worse), so we're looking at something like 4 million times the energy of the rocket that launched New Horizons.  Put another way, the kinetic energy of a one-ton object moving at one-tenth the speed of light is comparable to the yield of a very large thermonuclear weapon -- on the order of a hundred megatons of TNT.  Actually accelerating such an object to that speed would take considerably more energy.
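These estimates are easy to check with a bit of arithmetic (a back-of-the-envelope sketch; New Horizons' speed of roughly 16 km/s is an approximation):

```python
# Back-of-the-envelope check of the energy estimates above.
c = 3.0e8                        # speed of light, m/s
v_new_horizons = 16e3            # New Horizons' speed, roughly, m/s
v_target = 0.1 * c               # one-tenth the speed of light

speed_ratio = v_target / v_new_horizons   # roughly 2000 times faster
energy_ratio = speed_ratio ** 2           # energy goes as v^2: ~4 million

mass = 1000.0                             # one metric ton, in kg
kinetic_energy = 0.5 * mass * v_target ** 2   # joules
MEGATON_TNT = 4.184e15                    # joules per megaton of TNT
print(speed_ratio, energy_ratio, kinetic_energy / MEGATON_TNT)
```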

However, one of the main points of the previous post was that some significant portion of intelligent species would have arisen millions or hundreds of millions of years ago, and another was that there were likely a great many such species scattered throughout the galaxy, not to mention other galaxies.  Maybe billions in our galaxy alone, using not-totally-implausible guesses. Put that together, and there might be a large number of species which have had plenty of time to reach us, and plenty of time to develop the technology to do it.  The timing that undermined our prospects of spotting other life directly actually works in our favor now.

So let's say that some technological species arose ten million years ago and 50,000 light-years away.  If they'd built a craft capable of traveling 1/200 the speed of light, or about 100 times faster than New Horizons, that craft could have reached us by now.  And maybe even have stopped.

Or, it could have reached any of a few hundred billion other stars.  Even if its builders had launched, say, ten craft a year for a million years, each equipped with a means of navigating to a distant star and then slowing down, the odds would still be about 10,000 to one against any of them reaching us.
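The arithmetic, for what it's worth (taking a low-end hundred billion stars):

```python
# Rough odds against a blind scattering of probes reaching any one
# particular star.
craft = 10 * 1_000_000           # ten craft a year for a million years
stars = 100_000_000_000          # a hundred billion stars, give or take
odds_against = stars / craft
print(odds_against)              # 10000.0 -- about 10,000 to one
```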


However, there's a scheme that's been floating around for quite a while that would dramatically increase the chances of something eventually getting to us: send self-replicating craft.  This is not something we could do right now but it's quite plausible that we could in the next century or so.  It doesn't seem too much to ask that some civilization, somewhere, with a million or more years of head start, could have done this.

It's an interesting scenario to contemplate.  Weight is critical at these speeds, so the craft will probably carry only the bare minimum it would need.  It would enter a new star system and find a solid planet or moon to set down on.  It would then start digging into the surface (assuming there's no one there to run across it and ask "Hey, what's this?") and gradually assemble raw materials.  From those it would assemble basic tools, use those to assemble more sophisticated tools, and eventually spacecraft parts, which it would then proceed to put together.  Some amount of time later it would have made a copy of itself, which would then take off for a nearby star system, while the original goes on building copies of itself ...


You'd want to be a little careful with this.  If you literally sent a copy to every nearby star system, that would include ones you've already visited (even the home world), and some of those would make their way back to where they started, and start making copies of themselves.  That is, you would get exponential growth, everywhere the probes visit.

The total number of craft might double every few centuries.  If every copy used a ton of raw materials, they would consume the mass of our Moon in about 70 generations of copying, or a few dozen millennia.  In a few million years, probes could eat every planet, planetoid and moon in the galaxy.  Or, at least, every unpopulated one.  Or, at least, the portion of the material of the unpopulated ones that was suited for making the craft.
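A quick sanity check on those numbers (the Moon's mass and the generation time are rough figures):

```python
import math

MOON_MASS_TONS = 7.3e19          # the Moon is about 7.3e22 kg
# Each copy uses a ton of material and the population doubles every
# generation, so after n generations roughly 2**n tons are consumed.
generations = math.log2(MOON_MASS_TONS)
print(generations)               # just under 66 -- call it 70

# At a few centuries per generation, that's a few dozen millennia:
years = 70 * 300
print(years)
```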

The polite way to send such probes would be to remember which places you'd already visited and only send copies to new places.  In technical terms, this results in a breadth-first search of the galaxy.  Instead of exponential growth you get cubic, that is, a steadily expanding radius of exploration.  In our scenario of craft traveling 1/200 the speed of light (and not taking too long to build copies), this would cover the whole galaxy in around 20 million years, starting at the edge, or 10 million starting near the center.
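The "polite" scheme is breadth-first search in the textbook sense.  Here's a toy sketch over a made-up neighborhood of star systems (the graph is pure illustration):

```python
from collections import deque

def explore(neighbors, home):
    """Breadth-first search: copies go only to systems no probe has
    visited yet, so every star is visited exactly once."""
    visited = {home}
    frontier = deque([home])
    while frontier:
        star = frontier.popleft()
        for nxt in neighbors[star]:
            if nxt not in visited:     # the "polite" check
                visited.add(nxt)
                frontier.append(nxt)   # a new copy sets out
    return visited

# A made-up neighborhood of six star systems
neighbors = {
    "home": ["a", "b"],
    "a": ["home", "c"],
    "b": ["home", "c", "d"],
    "c": ["a", "b", "e"],
    "d": ["b"],
    "e": ["c"],
}
print(sorted(explore(neighbors, "home")))
```

Because each system is entered at most once, the population of probes grows only as fast as the surface of the explored region, which is what gives the steadily expanding radius rather than exponential blowup.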

In sum, if, anywhere among the hundreds of billions of stars in the galaxy, someone had successfully launched a self-replicating interstellar probe 20 million years ago, one copy (or more) could have made it to the solar system by now.  A heady thought, to say the least.


One question that always bothered me about this scenario was "why?"  Even if the various probes could relay information back to the home world (and this is totally handwaving how one would produce a signal strong enough), the finite speed of light becomes a problem.  It's going to take a thousand years to get information back from a system a thousand light years away, and 200,000 from one edge of the galaxy to the other.  Why bother?  I mean, who am I to say how long a lifespan, or attention span, an alien species might have, or what its motivations might be, but still ... it's hard to see the point.

However, you don't have to set out to cover the whole galaxy in order to end up doing it.  Suppose we just wanted to explore our neighborhood of a few thousand stars?  A self-replicating probe would be cheaper and easier than trying to send a probe to every star individually.  In which case, why stop?  If we're interested in data from a star 50 light-years away, why not 60, or 100?  The easiest approach is to just let the probes keep copying.

A bit unsettlingly, it's even easier not to bother screening out already visited systems, that is, letting exponential growth continue unchecked.

In a slightly less scary variant, the probes somehow send messages to each other ("heartbeats", effectively) and only send copies to stars that aren't sending anything.  This way if a probe goes dead, another will eventually take its place and the galaxy will remain completely covered indefinitely.  Among other things, this means that if you're actively trying to get rid of probes, that may turn out to be somewhat difficult -- it only encourages them.
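A toy simulation of the heartbeat scheme (the ring of eight star systems is entirely made up) shows why such probes would be hard to get rid of: kill one, and its neighbors notice the silence and recolonize.

```python
def step(live, neighbors):
    """One round: every live probe sends a copy to any neighboring
    system that isn't broadcasting a heartbeat."""
    silent = set()
    for star in live:
        for nxt in neighbors[star]:
            if nxt not in live:
                silent.add(nxt)
    return live | silent

# A made-up ring of eight star systems
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}

live = {0}
for _ in range(4):               # four rounds covers the ring
    live = step(live, ring)
assert live == set(range(8))

live.discard(3)                  # a probe goes dead ...
live = step(live, ring)          # ... its neighbors hear the silence
assert 3 in live                 # and recolonize the system
print("galaxy remains covered")
```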


Once we get into galaxy-spanning schemes, we're looking at hundreds of billions of copies.  There are bound to be a few mistakes here and there, and those mistakes may continue to propagate.  The ones that do will continue to make copies, and some of those copies will be imperfect.  "Mistake" here just means "different from the original".   Such a mistake might be innocuous, or it might result in a less worthy probe, or it might even result in an improvement.  One way or another, the population will evolve over time.

So we have something that can use energy in an organized way, reproduce and evolve.  We may as well call that "life".  It's most likely not the same life form as the one that first built it on the original home world, but that seems like a minor point.  It's quite possible that there is life spreading out through the galaxy, or already occupying every corner of it, even while its creators are still confined to a single world, or no longer around at all.  And that life might well have reached us, or even be here now.

This makes the question of "why haven't we noticed" a bit more interesting.