Monday, November 29, 2010

Copernicus and revolutions

Before Copernicus, everyone thought that the earth was the center of the universe.  Then Copernicus, in De Revolutionibus Orbium Coelestium, said that planets, including the earth, revolved around the sun.  Thus did science triumph over tradition and superstition.

Well, that's the Short Attention Span Theater version.  It has the advantage of being short and memorable, and the disadvantage of not being particularly near reality.  Yes, Copernicus did write De Revolutionibus, and yes, it did have the earth revolving around the sun, but heliocentric theories go back at least to Aristarchus of Samos, and it took another 200 years after Copernicus for the idea to take really firm hold in the scientific community (a somewhat anachronistic term in itself, but never mind).

Much has been made of controversy with the Church over the theory, but that came later.  Copernicus published the book with the aid of his friend Bishop Giese and dedicated it to Pope Paul III.  Nor does De Revolutionibus usher in a fully-formed modern view of the universe.  Copernicus postulates eight celestial spheres, with the fixed stars in the outermost, each planet moving in a perfect circle.

Copernicus does not present new data that can't be made to fit with the older geocentric view.  He reanalyzes centuries of observations that had been explained by a fairly complex system of cycles and epicycles, explaining them by a somewhat simpler system of cycles and epicycles.  The infamous epicycles are still needed because the planets don't actually move in perfect circles.



Copernicus's work is often considered important because it regards the earth not as the center of the universe, that is, as a special, distinguished place, but as a part of it, a planet on an equal footing with the other planets.  This notion that we do not occupy a special place is central to modern cosmology, extending even to the notion that the particular universe we occupy is not necessarily special, despite possessing such apparently unlikely features as solid matter and cosmologists.  In this view of science as the great dethroner of humanity, Darwin delivered a final insult by arguing that we are not even special among animals, but rather Just Another Ape.

There is merit in this view, even though (or maybe because) the notion that we are special creatures in a special place remains quite popular.  However, the Copernican shift can also be seen as one of a long line of cases where a simple and reasonable assumption turned out not to be true.  For example
  • That the earth is not flat but an enormously large globe (enormous on a human scale, at least)
  • That the earth revolves around the sun and not the other way around
  • That the other planets are not points of light, but worlds at least somewhat like ours, most with their own moons
  • That the stars are not points of light, but suns like our own
  • That the Milky Way is a vast collection of stars, of which our own sun and the stars we see at night are part
  • That "spiral nebulae" also consist of large numbers of stars and are indeed galaxies like our own.
  • That the planets do not move in perfectly circular orbits, but ellipses
  • That there are many, many objects in the solar system that are not planets (in the famous case of Pluto, something we had considered a planet looks to be better described as something else)
  • That not everything in the solar system moves like a planet; for example, some objects move from an inner orbit to an outer one and back over time.
  • That what appear to be single stars are often systems of two or more stars
  • That the fixed stars are actually not fixed, but moving
  • That stars are not eternal, but are born and die
In many of these cases, but not all, the new view does make our position less special.  The driving force behind these shifts, however, isn't a desire to make humanity less special, but a desire to find simple, coherent explanations that fit the facts.  It's striking that many, but again not all, of the shifts listed above are toward a more uniform view -- the earth is of a kind with the other planets, the sun with other stars, the Milky Way with other galaxies.  Such a shift makes our place less special, but more as a side effect than as a specific aim. 

The most uniform explanation is not always right, though.  Stars like our sun are relatively rare.  Most stars are significantly larger or smaller, hotter or cooler.  Single-star systems are a majority, but stars in multiple systems constitute a significant minority.  Of the planets in our solar system, only one has significant liquid surface water.  There are many more Kuiper Belt objects than proper planets.  Even neglecting KBOs, asteroids, comets and such, there are many more moons in the solar system than planets.  Putting it all together, a rocky planet with liquid surface water orbiting a single star is almost certainly fairly rare, even if planets in general are abundant.



In the early 20th century it was discovered that distant objects are moving away from us, and the more distant the object, the faster it is moving.  The effect is uniform in all directions, within the limits of measurement [once you subtract out the dipole anisotropy -- like pretty much everything else, we are moving slightly, relative to the general expansion of the universe -- but that's a subtle effect and wasn't discovered until much later -- D.H.].  The conclusion is obvious: We are at the center of the universe, a unique spot whence everything else recedes.  This conclusion was rejected in favor of one which does not require us to sit in a special place: The entire universe is expanding and, other factors being equal, everything is moving away from everything else.  Again, this is the more uniform view, and evidence has borne it out over the decades.
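
To see why the uniform-expansion reading holds up, it helps to notice that a uniform expansion looks exactly the same from every vantage point.  Here's a toy calculation of my own (the positions and the "Hubble constant" are made-up numbers, purely for illustration): if every galaxy's velocity is proportional to its position, then whichever galaxy you pick as the observer, all the others recede at speeds proportional to their distance from it.  The pattern singles out no center at all.

    # Toy sketch: uniform expansion looks the same from every galaxy.
    # All numbers here are arbitrary illustration, not real data.
    import numpy as np

    H = 0.07  # made-up "Hubble constant" (velocity per unit distance)
    positions = np.array([0.0, 1.0, 2.5, 4.0, 10.0])   # hypothetical 1-D galaxy positions
    velocities = H * positions                          # expansion: speed proportional to position

    for observer in range(len(positions)):
        rel_dist = positions - positions[observer]
        rel_vel = velocities - velocities[observer]
        # From any galaxy's viewpoint, recession speed is H times distance.
        assert np.allclose(rel_vel, H * rel_dist)
        print(f"observer {observer}: sees v = H * d for every other galaxy")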

How might I add that to the list above?
  • That the universe is not static, but expanding?
  • That our solar system is not the center of the universe, but just another part of it?
The first, I think, more closely reflects the actual development of thought.  The second fits the Copernican revolution model, but only by setting up a strawman.  By the time the Hubble expansion was discovered, it was already a given that our place is not special, so much so that what might have been taken as game-changing new evidence of our special place was quickly interpreted as just the opposite.


Pitting rational science against irrational human egocentricity makes a good story, but there's a more mundane reading to be found:  Science likes uniformity, and it likes uniformity so much that a nicely uniform explanation of known facts will eventually push aside our natural, egocentric concepts.

Put briefly and in retrospect, many of the shifts listed above seem blindingly obvious.  Why assume that the sun is different from other stars?  Why assume that ours is the only galaxy?  But this forgets the flip side of uniformity:  The most natural assumption is that things which appear different really are different. 

The earth appears to us as a huge surface with many features.  The other planets appear to us as tiny lights in the sky.  The sun is a blindingly bright ball.  The stars are more tiny lights in the sky.  The Milky Way is a huge swath through the sky.  Spiral nebulae are tiny, in almost all cases much too small to be seen by the naked eye.  And, in the most famous case, planets really do appear to move around us in the sky, with occasional backtracking.  We're down here, they're up there.  The geocentric view, however egocentric it might be, was also the most natural and prudent until a more compelling story came along.

Tuesday, November 16, 2010

Navigating underground

I've always loved subways/undergrounds.  Even packed cheek-to-jowl into an un-air-conditioned Circle or District line car in the middle of the (then) hottest summer on record, in a suit, I still loved the posters and ads, the station architecture and decor, the endless parade of passengers, the nearly endless escalators, the tabloid news stands, the surprising variety of little shops tucked away ... even the names of the stations, the sound of the wheels and the brakes, the generally indecipherable announcements and the sheer urban gothiness of the tunnels themselves.  Sort of like an old-fashioned carnival funhouse ride but way, way cooler.

But there is another, more practical reason that I love subway systems: They make navigating a strange city nearly foolproof.  You only need to know two things: What stop your destination is at, and how to get to and from the system.  If you're just seeing the major sites, both of those are generally dead easy: the names of the stops are invariably listed in the guide book, and guess what -- stations tend to be built right by major landmarks.  Even if you're not visiting a major landmark, chances are whomever you're visiting will tell you the name of their station and the same technique will work.

All you really have to do is follow the greatly-simplified system map, make the right transfers and avoid Baker Street.  Unless you're on a tight schedule, you can essentially treat the whole system as a single point.  Your route is Point A -- subway -- Point B.

Such conceptual simplicity is so handy that one can spend months in many cities without learning more than the bare rudiments of the above-ground layout.  This is not entirely a good thing.  Apart from missing the richness of sights to be encountered by straying into this side-street or that arcade (but not that one; the less said about it the better), there are surprisingly many cases where it would be faster just to walk.

An underground transit system has a cognitive character all of its own.  Traveling above ground, you can generally see where you're going and gauge turns and distances reasonably well.  Underground, after several twists and turns of stairways and corridors, lurching starts and stops, and a few subtle or not-so-subtle bends, I personally find I might as well be playing pin-the-tail-on-the-donkey.

And yet the human brain, adapted to navigating outdoors and on foot, seems to cope reasonably well with the time-passes-and-then-you're-elsewhere nature of subway riding, even when the mental map of the territory above is largely blank.  A mental map developed solely from underground transit will have significant distortions, of course, but these don't seem to hurt much.  Once the real landscape becomes familiar, this more accurate view tends to supplant the earlier one (at least in my experience) and the below-ground journey starts to make a bit more sense.

The brain is used to meshing different sets of information, so perhaps this isn't surprising, but I get the definite feeling that more is going on here beneath the surface (so to speak) than one might think.

Thursday, October 21, 2010

Parts of speech

We all learned about nouns, verbs and their friends in elementary school.  A typical list is
  • noun
  • pronoun
  • adjective
  • verb
  • adverb
  • preposition
  • conjunction
  • interjection
(See here for more detail)

These are generally useful categories, but if you're really trying to figure out a language, you have to slice a little finer.  For example there are
  • Transitive verbs (ones that take an object, like hit)
  • Intransitive verbs (ones that don't take an object, at least not typically, like sit)
  • Modal auxiliary verbs (like can, could, may and might, which some dialects can stack up into lovely constructions like might could and may can)
  • Phrasal verbs, like get up
  • Countable nouns, like tree
  • Uncountable (or mass) nouns like water
  • Pluralia tantum (always-plural nouns -- singular plurale tantum) like scissors
  • Comparable adjectives, like tall
  • Uncomparable adjectives, like dead, or NP-complete
  • Determiners, including articles like the or an, but also adjectives like some, any, all, this or that
  • Comparators, like more, most, less and least
Such distinctions go some way towards predicting what words can and can't be used together.  For example, you don't normally use comparators like more with uncomparable adjectives:
  • Smith is more famous than Jones.
  • *Graph isomorphism is more NP-complete than 3-Sat.
(The * at the beginning indicates something that wouldn't normally be said.  I'm fudging with "wouldn't normally be said" instead of "incorrect" or "ungrammatical" as it is notoriously easy in general to invent contexts in which a given construct would make sense.)

This being language, the boundaries aren't perfectly crisp.  Mass nouns don't generally appear as plurals, but there are a few exceptions, for example
  • When referring to some standard serving, as in I ordered three waters.
  • When referring to different types of a given substance, as in She preferred the wines of Bordeaux.
Some nouns can work both ways, for example
  • Hand me a brick.
  • We need five tons of brick.
And let's not even get into whether it's OK to say "more unique" even though unique is supposed to be an absolute and therefore uncomparable.

The word that got me thinking of all this was summit, as a verb meaning "to reach the summit of".  As the "of" would suggest, this verb is generally transitive -- it takes an object, as in Apa Sherpa summited Everest for the twentieth time (which he actually did, last May).  However, the object is often omitted, as in Apa Sherpa summited for the twentieth time.  In contexts where this would be said, it would be abundantly clear that Everest was the peak in question.  In particular, it doesn't matter how many other peaks he might have climbed how many times.

So is summit then acting as an intransitive verb, or a transitive one with an implied object?  I tend towards the latter, as would most grammarians, I believe.  But what about more common cases like sing?  In I sang, there is no implication that I sang any particular song, so one would think sing is acting intransitively.  But I must have been singing something.  Is it really acting transitively, but with an implied, unspecified object?  At some point, such qualifications cease to pull their own weight.  As the man said, volleyball is technically racketless team ping-pong, played with an inflated ball and raised net while standing on the table, but what does that buy us?

What interests me here is how grammar, which is by definition pure syntax, seems unable to stay cleanly separated from semantics.  For example, some mass nouns resist the plural
  • * I would like three neutroniums.
  • * He was a connoisseur of neutroniums.
In the first example, one does not serve neutronium.  In the second, there is only one kind of neutronium.  How would we detect such errors?  I would think the process is something like
  • In a construction like three neutroniums, if the object is a substance, we expect it to mean a particular sort of container full of the substance.
  • But that doesn't make sense in the case of neutronium.
In that view, the syntax is fine and the error is semantic.  Mass nouns, then, are syntactically nouns, but ones whose plural forms have particular semantic features.  Similarly, whether a verb is used transitively or intransitively is a syntactic distinction, but whether there is an implied object is a semantic concern.
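
Being a compugeek, I can't resist sketching that two-step check as toy code.  Everything below -- the little lexicon, the feature names, the idea that "servable" and "has varieties" are simple flags -- is invented purely for illustration; real semantics is nowhere near this tidy.

    # Toy sketch of the two-step check described above.  The lexicon and
    # its feature names are made up for illustration only.
    LEXICON = {
        "water":      {"mass_noun": True,  "servable": True,  "has_varieties": True},
        "wine":       {"mass_noun": True,  "servable": True,  "has_varieties": True},
        "neutronium": {"mass_noun": True,  "servable": False, "has_varieties": False},
        "brick":      {"mass_noun": False, "servable": False, "has_varieties": False},
    }

    def plural_plausible(noun: str) -> bool:
        """Syntactically any noun can take a plural; the question is whether
        some reading (servings, varieties, or plain countable) makes sense."""
        entry = LEXICON[noun]
        if not entry["mass_noun"]:
            return True  # ordinary countable noun: no problem
        # Mass noun: the plural only works via a servings or varieties reading.
        return entry["servable"] or entry["has_varieties"]

    for noun in ("water", "brick", "neutronium"):
        verdict = "plausible" if plural_plausible(noun) else "semantically odd"
        print(f"three {noun}s: {verdict}")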

Except that "object" is a syntactic concept.  One way of reconciling this is to posit that the syntactic form Apa Sherpa summited, for example, is somehow transformed into the form Apa Sherpa summited Everest, with Everest as the object.  The choice of "transformed" here deliberately suggests transformational grammar, though I'm not sure that's completely appropriate.

Another would be to posit that the form Apa Sherpa summited gets transformed into some internal structure, in which the concept represented by summited requires something acting in the semantic role of "thing which is summited", which we may as well call an "object", albeit with some risk of confusion.  This putative internal structure would be describable in words, for example Apa Sherpa summited, or He summited, or Apa Sherpa summited Everest, or Everest was summited by Apa Sherpa and so on, but it would be an essentially different structure from any of those sentences.  As I very dimly understand it, this is more along the lines of cognitive grammar.

Thursday, October 7, 2010

How much do we know?

The question here is not how much does humanity know collectively, or how much do we know about some given topic compared to how much we don't, or what portion of things can we reasonably say we "know" as opposed to believing or being "fairly sure" or such.  Those are all interesting questions, but what I'm after here is more literal.  How much does a typical human being know, by some objective measure?

To get the flavor of the question, it has been estimated that the average high-school graduate knows about 40,000 vocabulary items, or listemes.  A listeme is a word, word part or collection of words that you have to memorize in order to understand, as opposed to something you can understand by breaking it into parts you already know. For example
  • There are two listemes in "listemes": listeme itself and the plural marker -s.  If you understand both of those, you can understand their combination [Or three: list, -eme and -s, if you're a linguist and familiar with morpheme, phoneme and such -- see below -- D.H.].
  • Typical acronyms and such are listemes: USA or LOL, for example, even though the parts they stand for are well known, because you have to know which words the letters stand for.
  • Idioms are listemes.  Knowing flying and saucer is not enough to know flying saucer.
  • Proper names are listemes.  You have to learn that Muskegon is a city and that Michael Jordan is a former NBA player, even if you already know that Michael and Jordan are names.
  • To some extent, different senses of words count as different listemes.  Knowing that you can eat off a plate doesn't tell you how to plate something in gold or what it means for a batter to step up to the plate.
  • Listemes are somewhat subjective.  Someone well-versed in Latin might see intermittent or conjecture as made up of simpler parts, while for most of us they're one listeme each, and of course different languages have largely different listemes.
Each listeme binds a largely arbitrary sign to a meaning.  At a bare minimum, then, our typical high school grad knows 40,000 items, however much knowledge an item might represent.  Now, I make no pretense of knowing how the mind really represents such things, but the title of this blog is Intermittent Conjecture, so it seems that by a miraculous coincidence I've left myself room to speculate.

I would guess that typical listemes are associated with bundles of memories and their relations to other memories.   For example, plate might perhaps conjure up images of typical dinner plates and memories of eating and setting the table and such; images of plated items one may have encountered or a representation of the plating process; images from a baseball game with a batter in stance or a runner sliding into home.

Similarly to how words may be defined with other words, these bundles of images will typically overlap.  A memory of a dinner plate may include an image of a table, or of eating, making "something you put on the table" or "something you eat off of" natural, if incomplete, answers to "What's a plate?"

I've used "memory" and "image" fairly interchangeably here, but I suspect that the images that concepts are built on are nothing like fully detailed pictures or movies.  Rather, they're highly abstracted, with only the relevant features retained.

By this line of speculation, those 40,000 listemes might represent 400,000 or 1,000,000 or more images, grouped into concepts and with arbitrary signs attached.  There is much, much more to the picture, of course, but again we're just trying to get a rough estimate of what's in a typical brain.

Words are only one window into the contents of the mind.  We also know things we can't easily put into words, which is one reason I had wanted to talk about different kinds of knowledge and formal vs informal education.  We learn to walk instinctively, and so it's much harder to characterize what sort of things one must "know" in order to walk, yet if we can learn something, there must be some kind of knowledge involved.  Likewise for other skills like skiing or playing the trumpet, which we learn consciously and in many cases formally, but without necessarily learning a lot of vocabulary in the process.

We can also make associations unconsciously and non-verbally.  When the pioneer Lucky Bill in the post I linked to above looks off and sees bad weather brewing in the clouds, he probably doesn't have words for what he's sensing, but it's definitely something he's learned and knows, just as he knows how to let his horse know it's time to go.  This knowledge may well be built on the same sort of memories and images that we pin language onto, but it's not readily accessible to language.

If we take a mental image -- an abstracted memory -- as the basic unit of knowledge, with images grouped into concepts which may or may not have language attached, then it seems plausible that an adult human could have millions or tens of millions of such images.  We must also allow some capacity for storing the relations among the various images, concepts, signs and so forth, but such "metadata" tends to be much smaller than the data it helps organize (see this post on the other blog for more on that).

Being a compugeek, the handiest objective measure of information I have is the byte.  Leaving aside that images may differ widely in size and taking an image to be on the order of a megabyte -- a completely wild guess which may well be off by orders of magnitude -- that would put our mental storage capacity on the order of terabytes or dozens of terabytes.
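
For what it's worth, here is the back-of-the-envelope arithmetic behind that guess, with every number a wild assumption (the images-per-listeme figure and the factor for non-verbal knowledge are pulled out of thin air):

    # Back-of-the-envelope guess; every number here is an assumption.
    listemes = 40_000              # vocabulary of a typical high-school grad
    images_per_listeme = 25        # suppose each concept bundles a few dozen images
    nonverbal_factor = 10          # skills, faces, places... knowledge not tied to words
    bytes_per_image = 1_000_000    # ~1 MB per abstracted "image" (pure guess)

    images = listemes * images_per_listeme * nonverbal_factor
    total_bytes = images * bytes_per_image
    print(f"{images:,} images -> about {total_bytes / 1e12:.0f} TB")
    # 10,000,000 images -> about 10 TB, i.e. "terabytes or dozens of terabytes"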

Until fairly recently, that was a lot of storage, but these days it's not a staggering figure.  As far as putting together something of the same order as a human brain, we may just now be reaching a necessary, but not necessarily sufficient, technological milestone.

I'm happy to learn that the wild stab in the dark given above turns out to be reasonably in line with other wild stabs in the dark.  See for example this Google Answers page (I didn't have a lot of luck tracing this back to the literature, but since it's all guesswork I'm not going to worry about it).

Thursday, September 23, 2010

Jockomo Jockomo

I've been listening to Our New Orleans, a post-Katrina benefit album of some of New Orleans' best musicians, which of course got me to pondering a venerable question.  Just what does Iko Iko mean?

A little poking around reveals that there are a great many versions of the song Iko Iko itself (no surprise) and that the first two were put out by James "Sugar Boy" Crawford (under the title Jock-o-mo) and the Dixie Cups (under the title Iko Iko). Neither of these makes any claim of originality.  Both Crawford and the Dixie Cups freely admit that they merely repeated what they'd heard, Crawford from Mardi Gras Indian chants, the Dixie Cups from their grandmother (who in turn probably heard it at Mardi Gras).  This much is pretty clear, but little else is.  In particular, what do the words mean and, prerequisite to that, what are the words?

The second question can only be answered approximately, since the song has been covered so many times, often (probably nearly always) by performers with little or no idea of its possible meaning.  This in turn has led to disputes on the order of "Is it chockomo, jockomo, jock-o-mo or what?" "Is it Iko or Aiko?" "Is it wa na ne or ah na ne?" and so forth.  Keeping in mind that individuals' accents and enunciation may vary and that language itself is fairly shifty, a reasonable rendition of the mystery lyrics is:
Hey now, Hey now
Iko, Iko, an day
Jockomo fee no wah na nay
Jockomo fee na nay
Interpretations vary. One is:
Code language!
God is watching
Jacouman causes it; we will be emancipated
Jacouman urges it; we will wait.
Another is
Hey now! Hey now!
Listen, listen at the back
Jocomo made our king be born
Jocomo made it happen.
What's to pick between them?

Explanations are also offered for pieces here and there. Who or what is "Jockomo"? No less an authority than Dr. John says Jockomo is a jester. A linguist familiar with Native American trade languages says that chockema feena means "very good" in a now-extinct jargon. What does Iko Iko mean? A Ghanaian linguist says that Ayeko Ayeko is found in a West African chant and means "Well done, congratulations".

Why would it be so difficult to track down the meaning of a song heard and sung by millions? For a start, New Orleans is one of the most multicultural cities in the world, let alone the United States. As Dr. John says, it's a place where "Nothing is purely itself but becomes part of one funky gumbo." Add to that the complex nature of the ingredients themselves — Creole culture, Haitian culture, French and other European cultures, hundreds of West African and Native American languages/dialects, and so forth, many lost or only recently recorded — and it seems almost inevitable that origins become hard to trace.

It's clear from even casual comparison that the song has deep and direct African musical roots. This again is hardly surprising for New Orleans, but some cases are more obvious than others (Louis Armstrong's classic What a Wonderful World seems less obviously African-influenced). It's also pretty clear that the rhythms came to the Mardi Gras Indians by way of the Caribbean, in particular Haiti, whose Kata rhythms are clearly similar (compare this and this, for example). The origins of a tune aren't necessarily the same as the origins of its lyrics, but it's not implausible that the Mardi Gras Indians' chants are also Caribbean in origin.

The two translations given above, different though they are, both accord with that general backdrop. The first, based on notes unfortunately lost in the Storm along with so many other cultural treasures, uses a mixture of Yoruba (West African) and Creole French.

The second translation is better documented, at least, and is pure Louisiana Creole:
  • akou(t) = French écoute = listen
  • an deye = French en derrière = in back
  • Jockomo is a name, possibly Giacomo, possibly "little Jacques"
  • fi = French fit = made
  • no wa = French notre roi = our king
  • ne = French né = born
  • the particle na, being peculiar to Creole, makes it "to be born"
This all seems reasonable enough, particularly since the Creole words given all exist in other sources, the words are in the right order and the meaning, while a bit puzzling, is not a complete word salad.  But while the general drift seems reasonable, I'm not entirely convinced.

For example, while "Hey now, listen all the way in the back!" makes sense and fits with Mardi Gras chiefs facing off and taunting each other, wouldn't the accent on akout be on the last syllable, not the first?  More troubling, though, is the use of the phrase Jockomo feena nay in other places, such as the Wild Magnolias' Brother John is Gone/Herc Jolly John (which is what brought us here in the first place):
Jockomo feena nay, Jockomo feena nay
If you don't like what the Big Chief say
Then jockomo feena nay!
This fits better with another gloss that's been given: "Joker, kiss my ass" (or similar). The Magnolias also exhort the audience to "make Jockomo any way you want", which casts a little doubt on Jockomo being a personal name.

As to the Yoruba/Creole interpretation, I have no idea, except that "code language" is probably not meant as a literal interpretation, rather that "Hey now" (or "Ena" or whatever) is a code meant to call people together. Which seems, um, probable. Again, though, "Jacouman will emancipate us" doesn't seem a likely riposte to someone who doesn't like what the Big Chief says.  On the other hand, if the original meaning had been lost, more likely in the case of the Yoruba hypothesis, then perhaps a now-nonsensical string of syllables could be turned into a taunt simply by use as such.


So, what to make of all this?  Until someone can come up with something more definitive, like a written record from the 19th century, we're really reading tea leaves.  I find the Creole idea the best of the lot, but I wouldn't want to say it's "the" definitive meaning.

More interesting, though, is the way that the funky gumbo, where nothing is purely itself, leads us on a fascinating journey of guesswork with only a provisional, incomplete resolution.  That's New Orleans for you.




On the nature of change in science

Hmm ... wonder if anyone's ever tackled this subject before ...

These days, schools across the nation teach a scientific theory that has been known for decades to be fundamentally wrong, while the much more accurate theories that supplanted it are generally only mentioned briefly outside advanced courses aimed at specialists.  How can this scandal go on?

I'm speaking, of course, of Newtonian mechanics, with its tendency of a body at rest to remain at rest and an equal and opposite reaction for every action.  The crowning achievements of 20th-century physics, namely relativity and quantum mechanics (QM), arose from discoveries that the predictions of Newtonian mechanics simply don't hold under certain conditions (now if only they could be made to play nicely with each other).

Granted, the conditions in question are extreme.  Quantum effects generally don't matter outside submicroscopic scales, and relativistic effects are only easily noticeable at extremely high speeds or in extremely strong gravitational fields.  Nonetheless, the effects are of more than purely scientific interest.  The GPS system, for example, could not have been built without knowledge of quantum effects (for the chips in the satellites and receivers, for instance) and relativistic effects (which cause the highly accurate clocks, needed in order to pinpoint location, to run slightly faster in orbit than on the ground).
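
For the curious, the size of that clock effect can be estimated with a couple of lines of first-year physics.  This is the standard back-of-the-envelope version, not the actual GPS engineering analysis, and the orbit figures below are approximate:

    # Rough first-order estimate of the relativistic drift of a GPS satellite clock.
    # Orbit numbers are approximate; this is an estimate, not the real design math.
    G_M = 3.986e14        # Earth's gravitational parameter, m^3/s^2
    c = 2.998e8           # speed of light, m/s
    r_ground = 6.371e6    # Earth's radius, m
    r_orbit = 2.66e7      # GPS orbital radius (about 20,200 km altitude), m
    seconds_per_day = 86_400

    v_orbit = (G_M / r_orbit) ** 0.5                    # circular orbital speed
    gr_gain = G_M * (1/r_ground - 1/r_orbit) / c**2     # weaker gravity: clock runs fast
    sr_loss = v_orbit**2 / (2 * c**2)                    # orbital speed: clock runs slow

    drift = (gr_gain - sr_loss) * seconds_per_day
    print(f"net drift: about {drift * 1e6:.0f} microseconds per day")  # roughly +38

A few dozen microseconds a day may not sound like much, but light covers about 30 centimeters in a nanosecond, so an uncorrected clock would throw positions off by kilometers within a day.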

So why do we cling to this outmoded theory?  Simple.  It gives the right answers in the kind of cases most people will encounter.  How fast would that car have to have been going to have skidded for the distance it did?  Newton can handle that one.  What are the stresses on the deck of that bridge?  Newton can deal with it.  Why do the daily tides rise and fall?  Newton himself did the numbers on that one.  Why does the orbit of Mercury precess just a wee bit more than it ought?  Um, actually you need general relativity for that one.
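
(The skid-mark question, for instance, is a one-liner: equate the car's kinetic energy to the work friction does over the length of the skid.  The friction coefficient and skid length below are invented for illustration.)

    # Newtonian skid-mark estimate: (1/2) m v^2 = mu * m * g * d  =>  v = sqrt(2 * mu * g * d)
    g = 9.81     # m/s^2
    mu = 0.7     # assumed tire-on-dry-asphalt friction coefficient
    d = 40.0     # measured skid length in metres (made-up value)

    v = (2 * mu * g * d) ** 0.5
    print(f"initial speed: about {v:.0f} m/s ({v * 3.6:.0f} km/h)")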

For a new theory to take hold, it has to be more than new.  However much its mechanisms may differ from those of the old theory -- and QM and relativity differ radically from Newton in that respect -- it must still explain the same facts that the old theory explained.  Thus the correspondence principle of QM, which states that QM and classical (Newtonian) mechanics give essentially the same results when large enough numbers are involved.  Given that there are stupefyingly many atoms in anything we can actually see or touch, it's not hard to encounter numbers large enough for the correspondence to hold.  In fact, it generally takes work to narrow things to the point that QM comes to the fore.

A new theory also has to explain some things better than the old theory.  For example, QM explains why subatomic particles don't behave completely like ideal Newtonian particles and relativity explains why planets don't quite exactly follow the paths that classical mechanics predicts.

New theories typically keep most of the concepts of the theories they replace, but often generalize them or interpret them in new ways.  For example, the conservation laws concerning quantities such as energy, momentum and angular momentum, which were derived from Newton's laws as classical physics developed, play a central role in QM.  Newton's idea that bodies travel in straight lines in the absence of outside forces becomes Einstein's idea that a body in orbit, for example, is traveling in a path that is (locally) straight, but in curved space-time.

Many concepts make it through unchanged, for example the concept of things having mass, or charge, or being able to move.  In fact, most concepts will have to remain unchanged.  The whole field of physics assumes that there is a physical world with space and time, that it's possible to conduct experiments and get reproducible results, and so forth.  These might seem too trivial to mention, but given the sort of things that QM and relativity do revisit, for example to what extent things can have a definite location or whether it's possible to say two things happened at the same time, no concept seems too trivial to count.

Even at its most radical, science is fundamentally conservative.  An established theory, even one with known problems, is assumed to hold until there is compelling reason to adopt a new one, and even in that case the old theory may well remain useful.  I've used physics here as a running example, but the same holds true in any scientific field.

So why do we continually hear about revolutionary advances and theories being overturned?  There are probably several reasons:
  • The press needs good, dramatic story lines because that's what we, its audience, want.
  • It's natural to focus on what's changed as opposed to what's still the same.  Even an incremental change at the margins is a dramatic change if you only focus on the margins.
  • There's a lot of science going on at any moment.
  • Every once in a while something big really does come along.
All of this seems mostly harmless, so long as it doesn't give the impression that the world at large is liable to change drastically overnight.

Monday, September 13, 2010

Learning: Formal and otherwise

In the previous post, I tried to paint (and then poke at) a stereotypical picture of "book learning" vs. "real-world learning," also known as "street smarts" (except there aren't really any proper streets where Bill lives).  Which kind of learning is better?  Depends on which you think you have more of, of course.

Cognitive science is a well-established discipline with many interesting results on learning and other activities of the mind.  One of its most significant results is that we don't have a single, general learning capacity, but a variety of learning mechanisms.  Learning to ride a bike is different from learning a language is different from learning people's names is different from learning calculus, etc.  There is good experimental evidence to support all this.

In the typical "book learning"/"real-world learning" dichotomy, formal education is held to be narrow and divorced from the world at large.  But formal learning is not a monolith.  Different subjects require different combinations of lecture, research, rote memorization, structured practice, unstructured practice and so forth.  Teaching calculus well is different from teaching the oboe well is different from teaching experimental chemistry well is different from teaching Shakespeare well.

But what is formal education, anyway?  Does it have to take place in a classroom or for course credit?  Coaching a sport well is a highly structured exercise with its own terminology and a well-developed body of theory and practice.  Likewise for apprenticing to a trade.  The very fact of a recognized trade implies a set of rules and conventions -- forms, in other words -- to be followed.  Formality is about such structures, not the particular venue for learning.

Even taking a broader view of formal learning, though, there is plenty going on outside those bounds.  Learning one's first language, or one's culture, or whether one likes bleu cheese, or the way to the grocery store, or the faces and names of friends and family, or how to walk -- these all happen even without codified rules or explicit teaching, and each has its own character (though learning language and learning culture tend to be closely intertwined).

To the extent it can even be made clearly, any distinction between formal and informal learning is exceedingly coarse-grained compared to the mosaics that are actual minds and the intricate subdivisions within each category.  Ironically enough, it's science, putatively cold and reductionist, that has developed and provided support for this basic insight.

Thursday, September 9, 2010

The pioneer and the dude

The American West, not so long ago in the big picture: Into town rides a dude, that is to say, a city-dweller from the East.  Call him William.  You can spot him a mile away.  He's dressed funny -- a dark suit in the heat of the day, a hat that'll blow off with the first good gust, polished shoes that won't look so good once he steps off his horse.  Which he can't sit on right, anyhow.  Probably won't last a week.

Watching from a distance is an old hand.  Call him Lucky Bill, Lucky because you need to be a bit lucky to have made it this far.  You couldn't necessarily pick him out in a crowd.  In fact, he and his horse look like part of the landscape.  Lucky Bill looks off to the west for half a second.  Weather coming in.  Better get going.  With a low clicking sound and a subtle movement he tells his horse to move.  The horse already knew to go, maybe from a shift in weight, maybe from some other cue.

Lucky's route home takes him right by William.  As they pass, each has one thought of the other: "How ignorant."

Each has a point.  Has Lucky heard of Ovid, or Milton?  Can he even read?  Can he tie a proper Ascot?  Does he even know the name Beau Brummell?  Put him in the middle of any dinner party in New York and he'd be a curiosity at best.

But of course, New York is a long ways away.  We're on Lucky's turf, and here you need to tie a lasso, not a necktie.  Not much use for Milton and Ovid unless they can help keep a herd from getting spooked.  Better to stick to basics, like how to split wood and build a good fire.


In the proper context, neither William is an ignoramus; each is an expert with extensive knowledge gained from years of experience.  Outside that context, however, it's a different story.

Except that Lucky Bill is just William the dude a few years on.  It wasn't easy, and yes, there was a good bit of luck involved, but the raw greenhorn in the funny get-up was quick enough on the uptake to make a go of it.  His hands are calloused now, his face weathered and his locks shaggy.  His mind is a compendium of crucial local knowledge that's saved his life on at least one occasion.  Does he still remember his poets?  Well yes, he does, and he's not the only one in the area.  The local poetry society meets every other Tuesday, rotating through its (four) members' houses.  Weather and such permitting.

How did he get to where he is now?  How much did he have to learn, and how did he learn it?  Did any of his previous knowledge carry over and if so, how?  What did he have to leave by the wayside and why?  Ample room for conjecture here ...

Wednesday, August 18, 2010

What was AI?

I've previously claimed that, in very broad strokes, Artificial Intelligence has progressed through three stages:
  1. Early breathless predictions (not necessarily by those doing the research) of superhumanly intelligent systems Just Around the Corner.
  2. Harsh reality and a comprehensive round of debunking and disillusionment. Actual research continues anyway.
  3. (The present) All the hard work in stage (2) begins to bear fruit. Respectably hard problems are solved by a combination of persistent and mostly incremental improvements in software, combined with rapidly increasing hardware horsepower.
The curious thing about (3) is that you don't generally hear the term "AI" mentioned in conjunction with these accomplishments, or much at all, at least not outside the major labs  (Stanford's SAIL is still going strong and has always called itself an AI lab, and likewise for MIT's CSAIL, in a snazzy modern building no less).  Even though the current stage, stage 3 by the reckoning above, has provided us with a great deal of useful machinery which would have been called AI in previous times, it's relatively rare to hear engineers outside the field talking about AI as such.  In the early 80s (when I was starting out), you'd hear it quite a bit.


Why isn't, say, a phone that can understand voice commands called AI today? One can plausibly blame fashion. The general public typically sees new technology via its marketing. Most marketing terms have a limited shelf life and "AI" as a marketing term went stale a long, long while back. To compound the matter, the term "AI" is still poisoned by the ugliness of stage 2.

While there is almost certainly something to that theory, I think there's another, more subtle factor at play. On a certain level, AI never meant neural networks, automated proof systems or even speech-enabled phones. It meant exactly what Turing said it meant back in 1950: Artificial Human intelligence -- something that thinks so much like a human that you can't tell it from the real thing. Even sci-fi supercomputers have generally been expected to think like us, only better and faster.

A neural network mining some pile of data, or even a chess program, or voice-enabled phone, is not acting particularly human, though one could argue that the phone comes close in its limited world. Likewise, there are industrial robots all over the place, but none of them looks like it stepped out of I, Robot.

AI under Turing's definition is not a particularly prominent part of the actual research, most likely because people are already good at being people. We tend to use computers for things people aren't good at -- performing massive calculations errorlessly, remembering huge amounts of information, doing repetitive tasks ... those sorts of things. As part of that, it's good if computers relate well to humans -- understanding our languages, adhering to our social conventions and so forth -- and while that's also an active area of research, it's not absolutely necessary or even particularly prominent.

As a result, we have an awful lot of good research and engineering and useful applications, useful enough that we use them even when they're frustratingly imperfect, but we don't have Robbie the Robot or Star Trek's omniscient Computer. If there's a failure here, it's not of engineering, but of imagination. It turns out it's at least as useful if our creations don't think like us.

Tuesday, August 17, 2010

Because "The Web" just wasn't a broad enough topic ...

Well, the title pretty much says it.

After 500 or so posts of Field Notes on the Web, I've decided to relax and stretch out a bit. As I said at the time, Field Notes isn't going away, but the self-imposed ten-post-a-month quota has, leaving more time free for other pursuits such as ... um ... blogging.

Since the whole point of the exercise is to relax a bit, there will be no quota here and the topic will be whatever I feel like at the moment. In other words, it'll be a more or less bog-standard blog.

That said, I expect to stick to non-fiction, particularly commentaries, half-baked analyses and random speculations, roughly on the order of Field Notes but not about the web (if it is about the web, it'll end up on the original blog, of course). I also hope to keep to topics on which there isn't an obvious surplus of opinion in the blogosphere. Better to be a big fish, or perhaps more aptly the only fish, in a small pond.

If you're still with me after all that, welcome aboard! We may not get very far very fast, but I hope at least it'll be a pleasant excursion.