Monday, June 18, 2018

Did clickbait kill the aliens?

Disclaimer: This post is on a darker topic than most.  I've tried to adjust the tone accordingly, but if anything leads you to ask "How can he possibly say that so casually?", rest assured that I don't think any of this is a casual matter.  It's just that if we're talking at the scale of civilizations and stars we have to zoom out considerably from the everyday human scale, to the point where a truly horrible cataclysm becomes just another data point.


As I've noted elsewhere, the Fermi paradox is basically "It looks likely that there's life lots of other places in the universe, so why haven't we been able to detect it -- or why haven't they made it easy by contacting us?"  Or, as Fermi put it, "Where is everybody?"

One easy answer, though something of a downer, is "They're all dead."*

This is the idea that once a species gets to a certain level of technological ability, it's likely to destroy itself.  This notion has been floated before, in the context of the Cold War: Once it became technically possible, it took shockingly little time for humanity to develop enough nuclear weapons to pose a serious threat to itself.  One disturbingly ready conclusion from that was that other civilizations hadn't contacted us because they'd already blown themselves up.

While this might conjure up images of a galaxy full of the charred, smoking cinders of once vibrant, now completely sterile planets, that's not exactly what the hypothesis requires.  Before going into that in detail, it's probably worth reiterating here that most planets in the galaxy are much too far away to detect directly against the background noise, or to be able to carry on a conversation with (assuming that the speed of light is the cosmic speed limit we think it is).  In order to explain why we haven't heard from anyone, we're really trying to explain why we haven't heard from anyone within, say, a hundred light years.  I've argued elsewhere that that narrows the problem considerably (though maybe not).


A full-scale nuclear exchange by everyone with nuclear weapons would not literally kill all life on Earth.  There are a lot of fungi and bacteria, and a lot of faraway corners like hydrothermal vents where all kinds of life could hide.  It probably wouldn't even kill all of humanity directly, but -- on top of the indescribable death and suffering from the bombing itself -- it would seriously damage the world economy and make life extremely difficult even in areas that weren't directly affected by the initial exchange.  Behind the abstraction of the "world economy" is the flow of food, medicine, energy and other essentials.

There is an extensive literature concerning just how bad things would get under various assumptions, but at some point we're just quibbling over levels of abject misery.  In no realistic case is bombing each other better for anyone involved than not bombing each other.

For our purposes here, the larger point is clear: a species that engages in a full-scale nuclear war is very unlikely to be sending out interstellar probes or operating radio beacons aimed at other stars.  It may not even be sending out much in the way of stray radio signals at all.  It might well be possible for a species in another star system to detect life in such a case without detecting signs of a technological civilization, much less communicating with it.

So how likely is a full-scale nuclear war?  We simply don't know.  So far we've managed to survive several decades of the nuclear age without one, but, as I've previously discussed, that's no time at all when it comes to estimating the likelihood of finding other civilizations.  To totally make up some numbers, suppose that, once nuclear weapons are developed, a world will go an average of a thousand years without seriously using them and then, after the catastrophe, take a couple of centuries to get back to the level of being able to communicate with the rest of the universe.

Again, who knows?  We (fortunately) have very little data to go on here.  In the big picture, though, this would mean that a planet with nuclear weaponry or something similarly dangerous would be 10-20% less likely to be detected than one without.  We also have to guess what portion of alien civilizations would be subject to this, but how likely is it, really, that a civilization would develop the ability to communicate with the stars without also figuring out how to do anything destructive with its technology?
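To make the arithmetic behind that "10-20%" explicit, here's the duty-cycle calculation using the totally made-up numbers above (a thousand years of operation, a couple of centuries of recovery):

```python
# Back-of-envelope duty cycle using the made-up numbers from the text:
# a civilization is detectable for ~1,000 years at a stretch, then spends
# ~200 years recovering from a catastrophe before it's detectable again.
up_years = 1000      # average time between serious uses of the weapons
down_years = 200     # recovery time before detectability returns

cycle = up_years + down_years
downtime_fraction = down_years / cycle
print(f"Fraction of time undetectable: {downtime_fraction:.1%}")  # about 16.7%
```

That 16.7% sits comfortably inside the 10-20% range quoted above; shorter recoveries or longer peaceful stretches would shrink it.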

My guess is that "able to communicate across interstellar distances" is basically the same as "apt to destroy that ability sooner or later".  This applies particularly strongly to anyone who could actually send an effective interstellar probe.  The kinetic energy of any macroscopic object traveling close to light speed is huge.  It's hard to imagine being able to harness that level of energy for propulsion without also learning how to direct it toward destruction.
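For a sense of scale, here's the standard relativistic kinetic energy formula, KE = (γ − 1)mc², applied to a purely hypothetical 1 kg probe at 90% of light speed (the mass and speed are my own illustrative choices, not from any actual proposal):

```python
import math

# Relativistic kinetic energy KE = (gamma - 1) * m * c^2 for a
# hypothetical 1 kg object at 0.9c.  Illustrative numbers only.
c = 2.998e8          # speed of light, m/s
m = 1.0              # mass, kg
v = 0.9 * c

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)   # Lorentz factor, ~2.29 at 0.9c
ke_joules = (gamma - 1.0) * m * c ** 2

# Compare with a large thermonuclear weapon (1 megaton TNT = 4.184e15 J)
megatons = ke_joules / 4.184e15
print(f"KE: {ke_joules:.2e} J, roughly {megatons:.0f} megatons of TNT")
```

A single kilogram at that speed carries the energy of a multi-megaton warhead, which is the point: propulsion at that level and weaponry at that level are the same engineering problem.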

For purposes of calculation, it's probably best to assume a range of scenarios.  In the worst case, a species figures out how to genuinely destroy itself, and perhaps even life on its planet, and is never heard from.  In a nearly-as-bad case, a species spends most of its time recovering from the last major disaster and never really gets to the point of being able to communicate effectively across interstellar distances, and is never heard from.  The upshot is a reduction in the amount of time a civilization might produce a detectable signal (or, in a somewhat different formulation, the average expected signal strength over time).

Our own case is, so far, not so bad, and let's hope it continues that way.  However, along with any other reasons we might not detect life like us on other planets, we can add the possibility that they're too busy killing each other to say hello.


With all that as context, let's consider a recent paper modeling the possibility that a technological civilization ends up disrupting its environment with (from our point of view here, at least) pretty much the same result as a nuclear war.  The authors build a few models, crunch through the math and present some fairly sobering conclusions: Depending on the exact assumptions and parameters, it's possible for a (simulated) civilization to reach a stable equilibrium with its (simulated) environment, but several other outcomes are also entirely plausible.  There could be a boom-and-bust that reduces the population to, say, 10% of its peak.  The population could go through a repeating boom/bust cycle.  It could even completely collapse and leave the environment essentially unlivable.
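To see how one dynamical system can produce qualitatively different outcomes as a single parameter changes -- the general phenomenon behind the paper's range of results -- here is the textbook logistic map.  This is emphatically not the paper's model, just the simplest standard illustration of the mathematical point:

```python
# The logistic map x -> r*x*(1-x) is the classic example of a dynamical
# system whose long-run behavior changes qualitatively with one parameter:
# a stable fixed point at low r, a repeating two-value cycle at higher r,
# and chaos around r = 3.9.  (NOT the paper's model; just an illustration.)

def iterate(r, x=0.5, steps=1000):
    """Run the logistic map for `steps` iterations and return the result."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

stable = iterate(2.5)                 # settles to the fixed point 1 - 1/r = 0.6
cycle_a = iterate(3.2, steps=1000)    # at r = 3.2 the population hops
cycle_b = iterate(3.2, steps=1001)    # between two values forever

print(f"r=2.5 settles at {stable:.3f}")
print(f"r=3.2 alternates between {cycle_a:.3f} and {cycle_b:.3f}")
```

Same equation, different parameter, different fate -- equilibrium versus endless boom/bust.  The paper's civilization/planet models exhibit the same kind of parameter-dependent branching, just with more variables.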

So what does this add to the picture?  Not much, I think.

The paper reads like a proof-of-concept of the idea of modeling an alien civilization and its environment using the same mathematical tools (dynamical systems theory) used to model anything from weather to blood chemistry to crowd behavior and cognitive development.  Fair enough.  There is plenty of well-developed math to apply here, but the math is only as good as the assumptions behind it.

The authors realize this and take care only to make the most general, uncontroversial assumptions possible.  They don't assume anything about what kind of life is on the planet in question, or what kind of resources it uses, or what exact effect using those resources has on the planet.  Their assumptions are on the order of "there is a planet", "there is life on it", "life consumes resources" and so forth.

Relying on few assumptions means that any conclusions you do reach are very general.  On the other hand, if the assumptions support a range of conclusions, how do you pick from among them?  Maybe once you run through all the details, any realistic set of assumptions leads to a particular outcome -- whether stability or calamity.  Maybe most of the plausible scenarios are in a chaotic region where the slightest change in inputs can make an arbitrarily large difference in outputs.  And so forth.

As far as I can make out, the main result of the paper is that planets, civilizations and their resources can be modeled as dynamical systems.  It doesn't say what particular model is appropriate, much less make any claims about what scenarios are most likely for real civilizations on real exoplanets.  How could it?   Only recently has there been convincing evidence that exoplanets even exist.  The case that there is life on at least some of them is (in my opinion) reasonably persuasive, but circumstantial.  It's way, way too early to make any specific claims about what might or might not happen to civilizations, or even life in general, on other planets.

To be clear, the authors don't seem to be making any such claims, just to be laying some groundwork for eventually making such claims.  That doesn't make a great headline, of course.  The article I used to find the paper gives a more typical take: "Climate change killed the aliens, and it will probably kill us too, new simulation suggests."

Well, no.  We're still in the process of figuring out exactly what effect global warming and the resulting climate change will have on our own planet, where we can take direct measurements and build much more accurate models than the authors of the paper put forth.  All we can do for an alien planet is lay out the general range of possibilities, as the authors have done.  Trying to draw conclusions about our own fate from our failure (so far) to detect others like us seems quite premature, whether the hypothetical cause of extinction is war or a ruined environment.



There's a familiar ring to all this.   When nuclear destruction was on everyone's mind, people saw an obvious, if depressing, answer to Fermi's question.  As I recall, papers were published and headlines written.  Now that climate-related destruction is on everyone's mind, people see an obvious, if depressing, answer to Fermi's question, with headlines to match.  It's entirely possible that fifty years from now, if civilization as we know it is still around (as I expect it will be) and we haven't heard directly from an alien civilization (as I suspect we won't), people will see a different obvious, if depressing, answer to Fermi's question.  Papers will be written about it, headlines will do what headlines do, and it will all speak more to our concerns at the time than to the objective state of any alien worlds out there.


I want to be clear here, though.  Just because headlines are overblown doesn't mean there's nothing to worry about.  Overall, nuclear weapons take up a lot less cultural real estate than they did during the height of the cold war, but they're very much still around and just as capable of wreaking widespread devastation.  Climate change was well underway during that period as well, and already recognized as a hazard, but not nearly as prominent in the public consciousness as it is today.

It's tempting to believe in an inverse relationship between the volume of headlines and the actual threat: If they're making a big deal out of it, it's probably nothing to worry about.  But that's an empirical question to be answered by measurement.  It's not a given.  Without actually taking measurements, the safest assumption is the two are unrelated, not inversely related.  That is, how breathless the headlines are is no indication one way or another as to how seriously to take the threat.

My own guess, again without actually measuring, is that there's some correlation between alarming headlines and actual danger.  People study threats and publish their findings.  By and large, and over time, there is significant signal in the noise.  If a range of people working in various disciplines say that something is cause for concern, then it most likely is -- nuclear war and climate change are real risks.  Some part of this discussion finds its way into the popular consciousness, with various shorthands and outright distortions, but if you take the time to read past the headlines and go back to original sources you can get a reasonable picture, and one that will bear at least some resemblance to the headlines.

Going back to original sources and getting the unruly details may not be as satisfying as a nice, punchy one-sentence summary, but I'd say it's worth the effort nonetheless.



(*) A similar but distinct notion is the "Dark forest" hypothesis: They're out there, but they're staying quiet so no one else kills them -- and we had best follow suit.  That's fodder for another post, though I think at least some of this post applies.
