I wanted to "circle back" on a comment I'd made on the "Dark Forest" hypothesis, which is basically the idea that we don't see signs of alien life because everyone's hiding for fear of everyone else. This will probably be the last I want to say about the "Fermi Paradox", at least until the next time I feel like posting about it ...
I haven't read the books the Dark Forest hypothesis comes from [in March 2024, Netflix released 3 Body Problem, based on the first book in the series, and, um ... I have questions], but my understanding is that the hypothesis is based on the observation that when a relatively technologically developed society on Earth has made contact with a less developed society, the results have generally not been pretty. "Technology" in this context particularly means "military technology." The safest assumption is that this isn't unique to our own planet and species, but is instead a consequence of universal factors such as competition for resources.
If you're a civilization at the point of being able to explore the stars, you're probably aware of this first hand from your own history, and the next obvious observation is that you're just at the beginning of the process of exploring the stars. Is it really prudent to assume that there's no one out there more advanced?
Now put yourself in the place of that hypothetical more advanced civilization. They've just detected signs of intelligent life on your world. You are now either a threat to them, or a potential conquest, or both. Maybe you shouldn't be so eager to advertise your presence.
But you don't actually see anyone out there, so there's nothing to worry about, right? Not so fast. Everyone else out there is probably applying the same logic. They might be hunkering down quietly, or they might already be on their way, quietly, in order to get the jump on you, but either way you certainly shouldn't assume that not detecting anyone is good news.
Follow this through and, assuming that intelligent life in general isn't too rare at any given point in time, you get a galaxy dotted with technological civilizations, each doing its best to avoid detection, detect everyone else and, ideally, neutralize any threats that may be out there. Kind of like a Hunger Games scenario set in the middle of a dark forest.
This all seems disturbingly plausible, at least until you take scale into account.
There are two broad classes of scenarios: Either faster-than-light travel is possible, or it's not. If anyone's figured out how to travel faster than light, then all bets are off. The procedure in that case seems pretty simple: Send probes to as many star systems as you can. Have them start off in the outer reaches, unlikely to be detected, scanning for planets, then scanning for life on those planets. If you find anything that looks plunderable, send back word and bring in the troops. Conquer. Build more probes. Repeat.
This doesn't require listening for radio waves as a sign of civilization. Put a telescope and a camera on an asteroid with a suitable orbit and take pictures as it swings by your planet of choice. Or whatever. The main point is if there's anyone out there with that level of technology, our fate is sealed one way or another.
On the other hand, if the speed of light really is a hard and fast limit, then economics will play a significant role. Traveling interstellar distances takes a huge amount of energy and not a little time (from the home planet's point of view -- less for the travelers, particularly if they manage to get near light speed). By contrast, in the period of exploration and conquest from the late 1400s to the late 1700s it was not difficult to build a seaworthy ship and oceans could be crossed in weeks or months using available energy from the wind. The brutal fact is that discovering and exploiting new territories on Earth at that time was economically profitable for the people doing the exploiting.
If your aim is to discover and exploit resources in other star systems, then you have to ask what they might have that you can't obtain on your home system using the very large amount of energy you would have to use to get to the other system. The only sensible answer I can come up with is advanced technology, which assumes that your target is more advanced than you are, in which case you might want to rethink.
Even if your aim is just to conquer other worlds for the evulz or out of some mostly-instinctive drive, you're fighting an extremely uphill battle. Suppose you're attacking a planet 10 light-years away. Messages from the home planet will take 10 years to reach your expeditionary force, and any reply will take another 10 years, so they're effectively on their own.
You detected radio transmissions from your target ten years ago. It takes you at least 10 years to reach their planet (probably quite a bit longer, but let's take the best case, for you at least). They're at least 20 years more advanced than when the signal that led you to plot this invasion left their planet.
You've somehow managed to assemble and send a force of thousands, or tens of thousands, or a million. You're still outnumbered by -- well, you don't really know until you get there, do you? -- hundreds to one at the least, and more likely millions to one. You'd better have a crushing technological advantage.
I could come up with scenarios that might work. Maybe you're able to threaten with truly devastating weapons that the locals have no way to counter. The locals treat with you and agree to become your loyal minions.
Now what?
Unless your goal was just the accomplishment of being able to threaten another species from afar, you'll want to make some sort of physical contact. Presumably you land your population on the planet and colonize, assuming the planet is habitable to you and the local microbes don't see you as an interesting host environment/lunch (or maybe you've mastered the art of fighting microbes, even completely unfamiliar ones).
You're now on unfamiliar territory to which you're not well adapted, outnumbered at least a hundred to one by intelligent and extremely resentful beings that would love to steal whatever technology you're using to maintain your position. Help is twenty years away, counting from the time you send your distress call, and if you're in a position to need it, is the home planet really going to want to send another wave out? By the time they get there, the locals will have had another twenty years to prepare since you sent your distress call, this time with access to at least some of your technology.
I'm always at least a little skeptical of the idea that other civilizations will think like we do. Granted, it doesn't seem too unreasonable to assume that anyone who gets to the point that we would call them "technological" is capable of doing the same kind of cost/benefit analyses that we do. On the other hand, it also seems reasonable to assume that they have the same sort of cognitive biases and blind spots that we do.
The "soft" sciences are a lot about how to model the aggregate behavior of not-completely rational individuals. There's been some progress, but there's an awful lot we don't know even about our own species, which we have pretty good access to. When it comes to hypothetical aliens, I don't see how we can say anything close to "surely they will do thus-and-such", even if there are practical limits on how bonkers you can be and still develop technology on a large scale.
In the context of the Dark Forest, the question is not so much how likely it is that alien species are actually a danger to us, but how likely is it that an alien species would think they were in danger from another alien species (maybe us) and act on that by actively going dark.
Our own case suggests that's not very likely. There may be quite a few people who think that an alien invasion is a serious threat (or for that matter, that one has already happened), or who think that it's unlikely but catastrophic enough if it did happen that we should be prepared. That doesn't seem to have stopped us from spewing radio waves into the universe anyway. Maybe we're the fools and everyone else is smarter, but imagine the level of coordination it would take to keep the entire population of a planet from ever doing anything that would reveal their presence. This seems like a lot to ask, even if the threat of invasion seems likely -- which, if you buy the analysis above, it probably isn't.
Overall, it seems unlikely that every single technological civilization out there would conclude that staying dark was worth the trouble. At most, I think, there would be fewer detectable civilizations than there would have been otherwise, but I still think that, as far as explaining why we haven't heard from anyone, it's more likely that whatever civilizations are, have been or will be out there are too far away for our present methods to detect (and may always be), and that the window of opportunity for detecting them is either long past or far in the future.
Saturday, July 21, 2018
Fermi on the Fermi paradox
One of the pleasures of life on the modern web is that if you have a question about, say, the history of the Fermi paradox, there's a good chance you can find something on it. In this case, it didn't take long (once I thought to look) to turn up E. M. Jones's "Where Is Everybody?": An Account of Fermi's Question.
The article includes letters from Emil Konopinski, Edward Teller and Herbert York, who were all at lunch with Enrico Fermi at Los Alamos sometime in the early 1950s when Fermi asked his question. Fermi was wondering specifically about the possibility that somewhere in the galaxy some civilization had developed a viable form of interstellar travel and had gone on to explore the whole galaxy -- and therefore, at some point, our little blue dot out on one of the spiral arms.
Fermi and Teller threw a bunch of arguments at each other, arriving at a variety of probabilities. Fermi eventually concluded that probably interstellar travel just wasn't worth the effort or perhaps no civilization had survived long enough to get to that stage (I'd throw in the possibility that they came by millions of years ago, decided nothing special was going on and left -- or won't come by for a few million years yet).
Along the way Fermi, very much in the spirit of "How many piano tuners are there in Chicago?", broke the problem down into a series of sub-problems such as "the probability of earthlike planets, the probability of life given an earthlike planet" and so forth. Very much something Fermi would have done (indeed, this sort of exercise goes by the name "Fermi estimation"), and very similar to what we now call the Drake equation.
In other words, Fermi and company anticipated much of the subsequent discussion on the subject over lunch more than fifty years ago and then went on to other topics (and presumably coffee). There's been quite a bit of new data on the subject, particularly the recent discovery that there are in fact lots of planets outside our solar system, but the theoretical framework hasn't changed much at all.
What's a Fermi paradox?
So far, we haven't detected strong, unambiguous signs of extraterrestrial intelligence. Does that mean there isn't any?
The usual line of attack for answering this question is the Drake equation [but see the next post for a bit on its origins --D.H Oct 2018], which breaks the question of "How many intelligent civilizations are there in our galaxy?" down into a series of factors that can then be estimated and combined into an overall estimate.
Let's take a simpler approach here.
The probability of detecting extraterrestrial intelligence given our efforts so far is the product of:
- The probability it exists
- The probability that what we've done so far would detect it, given that it exists
(For any math geeks out there, this is just the definition of conditional probability)
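Spelled out in the notation I'll use below, with L for "life exists" and S for "we see it", the two bullets above give the first term here; the second term covers the false-positive case, which the table below also has room for:

    P(S) = P(L) × P(S | L) + P(¬L) × P(S | ¬L)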
Various takes on the Fermi paradox (why haven't we seen anyone, if we're pretty sure they're out there?) address these two factors:
- Maybe intelligent life is just a very rare accident. As far as we can tell, Earth itself has lacked intelligent life for almost all of its history (one could argue it still does, so feel free to substitute "detectable" for "intelligent").
- Maybe intelligent life is hard to detect for most of the time it's around (See this post for an argument to that effect and this one for a bit on the distinction between "intelligent" and "detectable"). A particularly interesting take on this is the "dark forest" hypothesis, that intelligent civilizations soon figure out that being detectable is dangerous and deliberately go dark, hoping never to be seen again. I mean to take this one on in a bit, but not here.
- One significant factor when it comes to detecting signs of anything, intelligent or otherwise: as far as we know detectability drops with the square of distance, that is, twice as far away means four times harder to detect. Stars are far away. Other galaxies are really far away.
- Maybe intelligent life is apt to destroy itself soon after it develops, so it's not going to be detectable for very long and chances are we won't have been looking when they were there. This is a popular theme in the literature. I've talked about it here and here.
- Maybe the timing is just wrong. Planetary time scales are very long. Maybe we're one of the earlier ones and life won't develop on nearby planets for another million or billion years (basically low probability of detection again, but also an invitation to be more rigorous about the role of timing).
At first blush, the logic of the Fermi paradox seems airtight: Aliens are out there. We'd see them if they were out there. We haven't seen them. QED. But we're not doing a mathematical proof here. We're dealing in probabilities (also math, but a different kind). We're not trying to explain a mathematically impossible result. We're trying to determine how likely it is that our observations are compatible with life being out there.
I was going to go into a longish excursion into Bayesian inference here, but ended up realizing I'm not very adept at it (note to self: get better at Bayesian inference). So in the spirit of keeping it at least somewhat simple, let's look at a little table with, granted, a bunch of symbols that might not be familiar:
                | We see life (S) | We don't see life (¬S) |
Life exists (L) | P(L ∧ S)        | P(L ∧ ¬S)              | P(L)
No life (¬L)    | P(¬L ∧ S)       | P(¬L ∧ ¬S)             | P(¬L)
                | P(S)            | P(¬S)                  | 100%
P is for probability. P(L) is the probability that there's intelligent life out there we could hope to detect as such, at all. P(S) is the probability that we see evidence strong enough that the scientific community (whatever we mean by that, exactly) agrees that intelligent life is out there. The ¬ symbol means "not" and the ∧ symbol means "and". The rows sum to the right, so
- P(L ∧ S) + P(L ∧ ¬S) = P(L) (the probability life exists is the probability that life exists and we see it plus the probability it exists and we don't see it)
- P(S) + P(¬S) = 100% (either we see life or we don't see it)
Likewise the columns sum downward. Also, "and" means multiply, but by a conditional probability: P(L ∧ S) = P(L)×P(S | L), where P(S | L) is the probability that we see life given that it's out there (L and S aren't independent -- if the search is any good at all, seeing life had better be more likely when life actually exists). This all puts restrictions on what numbers you can fill in. Basically you can pick any three and those determine the rest.
Suppose you think it's likely that life exists, and you think that it's likely that we'll see it if it's there. That means you think P(L) is close to 100% and P(L ∧ S) is a little smaller but also close to 100% (see conditional probability for more details). You get to pick one more. It actually turns out not to matter that much, since we've already decided that life is both likely and likely to be detected. One choice would be P(¬L ∧ S), the chance of a "false positive", that is, the chance that there's no life out there but we think we see it anyway. Again, in this scenario we're assuming false positives should be unlikely overall, but choosing exactly how unlikely locks in the rest of the numbers.
It's probably worth calling out one point that kept coming up while I was putting this post together: The chances of finding signs of life depend on how much we've looked and how we've done it. A lot of SETI has centered around radio waves, and in particular radio waves in a fairly narrow range of frequencies. There are perfectly defensible reasons for this approach, but that doesn't mean that any actual ETs out there are broadcasting on those frequencies. In any case we're only looking at a small portion of the sky at any given moment, our current radio dishes can only see a dozen or two light years out and there's a lot of radio noise from our own technological society to filter out.
I could model this as a further conditional probability, but it's probably best just to keep in mind that P(S) is the probability of having detected life after everything we've done so far, and so includes the possibility that we haven't really done much so far.
To make all this concrete, let's take an optimistic scenario: Suppose you think there's a 90% chance that life is out there and a 95% chance we'll see it if it's out there. If there's no chance of a false positive, then there's an 85.5% chance that we'll see signs of life and so a 14.5% chance we won't (as is presently the case, at least as far as the scientific community is concerned). If you think there's a 50% chance of a false positive, then there's a 90.5% chance we'll see signs of life, including the 5% chance it's not out there but we see it anyway. That means a 9.5% chance of not seeing it, whether or not it's actually there.
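To make the arithmetic easy to check, here's a minimal sketch in Python (the function name and parameterization are mine, nothing standard):

    # Chance we see no convincing signs of life, given:
    #   p_l             -- probability life is out there, P(L)
    #   p_s_given_l     -- probability we see it if it's there, P(S | L)
    #   p_s_given_not_l -- probability of a false positive, P(S | ¬L)
    def p_no_detection(p_l, p_s_given_l, p_s_given_not_l):
        p_s = p_l * p_s_given_l + (1 - p_l) * p_s_given_not_l
        return 1 - p_s

    print(round(p_no_detection(0.90, 0.95, 0.0), 3))   # 0.145 -- no false positives
    print(round(p_no_detection(0.90, 0.95, 0.5), 3))   # 0.095 -- coin-flip false positive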
This doesn't seem particularly paradoxical to me. We think life is likely. We think we're likely to spot it. So far we haven't. By the assumptions above, there's about a 10% chance of that outcome. You generally need 99.99994% certainty (the "five sigma" standard) to publish a physics paper, that is, a 0.00006% chance of being wrong. A 9.5% chance isn't even close to that.
Only if you're extremely optimistic -- thinking it overwhelmingly likely both that detectable intelligent life is out there and that we've done everything possible to detect it -- do we see a paradox, in the sense that our present situation seems very unlikely. And when I say "overwhelmingly likely" I mean really overwhelmingly likely. For example, even if you think both are 99% likely, there's still about a 1-2% chance of not seeing evidence of life, depending on how likely you think false positives are. If, on the other hand, you think it's unlikely that we could detect intelligent life even if it is out there, there's nothing like a paradox at all.
My personal guess is that we tend to overestimate the second of the two bullet points at the beginning. There are good reasons to think that life on other planets is hard to detect, and our efforts so far have been limited. In this view, the probability that detectably intelligent life is out there right now is fairly low, even if the chance of intelligent life being out there somewhere in the galaxy is very high and the chance of it being out there somewhere in the observable universe is near certain.
As I've argued before, there aren't a huge number of habitable planets close enough that we could hope to detect intelligent life on them, and there's a good chance that we're looking at the wrong time in the history of those planets -- either intelligent life hasn't developed yet or it has but for one reason or another it's gone dark.
Finding out that there are potentially habitable worlds in our own solar system is exciting, but probably doesn't change the picture that much. There could well be a technological civilization in the oceans of Enceladus, but proving that based on what molecules we see puffing out of vents on the surface many kilometers above said ocean seems like a long shot.
With that in mind, let's put some concrete numbers behind a less optimistic scenario. If there's a 10% chance of detectable intelligent life (as opposed to intelligent life we don't currently know how to detect), and there's a 5% chance we'd have detected it based on what we've done so far and a 1% chance of a false positive (that is, of the scientific community agreeing that life is out there when in fact it's not), then it's 98.6% likely we wouldn't have seen clear signs of life by now. That seems fine.
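Plugging these numbers into the p_no_detection sketch from the optimistic scenario gives the same answer:

    print(round(p_no_detection(0.10, 0.05, 0.01), 3))   # 0.986 -- no clear signs by now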
While I'm conjecturing intermittently here, my own wild guess is that it's quite likely that some kind of detectable life is out there, something that, while we couldn't unequivocally say it was intelligent, would make enough of an impact on its home world that we could hope to say "that particular set of signatures is almost certainly due to something we would call life". I'd also guess that it's pretty likely that in the next, say, 20 or 50 or 100 years we will have searched enough places with enough instrumentation to be pretty confident of finding something if it's there. And it's reasonably likely that we'd get a false positive in the form of something that people would be convinced was a sign of life when in fact there wasn't one -- maybe we'd figure out our mistake in another 20 or 50 or 100 years.
Let's say life of some sort is 90% likely, there's a 95% chance of finding it in the next 100 years if it's there, and a 50% chance of mistakenly finding life when it's not there, that is, a 50% chance that at some point over those 100 years we mistakenly convince ourselves we've found life and later turn out to be wrong. Who knows? The idea of a false positive presumes there's no detectable life out there to begin with, which is itself another question mark. But let's go with it.
I actually just ran those numbers a few paragraphs ago and came up with a 9.5% chance of not finding anything, even with those fairly favorable odds.
All in all, I'd say we're quite a ways from any sort of paradoxical result.
One final thought occurs to me: The phrase "Fermi paradox" has been in the lexicon for quite a while, long enough to have taken on a meaning of its own. Fermi himself, being one of the great physicists, was quite comfortable with uncertainty and approximation, so much so that the kind of "How many piano tuners are there in Chicago?" questions given to interview candidates are meant to be solved by "Fermi estimation".
I should go back and get Fermi's own take on the "Fermi paradox". My guess is he wasn't too bothered by it and probably put it down to some combination of "we haven't really looked" and "maybe they're not out there". If I find out I'll let you know.
[As noted above, I did in fact come across something --D.H Oct 2018]
Friday, July 6, 2018
Are we alone in the face of uncertainty?
I keep seeing articles on the Drake equation and the Fermi Paradox on my news feed, and since I tend to click through and read them, I keep getting more of them. And since I find at least some of the ideas interesting, I keep blogging about them. So there will probably be a few more posts on this topic. Here's one.
One of the key features of the Drake equation is how little we know, even now, about most of the factors. Along these lines, a recent (preprint) paper by Anders Sandberg, Eric Drexler and Toby Ord claims to "dissolve" the Fermi Paradox (with so many other stars out there why haven't we heard from them?), claiming to find "a substantial ex ante probability of there being no other intelligent life in our observable universe".
As far as I can make out, "ex ante" (from before) means something like "before we gather any further evidence by trying to look for life". In other words, there's no particular reason to believe there should be other intelligent life in the universe, so we shouldn't be surprised that we haven't found any.
I'm not completely confident that I understand the analysis correctly, but to the extent I do, I believe it goes like this (you can probably skip the bullet points if math makes your head hurt -- honestly, some of this makes my head hurt):
- We have very little knowledge of some of the factors in the Drake equation, particularly fl (the probability of life arising on a planet that might support life), fi (the probability of a planet with life developing intelligent life) and L (the length of time a civilization produces a detectable signal).
- Estimates of those range over orders of magnitude.
- Estimates for L range from 50 years to a billion or even 10 billion years.
- The authors do some modeling and come up with a range of uncertainty of 50 orders of magnitude for fl. That is, it might be close to 1 (that is, close to 100% certain), or it might be more like 1 in 100,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. Likewise they take fi to range over three orders of magnitude, from near 1 to 1 in 1,000.
- Rather than assigning a single number to every term, as most authors do, it makes more sense to assign a probability distribution. That is, instead of saying "the probability of life arising on a suitable planet is 90%", or 0.01% or whatever, assign a probability to each possible value (the actual math is a bit more subtle, but that should do for our purposes). Maybe the most likely probability of life developing intelligence is 1 in 20, but there's a possibility, though not as likely, that it's actually 1 in 10 or 1 in 100, so take that into account with a probability distribution.
- (bear in mind that the numbers we're looking at are themselves probabilities, so we're assigning a probability that the probability is a given number -- this is the part that makes my head hurt a bit)
- Since we're looking at very wide ranges of values, a reasonable distribution is the "log-normal" distribution -- basically, "the number of digits fits a bell curve".
- These distributions have very long tails, meaning that if, say, 1 in a thousand is a likely value for the chance of life evolving into intelligent life, then (depending on the exact parameters) 1 in a million may be reasonably likely, 1 in a billion not too unlikely and 1 in a trillion is not out of the question.
- The factors in the Drake equation multiply, following the rules of probability, so it's quite possible that the aggregate result is very small.
- For example, if it's reasonably likely that fl is 1 in a trillion and fi is 1 in a million, then we can't ignore the chance that the product of the two is 1 in a quintillion (the sketch after this list plays with numbers in this spirit).
- Numbers like that would make it unlikely that there's any other life among our galaxy's few hundred billion stars; ours just happened to get lucky.
- Putting it all together, they estimate that there's a significant chance that we're alone in the observable universe.
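Here's the Monte Carlo sketch promised in the list above. To be clear, this is my toy reconstruction, not the authors' actual model: each factor is sampled log-uniformly (the paper fits more careful distributions) over ranges loosely taken from the bullet points, the samples multiply Drake-style, and we count how often the galaxy comes out empty.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1_000_000

    def log_uniform(lo, hi, size):
        # sample uniformly in log space between lo and hi
        return 10.0 ** rng.uniform(np.log10(lo), np.log10(hi), size)

    # Drake-style factors; the ranges are illustrative guesses, not the paper's fits
    r_star = log_uniform(1, 100, n)     # stars formed per year in the galaxy
    f_p    = log_uniform(0.1, 1, n)     # fraction of stars with planets
    n_e    = log_uniform(0.1, 1, n)     # habitable planets per such star
    f_l    = log_uniform(1e-50, 1, n)   # life: the 50-orders-of-magnitude spread
    f_i    = log_uniform(1e-3, 1, n)    # intelligence: three orders of magnitude
    f_c    = log_uniform(1e-2, 1, n)    # detectable technology (a guess)
    L      = log_uniform(50, 1e10, n)   # years of detectability: 50 to 10 billion

    # expected number of detectable civilizations in the galaxy, per sample
    N = r_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"fraction of samples with N < 1: {np.mean(N < 1):.0%}")

With f_l alone spanning fifty orders of magnitude, a large fraction of the samples lands below one civilization per galaxy, which is essentially the paper's "substantial ex ante probability" that we're alone.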
I'm not sure how much of this I buy.
There are two levels of probability here. The terms in the Drake equation represent what has actually happened in the universe. An omniscient observer that knew the entire history of every planet in the universe (and exactly what was meant by "life" and "intelligent") could count the number of planets, the number that had developed life and so forth and calculate the exact values of each factor in the equation.
The probability distributions in the paper, as I understand it, represent our ignorance of these numbers. For all we know, the portion of "habitable" planets with intelligent life is near 100%, or near 1 in a quintillion or even lower. If that's the case, then the paper is exploring to what extent our current knowledge is compatible with there being no other life in the universe. The conclusion is that the two are fairly compatible -- if you start with what (very little) we know about the likelihood of life and so forth, there's a decent chance that the low estimates are right, or even too optimistic, and there's no one but us.
Why? Because low probabilities are more plausible than we think, and multiplying probabilities increases that effect. Again, the math is a bit subtle, but if you have a long chain of contingencies, any one of them failing breaks the whole chain. If you have several unlikely links in the chain, the chances of the chain breaking are even better.
The conclusion -- that for all we know life might be extremely rare -- seems fine. It's the methodology that makes me a bit queasy.
I've always found the Drake equation a bit long-winded. Yes, the probability of intelligent life evolving on a planet is the probability of life evolving at all multiplied by the probability of life evolving into intelligent life, but does that really help?
On the one hand, it seems reasonable to separate the two. As far as we know it took billions of years to go from one to the other, so clearly they're two different things.
But we don't really know the extent of our uncertainty about these things. If you ask for an estimate of any quantity like this, or do your own estimate based on various factors, you'll likely* end up with something in the wide range of values people consider plausible enough to publish (I'm hoping to say more on this theme in a future post). No one is going to say "zero ... absolutely no chance" in a published paper, so it's a matter of deriving a plausible, really small number consistent with our near-complete ignorance of the real number -- no matter what that particular number represents or how many other numbers it's going to be combined with.
You could almost certainly fit the results of surveying several good-faith attempts into a log-normal distribution. Log-normal distributions are everywhere, particularly where the normal normal distribution doesn't fit because the quantity being measured has something exponential about it -- say, you're multiplying probabilities or talking about orders of magnitude.
If the question is "what is the probability of intelligent life evolving on a habitable planet?" without any hints as to how to calculate it, that is, one not-very-well-determined number rather than two, then the published estimates, using various methodologies, should range from a small fraction to fairly close to certainty depending on the assumptions used by the particular authors. You could then plug these into a log-normal distribution and get some representation of our uncertainty about the overall question, regardless of how it's broken down.
You could just as well ask "What is the probability of any self-replicating system arising on a habitable planet?", "What is the probability of a self-replicating system evolving into cellular life?" "What is the probability of cellular life evolving into multicellular life?" and so forth, that is, breaking the problem down into several not-very-well-determined numbers. My strong suspicion is that the distribution for any one of those sub-parts will look a lot like the distribution for the one-question version, or the parts of the two-question version, because they're basically the same kind of guess as any answer to the overall question. The difference is just in how many guesses your methodology requires you to make.
In particular, I seriously doubt that anyone is going to cross-check that pulling together several estimates is going to yield the same distribution, even approximately, as what's implied by a single overall estimate. Rather, the more pieces you break the problem into, the more likely really small numbers become, as seen in the paper.
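A toy illustration of that last point (my own construction, with made-up numbers): give a single overall estimate a log-normal spread of two orders of magnitude around 1 in 1,000, then split the same question into three independent sub-estimates, each with the same two-decade spread, whose product has the same typical value. On the log scale the spreads add in quadrature, so extreme lows become much more likely.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # one direct guess: log10 is normal, centered on -3, spread of 2 decades
    one_guess = 10.0 ** rng.normal(-3, 2, n)

    # three sub-guesses, each centered on -1 with the same spread, multiplied
    product = (10.0 ** rng.normal(-1, 2, (3, n))).prod(axis=0)

    print(np.mean(one_guess < 1e-9))   # about 0.001 -- extreme lows are rare
    print(np.mean(product < 1e-9))     # about 0.04 -- some thirty times likelier

Same typical answer, but the three-part version makes "we're an incredibly unlikely fluke" roughly thirty times more probable, purely as an artifact of how the question was broken up.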
I think this is consistent with the view that the paper is quantifying our uncertainty. If the methodology for estimating the number of civilizations requires you to break your estimate into pieces, each itself with high uncertainty, you'll get an overall estimate with very high uncertainty. The conclusion "we're likely to be alone" will lie within that extremely broad range, and may even take up a sizable chunk of it. But again, I think this says much more about our uncertainty than about the actual answer.
I suspect that if you surveyed estimates of how likely intelligent life is using any and all methodologies*, the distribution would imply that we're not likely to be alone, even if intelligent life is very rare. If you could find estimates of fine-grained questions like "what is the probability of multicellular life given cellular life?" you might well get a distribution that implied we're an incredibly unlikely fluke and really shouldn't be here at all. In other words, I don't think the approach taken in the paper is likely to be robust in the face of differing methodologies. If it's not, it's hard to draw any conclusions from it about the actual likelihood of life.
I'm not even sure, though, how feasible it would be to survey a broad sample of methodologies. The Drake formulation dominates discussion, and that itself says something. What estimates are available to survey depends on what methods people tend to use, and that in turn depends on what's likely to get published. It's not like anyone somehow compiled a set of possible ways to estimate the likelihood of intelligent life and prospective authors each picked one at random.
The more I ponder this, the more I'm convinced that the paper is a statement about the Drake equation and our uncertainty in calculating the left hand side from the right. It doesn't "dissolve" the Fermi paradox so much as demonstrate that we don't really know if there's a paradox or not. The gist of the paradox is "If intelligent life is so likely, why haven't we heard from anyone?", but we really have no clear idea how likely intelligent life is.
* So I'm talking about probabilities of probabilities about probabilities?