When the Drake Equation was first formulated, most if not all of its factors had such wide error bars that it's hard to argue any meaningful number could come out of it. An answer of the form "2.5 million, but maybe zero, maybe several billion, or anything in between", while honest, is not a particularly useful result. For much of the time the Drake Equation has been around, it's been more useful as a framework for reasoning about the possibility of alien civilizations (and, in my opinion, a reasonable one) than as a way of producing a meaningful number.
Recently, though, a couple of the error ranges have tightened considerably. Let's look at the factors in question:
- the average rate of star formation in our galaxy. This is currently estimated at 1.5 to 3 stars per year.
- the fraction of formed stars that have planets. This is quite likely near 100%.
- the average number of planets per star that can potentially support life. There is some dispute over this. You can find numbers from 0.5 to 4 or 5, and even outside that range. My personal guess is toward the high end.
- the fraction of those planets that actually develop life. At this point we can only extrapolate from life on Earth, a minimal and biased sample. It's noteworthy that life now seems to have begun shortly (in geological terms) after suitable conditions arose.
- the fraction of planets bearing life on which intelligent, civilized life has developed. Developing intelligent life as we understand it took considerably longer: billions of years. Again extrapolating from our one known example, this implies that a large fraction of life-bearing planets haven't been around long enough to develop intelligent life.
- the fraction of these civilizations that have developed technologies that release detectable signals into space. Still extrapolating, this fraction may be pretty high. On geological scales, humanity developed radio pretty much instantaneously, suggesting it was nearly inevitable.
- the length of time, L, over which such civilizations release detectable signals. I've argued that this is probably quite short (see the links above and the discussion below for a bit more detail).
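As a rough illustration of how wide the spread still is, here's a minimal sketch of the classic Drake product in Python. The first three factors use the ranges quoted above; the values for the remaining fractions and for L are purely illustrative placeholders, not estimates, since the whole problem is that we don't know them.

```python
# Minimal sketch of the classic Drake product. The first three factors use the
# ranges quoted above; the last four (f_l, f_i, f_c, L) are purely illustrative
# placeholders, since we have no firm numbers for them.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """N = R* * f_p * n_e * f_l * f_i * f_c * L (communicating civilizations)."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

low = drake(r_star=1.5, f_p=1.0, n_e=0.5,
            f_l=0.01, f_i=0.001, f_c=0.1, lifetime_years=100)         # pessimistic placeholders
high = drake(r_star=3.0, f_p=1.0, n_e=5.0,
             f_l=1.0, f_i=0.1, f_c=1.0, lifetime_years=1_000_000)     # optimistic placeholders

print(f"N somewhere between {low:g} and {high:g}")  # spans roughly ten orders of magnitude
```

With placeholder values like these the answer swings across roughly ten orders of magnitude, which is exactly the "maybe zero, maybe anything" problem described above.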
Looking at the units in those factors, we have
- civilizations = (stars/time) * (a bunch of fractions that amount to civilizations/star) * time
which is perfectly valid. However, I'm not sure it's the best match for the problem that we're trying to solve. I've argued previously that timing is important. The last factor (length of time a civilization produces detectable signals) takes that into account, but the other time factor, in the rate of star formation, seems less relevant. There are billions of stars in the galaxy. At a rate of a couple of stars per year that's not going to change meaningfully over human timescales.
So let's try the same general idea but with different units:
- expected signal = planets * (expected signal / planet)
First, shift the focus from stars to planets. For our purposes here that includes objects like planet-sized moons of gas giants. This cuts out the estimation of star formation and planets per star, since we can now observe planets (in some cases even directly) and get a pretty good count of them. Or at least we're now guessing about planets directly, instead of guessing about stars and planets.
Then, let's pull back a bit from the details of how a planet would produce a signal of intelligent life, and focus on the signal itself, by estimating how strong a signal we can expect from a given planet. This consolidates the estimates of life evolving, civilization evolving, civilization developing technology and the duration of any signal into a single factor.
The "expected" means we're looking at weighted probabilities. To take a familiar example, if you roll a six-sided die and I pay you $10 per pip that comes up, you should expect to get $35 on average and you shouldn't pay more than that to play the game. This really only holds up if you expect to play the game a number of times. If you only roll the dice once, you could always just get a bad roll (or a good one).
Likewise, if we say that a planet is producing a signal of a given expected strength, we're saying that's the average strength over all the possibilities for that planet -- maybe it's young with only one-celled life, maybe it's harboring a civilization that's producing radio signals, etc. We're not claiming that it's actually producing a signal of that strength. We can get away with this, more or less, because we'll be adding up expectations over a reasonably large number of planets.
Looking at expected signal accounts for several factors. What a planet emits in the radio spectrum will vary over time. The raw strength will vary; Earth has gone from watts to at least gigawatts in the past century or so. The signal-to-noise ratio will also vary; as we make better use of encryption, compression and such, our signal looks more like noise. Signal strength also accounts for distance: a radio signal's strength falls off as the inverse square of the distance.
A given planet will have a particular profile of signal strength over time. Ours is zero for most of our history, rises sharply as humans develop radio, and (I've argued) will drop off significantly as we come to use radio more efficiently and rely on broadcast less and less.
Even knowing how far away a planet is and how much background noise there is, there are two sources of uncertainty in the strength of signal we would expect to detect: we don't know what the signal strength profile for a given planet is, and we don't know where we are in that profile, that is, just how old the planet is at the moment.
For the first uncertainty, the best we can currently do is compare to our experience on Earth. My best guess is that we should expect a very brief blip (brief on planetary scales). If we expect a blip on the order of a hundred years and a planetary age on the order of billions of years, this reduces the expected signal -- again, "expected" in the probabilistic sense -- at any given time to a very low level. This would be true even if planets occasionally send out strong, targeted transmissions, as ours does.
In the absence of anything better, we can account for the second uncertainty by averaging the signal strength over the expected age of the planet. That is, we assume the planet could be at any point in its history with equal probability. In real life, we may be able to do better by looking at factors like the age of the star and the amount of dust around it.
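Here's a minimal sketch of that averaging, assuming (purely as placeholders) a gigawatt-scale radio blip lasting about a hundred years on a planet roughly four and a half billion years old:

```python
# Sketch: average a brief "blip" signal profile over a planet's whole history.
# The shape and the numbers are illustrative placeholders, not estimates.

PLANET_AGE_YEARS = 4.5e9      # on the order of billions of years
BLIP_DURATION_YEARS = 100     # brief on planetary scales
BLIP_POWER_WATTS = 1e9        # order of gigawatts at peak (placeholder)

def emitted_power(age_years):
    """Signal profile: zero for most of the planet's history, then a brief blip at the end."""
    if PLANET_AGE_YEARS - BLIP_DURATION_YEARS <= age_years <= PLANET_AGE_YEARS:
        return BLIP_POWER_WATTS
    return 0.0

print(emitted_power(1e9), emitted_power(PLANET_AGE_YEARS - 50))  # quiet for eons, then the blip

# Expected power if the planet could be at any point in its history with equal
# probability: the time-average of the profile over its entire age.
expected_power = BLIP_POWER_WATTS * BLIP_DURATION_YEARS / PLANET_AGE_YEARS
print(f"{expected_power:.3g} W")  # ~22 W -- a gigawatt blip averages down to a trickle
```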
Strictly speaking we should be talking about intervals rather than instants, since listening for a million years is more likely to turn something up than listening for a hundred, but human timescales are tiny enough that this doesn't really affect our calculations of what we should expect with current or near-future technology over our lifetimes. Either way, we can still define expected signal.
We also need to account for the distribution of planets in space. If stars were uniformly distributed in space and background noise didn't matter, this would cancel out the effect of decreasing signal strength, since the number of stars at a given distance would increase as the square of the distance.
But they're not. If they were, the nighttime sky would be uniformly bright in all directions. The Milky Way is only about a thousand light-years thick, so beyond about half that distance the number of stars increases much more slowly than the square of the distance. This means we're really looking at a weighted sum of expectations rather than simply multiplying planets by expectation per planet, but that doesn't greatly change the overall analysis.
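A back-of-the-envelope way to see this: count stars in thin spherical shells around us, assuming (as placeholders) a uniform density of roughly 0.004 stars per cubic light-year confined to a disk about a thousand light-years thick. Inside the disk a shell's volume grows as the square of its radius; once the shell pokes out of the disk, it grows only linearly.

```python
import math

# Placeholder back-of-the-envelope: stars in thin spherical shells around us,
# assuming a uniform density confined to a disk ~1000 light-years thick.

DENSITY = 0.004        # stars per cubic light-year (rough local figure, placeholder)
HALF_THICKNESS = 500   # light-years, half the disk thickness

def stars_in_shell(radius_ly, dr_ly=1.0):
    """Stars in a thin shell of the given radius, clipped to the disk."""
    if radius_ly <= HALF_THICKNESS:
        volume = 4 * math.pi * radius_ly ** 2 * dr_ly              # full shell: grows as r^2
    else:
        volume = 4 * math.pi * radius_ly * HALF_THICKNESS * dr_ly  # clipped shell: grows only as r
    return DENSITY * volume

for r in (50, 500, 5000):
    # Received flux per star drops as 1/r^2, so only the r^2 growth inside the
    # disk can compensate for it; beyond the disk, distant shells stop keeping up.
    print(r, round(stars_in_shell(r)))
```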
Finally, we should take background noise into account. As the strength of a signal (actual, not expected strength) drops toward zero, our ability to detect it doesn't drop in tandem. Once the signal becomes weaker than the general background noise in that part of the sky, our chances of detecting it are already very near zero. This correction should be applied to the signal profile before averaging over time.
My engineering intuition tells me that the upshot is that we can neglect planets more than a relatively short distance away, say tens of light-years. At some point background noise will wash everything out. That's more or less the limit for having a meaningful conversation anyway, since it takes a year for a radio signal to travel a light-year.
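To make that intuition concrete, here's a minimal sketch with a purely made-up detectability threshold; the real threshold depends on receiver sensitivity, bandwidth, integration time and the noise in that patch of sky, so the cutoff distance below is illustrative only.

```python
import math

# Sketch of the distance cutoff: received flux falls off as the inverse square of
# distance, and anything below the local noise floor counts as undetectable.
# The emitter power and the threshold are placeholders, not estimates.

LIGHT_YEAR_M = 9.461e15

def received_flux(emitted_power_w, distance_ly):
    """Flux in W/m^2 from an isotropic emitter at the given distance."""
    d = distance_ly * LIGHT_YEAR_M
    return emitted_power_w / (4 * math.pi * d ** 2)

NOISE_FLOOR = 1e-27  # placeholder detectability threshold, W/m^2

for distance in (1, 10, 100, 1000):
    flux = received_flux(1e9, distance)   # a gigawatt emitter, for illustration
    print(distance, f"{flux:.2g}", flux > NOISE_FLOOR)
# With these placeholder numbers, detectability cuts off somewhere between ten
# and a hundred light-years; the exact cutoff depends entirely on the threshold.
```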
So where does that leave us?
Estimating the probability of a detectable signal from a planet requires knowing
- The distribution of planets as a function of distance. Our knowledge of this has sharpened dramatically over the past couple of decades.
- The effect of distance on the strength of a signal we detect. This is fairly well understood.
- The background noise for any particular location in the sky. This is directly observable.
- The expected strength of the signal emitted by a planet, averaged over its lifetime. This is where the uncertainty is concentrated.
Essentially we've consolidated all the various fractions of the Drake equation into a single factor and characterized it in terms of signal strength over time (which we then average over time unless we can think of something better).
When searching for life, "signal" doesn't necessarily mean "radio signal". Soon we will be able to search for signatures such as high levels of oxygen in the atmosphere, which would suggest life of a form similar to ours, though not necessarily intelligent, technological or whatever. This signal would have a much different profile from radio. In our case it would jump rapidly from zero to full strength relatively early in our history and stay there for billions of years. It may also be a stronger signal than radio leakage, in the sense that we can feasibly detect it from further away.
If we take our experience on Earth as a basis, this implies it's quite likely that we'll detect life on other planets, but unlikely that we'll detect radio signals (or, probably, other smoking-gun signs of civilization as we know it). Looking for signatures of life in general is probably going to be more informative in any case. If we don't find any radio signals from other planets, which seems more and more likely, it could just be because even planets with intelligent life don't tend to emit high signal-to-noise radio signals for long. If we find chemical signatures indicating life on X% of planets with detectable atmospheres, that gives a strong estimate of the probability of life arising in general. This is true whether X is 0, 100 or something in between.
[Technical note: Somewhat ironically, since I started out talking about unit analysis, the units here are less clear than they might be. If we're talking about radio, then at any given moment a planet is emitting radio signals at a given power, say X Watts. Power is energy per unit time. Probably the most natural way of expressing what we actually detect over time is an amount of energy, say Y Joules -- power times time is energy. We'd like that to stay the same whether we're talking about an actual measurement or a probabilistic estimate. So the quantity we're trying to estimate for a given planet is power.
If we assume a particular profile of power over time, and we average it, we're summing up power over time to get total energy, then dividing by the total time span over which we think we might be looking -- the age of the planet -- to get power again. Accounting for distance still gives power, that is energy we expect to receive per unit time. Using units of power also accounts for the amount of time we spend looking. If we look for 100 years we expect to detect 10 times as much signal (energy) as if we look for 10 years. I tried to gloss over that in the main article on the grounds that the numbers are all likely to be too small to matter. But it's better to think of a minuscule amount of power over a shorter or longer time than to try to assume everything's an instant.
I've made a few edits to the main article, mainly changing "signal strength" to "signal" in several places to try to reflect this.]
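To make that bookkeeping concrete, here's a short sketch using the same illustrative blip numbers as before (a gigawatt for about a hundred years on a planet about four and a half billion years old); the figures are placeholders, and distance and noise are ignored.

```python
YEAR_S = 3.156e7                       # seconds in a year

# Illustrative blip: ~1 GW for ~100 years on a ~4.5-billion-year-old planet.
blip_energy_j = 1e9 * 100 * YEAR_S     # power * time = energy emitted during the blip
planet_age_s = 4.5e9 * YEAR_S

expected_power_w = blip_energy_j / planet_age_s   # back to power: roughly 22 W
print(f"{expected_power_w:.3g} W")

# Expected signal, measured as energy, scales linearly with how long we listen.
# (This uses the emitted-power figure; accounting for distance would scale it way down.)
for listening_years in (10, 100):
    print(listening_years, f"{expected_power_w * listening_years * YEAR_S:.3g} J")
```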
[And having gone through all that, and thought it over a bit more ... the really natural units to use here are bits and bits per second. At the end of the day, we're trying to glean information from listening to the skies, and information is measured in bits. This accounts for several troublesome factors:
- We're trying to estimate detectable information from other planets. This starts by estimating what information they transmit over time, as measured by an observer in the near vicinity (say, in low Earth orbit or on the Moon in our case).
- I've argued that as we use compression and encryption more, our signal looks more like noise. This is quantifiable in terms of bits and bit rates.
- If a planet is far away or in a noisy area of the sky, we're less likely to detect a signal from it. There are well-established formulas relating signal power, bandwidth and signal/noise ratios that can be used to translate an estimate of what radio signals a planet emits to an estimate of bits/second we could detect.
- As above, integrating bits/time over time spent listening gives us the total information we would expect to detect, which is arguably the quantity of interest in the whole exercise.
So:
- bits detected = sum over time of the sum over planets of the bits per second we expect to detect from each planet
- leaving out the sums, which don't change the units: bits = (bits/second)/planet * planets * seconds
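One of the well-established formulas mentioned above is the Shannon-Hartley capacity theorem, which relates bandwidth and signal-to-noise ratio to bits per second. Here's a minimal sketch with placeholder numbers showing how detectable bits per second collapse as the signal sinks toward the noise:

```python
import math

# Shannon-Hartley: channel capacity (bits/second) from bandwidth and signal-to-noise
# ratio. The bandwidth and the S/N values below are placeholders for illustration.

def capacity_bps(bandwidth_hz, signal_power_w, noise_power_w):
    """C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + signal_power_w / noise_power_w)

BANDWIDTH_HZ = 1e6   # placeholder receiver bandwidth

for s_over_n in (10.0, 1.0, 0.1, 0.001):
    bps = capacity_bps(BANDWIDTH_HZ, s_over_n, 1.0)
    print(s_over_n, f"{bps:.3g} bits/s")
# Multiplying bits/second by seconds spent listening gives total bits,
# matching the unit bookkeeping above.
```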
Re: time factors: We're looking not for the time during which a civilization emits a detectable signal, but the overlap between that time (probably quite short) and the time we're in a position to detect it (maybe even shorter).
Yes. Averaging over a planetary timescale is an approximation in the absence of any information about how old the planet is. If we know more, we can make better approximations. For example, if we detect convincing evidence of life in general, we might up the estimate, though, based on our experience on Earth, perhaps not by much.
Similarly, treating our observation period as an instant is an approximation. Treating it as an interval changes the math a bit, but not the overall structure.
In sum, yes, we're looking at the expected overlap, but at the moment we have only very crude tools for estimating it. So much so that it's probably more interesting to work backwards from the number of planets we end up observing with suitable atmospheric signatures to the timing of life developing on a typical planet.