Monday, March 20, 2017

Did Dory jump the shark?

I was fortunate enough to attend SIGGRAPH 86 and see the premiere of Luxo Jr.  If you haven't seen it, I'd highly recommend you do.  It's only two minutes long.

Luxo Jr. was an eye-opener to me for a number of reasons.  First, and this may be hard to believe now, it was a technical milestone.  At the time, the field of computer graphics was in the process of moving from 3D wireframes like this to something more realistic, and Pixar did a lot of the heavy lifting in that move.

There were a number of problems to be solved at the time.  Some of them had to do with how to render an image of a mathematical model, for example:
  • How to draw exactly what should be visible (hidden-line and hidden-surface removal).  If your model has a cube, an image of that model should only show the faces nearer to you and not the ones on the back -- or anything that's covered by nearer objects in the scene.
  • How to show more realistic textures than just flat polygons.  At first blush you might think that, say, a house is just a few flat walls with windows cut out.  But those walls won't just be flat surfaces.  There might be brick, or siding.  Even a concrete or stucco surface will have little irregularities.  Drawing flat surfaces with uniform colors will convey the overall design, but it won't look like the real thing.
  • How to deal with atmospheric effects.  In real life, there might be smoke or mist in the air.  Even on a clear day distant objects will have more muted colors than nearby ones.
  • How to deal with shiny objects.  Even in the best case, the math for figuring out how bright a particular point on a surface should be is harder for shiny surfaces.  At worst, you have to deal with reflections of other objects, and reflections of reflections, and so on, something like this.
  • How to deal with transparent and translucent objects (which might also be shiny).  Again, this ranges from harder math for the shading to figuring out how the rest of the scene appears when distorted by a curved surface.
  • How to deal with shadows.  If one part of your model is between a light source and another part, that other part will, naturally enough, be darker.
  • A whole slew of subtler optical effects -- color bleeding, depth of field, motion blur, caustics and probably several others I don't remember.  I recall one presenter at a conference half-joking that the whole field had devolved into finding a new optical subtlety and writing a paper about how to render it.
Even if you knew how to render a model accurately, there were thorny questions about modeling:
  • Real scenes contain a whole lot of objects.  Look around next time you're outside -- or inside an average house, office, store or whatever.  A realistic rendering will have to account, somehow, for every blade of grass, every leaf, every feather of every bird, every rock on a gravel path, and so forth.  You don't necessarily have to create a separate object for every detail, but somehow you have to be prepared to render either a green grassy texture or blades of grass, depending on how closely you're looking.  Keep in mind that at that time a typical mobile phone of today would have seemed like a supercomputer.  (That may seem like hyperbole, but it's not.  The ubiquitous SPARCstation 2, for example, ran at 40MHz with 128MB of RAM.)
  • Objects move.  In reality, they obey the laws of physics.  In an entertainment video, they might move in all sorts of non-physical ways, but anything that's supposed to look lifelike had better move more or less like a real-live thing.  Modeling the movement of a piece of clothing, or a full head of hair, or the surface of the ocean, or the flames in a fire, were each good for multiple published papers.
There were (and, I think still are for the most part) two approaches to problems like these:
  • Grinding out exact solutions to the optics (for rendering) and physics (for modeling)
  • Finding Stuff That Works.
At the time, ray-tracing was the state of the art for bashing out the optics, though that would soon be superseded by radiosity -- which had the distinction of being even slower than ray-tracing -- and more sophisticated numerical approaches.  Jim Kajiya laid out a general form for the problem to be solved and demoed an image that used Monte Carlo simulation to produce what he called "a great simulation of film grain" (see the end of this PDF of the paper).  It was a technical tour de force, solving a good chunk of the rendering problems above with one integral equation, using techniques that had been used to model the atomic bomb a generation earlier, among other things.  It was not, however, a very impressive demo unless you knew exactly what to look for.
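For the curious, the equation in question, in its now-standard form (I'm writing this from memory, and Kajiya's original notation was somewhat different), looks roughly like

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i

that is, the light leaving a point in a given direction is whatever the point emits on its own, plus all the incoming light summed over the hemisphere above it, weighted by how the surface scatters each incoming direction toward the viewer.  Monte Carlo rendering estimates that integral by tracing randomly sampled rays, which is exactly where the "film grain" comes from.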


Pixar took the other, entirely different approach*.  They handled hidden surfaces through what came to be known as "polygon pushing" -- reducing everything to a model with flat sides that was close enough to the real thing.  Flatter parts of surfaces could get by with fewer polygons than curvier parts.  You could then sort those polygons to see which was closest to the eye at any particular point.  Fast sorting algorithms had been around for decades.  Sorting in three dimensions is harder, but it's still possible to do it relatively quickly, even on what was fairly ordinary hardware.
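For readers who like to see things in code, here's a toy sketch of the depth-sorting idea in present-day Python (essentially the classic painter's algorithm).  It's only my illustration of the general approach; the polygon objects and draw callback are hypothetical placeholders, not anything from Pixar's actual renderer.

    # Toy sketch of depth sorting (the painter's algorithm): draw polygons from
    # farthest to nearest so the nearer ones overwrite the ones behind them.
    # The polygon objects and draw() callback are hypothetical placeholders.
    import math

    def paint_back_to_front(polygons, draw, eye):
        """polygons: objects with a .vertices list of (x, y, z) points.
        draw: callback that rasterizes a single polygon.
        eye: (x, y, z) position of the viewer."""
        def depth(poly):
            # Average distance from the eye to the polygon's vertices.
            return sum(math.dist(eye, v) for v in poly.vertices) / len(poly.vertices)
        for poly in sorted(polygons, key=depth, reverse=True):
            draw(poly)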

They handled shadows through "shadow mapping", essentially calculating where shadows would fall on a surface and making that a property of the surface.  You could figure out where the shadows would fall by looking at the scene from the point of view of the light source, using the same sorting algorithm as for hidden surfaces.  You only had to re-do the shadow map when things moved, and much of a typical movie scene is background or otherwise not moving.
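Again purely as a sketch of the idea (not anything Pixar actually shipped), the core of a shadow-map lookup is a depth comparison: render the scene's depths once from the light's point of view, then for any surface point ask whether something else sits closer to the light along that same line.

    # Minimal shadow-map test.  depth_map is assumed to hold, for each cell, the
    # nearest depth seen from the light; producing it is the same hidden-surface
    # problem as above, just viewed from the light instead of the eye.
    def in_shadow(x, y, depth_from_light, depth_map, bias=1e-3):
        """x, y: the point's position in the light's image plane, each in [0, 1).
        depth_from_light: the point's distance from the light.
        depth_map: 2D list of the nearest depths seen from the light."""
        rows, cols = len(depth_map), len(depth_map[0])
        i = min(int(y * rows), rows - 1)
        j = min(int(x * cols), cols - 1)
        # If something closer to the light occupies this cell, the point is in
        # shadow.  The small bias keeps surfaces from shadowing themselves due
        # to rounding error.
        return depth_from_light > depth_map[i][j] + bias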

They handled textures with texture mapping and bump mapping, which treated the surfaces as flat but then modified the color or local orientation used in the actual shading calculations based on what exact part of the surface you were looking at.  That's how the wooden floor in Luxo Jr. was done.
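In the same sketchy spirit, the lookup itself: at shading time you ask the texture what color this exact spot on the surface is, and (for bump mapping) tilt the normal a little based on the local slope of a height map, so the lighting calculation sees bumps that the geometry doesn't actually have.  The inputs below are hypothetical, just to show the shape of the idea.

    # Sketch of texture and bump lookups at surface coordinates (u, v) in [0, 1).
    # The texture and height_map inputs are hypothetical 2D lists, not assets
    # from Luxo Jr.
    import math

    def sample(image, u, v):
        """Nearest-neighbor lookup of a 2D grid at texture coordinates (u, v)."""
        rows, cols = len(image), len(image[0])
        return image[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

    def bumped_normal(height_map, u, v, strength=1.0):
        """Normal for a nominally flat surface (flat = (0, 0, 1) in its own frame),
        tilted by the local slope of a height map so the shading sees bumps the
        geometry doesn't actually have."""
        rows, cols = len(height_map), len(height_map[0])
        du = sample(height_map, u + 1.0 / cols, v) - sample(height_map, u, v)
        dv = sample(height_map, u, v + 1.0 / rows) - sample(height_map, u, v)
        nx, ny, nz = -strength * du, -strength * dv, 1.0
        length = math.sqrt(nx * nx + ny * ny + nz * nz)
        return (nx / length, ny / length, nz / length)

The shading pass would then use sample(texture, u, v) for the color and bumped_normal(...) in place of the true geometric normal.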

They also developed algorithms for modeling the movement of the lamps and their cords, but I'm less familiar with that.  Overall they built up a library of rendering techniques, modeling techniques and models, some general-purpose and powerful, some specialized to particular tasks.  Just as important, they built a framework to plug it all into harmoniously.

Kajiya's paper was a great example of the scientific approach, and it ended up underpinning a chunk of important work.  It offered only an approximate solution, out of necessity, to the actual problem of putting pixels on the screen, but it rigorously defined the exact problem to solve.

Pixar did engineering.  They figured out what mattered and what didn't for the purposes of producing an image that would fool the eye in an entertaining video -- basically which shortcuts people would and wouldn't notice -- and applied their resources to solving the problems that mattered.  They also developed software for managing a server farm doing the rendering and all kinds of tools to support the animators in making their magic.


I suppose I should take a moment to push back against a couple of stereotypes.  It's tempting to write off "the scientific approach" as "of no practical value" or the engineering approach as "just a bunch of hacks".  From what I can tell, though, it's hard to write a useful scientific paper in CS without knowing how to code, and it's hard to come up with a good practical hack without understanding what the full solution looks like.  Both have been done, but most people who've made a difference have a healthy dose of both practical and theoretical knowledge and tend to move back and forth on the deep insight/cheap hack scale as the occasion demands or the mood strikes.


But all this technical discussion leaves out what made, and makes, Pixar truly special.  The Pixar folks didn't just have formidable technical chops and great engineering sense.  They told stories.

This was a conscious decision from the outset.  John Lasseter and the rest of the team paid a lot of attention to the generation of animators before them, particularly the Disney studio.

If you're drawing every single frame of a picture by hand, even if you're using techniques like cel animation to re-use background drawings, you have to make every line count.  The people who we now call the "traditional animators" developed a set of techniques, for example squash and stretch, to illustrate motion without detailing every single movement.  They studied facial expressions in order to make their characters emote in a way we instinctively understand.  They watched how people and animals moved in order to capture the essence of lifelike motion.  They noticed that cute baby animals had (relatively) bigger heads than their adult counterparts, and made countless other observations that went into their work.

If you're just trying to figure out how to shade a model of a teapot by the conference submission deadline you probably won't pay much attention to these things, but the Pixar team did because their goal, from the beginning, was to tell stories with animation.  This is crystal clear from the very start.  The story in Luxo Jr. is pretty simple, but it's clearly a story, with characters with real emotions, even if those characters are metal desk lamps.  In fact, that's the magic: Inanimate, computer-generated desk lamps brought to life -- literally animated.

Watching it at the time was one of those "I didn't realize you could do that" moments, not so much from the technical point of view, though it's technically quite good as well, but because after antiseptic wireframe video games and shiny special effects and endless discussions of ray-tracing vs. polygon pushing it didn't seem like storytelling had much at all to do with the field.


My co-workers and I went to dinner at a steakhouse in Dallas afterward.  I remember talking about what portion of the real-life scene there could be modeled and rendered realistically with the resources available.  Having seen a few papers presented on techniques for rendering transparent objects with curved surfaces I claimed that the wine glasses could be handled OK (not a foregone conclusion at that point).  My boss dipped his thumb in steak juice and smudged it on the glass.  "Render that".  I muttered something about transparency mapping and such, and I might have been right, but the point was made.

With the tools we have these days, that smudge would be a minor obstacle.  Computer-generated scenes still often have that too-clean look to them, but that's more a matter of choice.  Computer imagery can handle grit and grime, but it's often easier to model without it.  If it makes sense for the setting or character, it's there, but otherwise it's usually not.  Also, I suspect, it's easier for an audience to make sense of a scene if the animated main characters look somewhat unnaturally clean and shiny while the trees off in the distance look realistic.


Which brings us to Finding Dory.

In my opinion it's not a bad film, but there's something missing.  Technically, it continues Pixar's upward trend in awesomeness.  The modeling for Hank the Septopus is so seamless you forget all about the huge amount of work that must have gone into it, from the motions of the tentacles to studying enough octopus behavior to make Hank move like a realistic cephalopod, to knowing enough old-school animation technique to make him expressive within those parameters.  And there's plenty more where that came from.

There are a number of acceptable breaks from reality, starting with talking animals, and on to reading animals, truck-driving animals, aquatic animals spending unlikely amounts of time out of water, and even a plot-convenient echolocation ability that apparently doesn't use ultrasound and works through air as well as water -- not to mention navigating around bends in pipes while still conveying that there are bends at all.  That's all fine.  I mean, if you're OK with talking underwater animals, hard-boiled skepticism is pretty much out the window to start with.

The problem, unfortunately, is the storytelling.

I had to stop here for a bit, partly because, even if I'm a bit of a curmudgeon, I don't really relish the thought of criticizing Dory, Nemo and the gang.  Curmudgeons can still be fans.  Mostly though, I realized that if I wanted to go there, I should at least have a specific reason to go there, and it took me a little while to pinpoint that reason.

In Finding Nemo, one of the best moments, and probably the biggest emotional payoff, is when Dory, the cognitively impaired blue tang who at first seems to have been there for comic relief and to play the role of the wacky, plot-complicating sidekick, realizes "I look at you and…I’m home."**  The setting is as spare as can be, just two characters alone against a plain backdrop, one of them not even speaking, and that's what makes it work: the characters, and their slow realization of what's happened.

Dory didn't see it coming because, well, she's Dory and she had only fleeting hints that she was lost in the first place.  Marlin didn't see it coming because he'd been consumed by his quest to redeem his guilt and remorse over losing Nemo.  The audience didn't see it coming because the rest of the story was zipping along at Pixar's usual frenetic-but-impeccably-timed pace and keeping us engaged with a steady parade of engaging characters.  It also doesn't hurt that Ellen DeGeneres delivers the speech perfectly.

And that's where Finding Dory's trouble begins.  It's just going to be really hard to top a moment like that.  It's probably not a good idea to even try.  If Dory's backstory stays a backstory we can carry it with us however we like.  Probably better to leave that magic alone.  But at the same time, you can't blame Pixar for trying anyway.  "How are you going to top that?" has driven a lot of creative people to a lot of really good work.  I can't imagine there wasn't a little voice in the back of someone's head saying "Challenge accepted."

Meanwhile, rendering and modeling technology march on.  Realistic waves crashing on a beach?  We can do that.  Schools of fish circling in a cylindrical tank?  No problem.  Northern California vegetation in a light mist?  That's the morning commute.  How about some Toy Story-style kids wreaking havoc as they plunge their hands into the touch pool, kicking up clouds of sand?  Done.  It's not that Pixar has ever been shy about pushing the technical envelope.  It just seems a bit -- visible.  Technique is hardly ever meant to be visible.

And of course, the mouse must be fed.  When Dory dodged under Destiny the shark at the last second, I couldn't help thinking "That'll feature somewhere in a Disney ride".  And it's not hard to guess which characters were likely to make for hot-selling plush toys.  Nemo, Marlin, Crush the sea turtle and whoever else have to be there because sequel.

It's not that commercial tie-ins and franchise characters are bad per se.  Those server farms don't run themselves (well, at least not yet).  It's just that, like the technical mastery, the commercial machinery is not supposed to actually jump out at you.


In the end, Finding Dory's weakness boils down to fundamentals: the external constraints are muscling in on the plot, and the plot is driving the characters, when it should be the other way around.  In Luxo Jr., there's hardly any plot at all.  The whole point is to use the technology -- really just a bunch of crunching of a bunch of numbers describing colors, geometric shapes and such -- to show us believable characters.  Character wins, maybe not every single time, but almost always.  That's especially true if you're Pixar, which is why between Luxo Jr. and Finding Dory, that two-minute short is the better film of the two.

Is this the end of Pixar as we know it?  Is it all merchandising and sequels from here on out?  Well, three of the four upcoming Pixar projects with titles are sequels (Cars 3, The Incredibles 2 and Toy Story 4).  Let's hope that Lasseter's pledge that "If we have a great story, we'll do a sequel" holds.  I haven't seen Monsters University or Toy Story 3 (I think), but as I recall Pixar handled Toy Story 2 pretty deftly, sequel though it was.

Really it's impressive that they haven't stumbled any more than they have, all things considered.  But this one definitely feels like a stumble.



* I'm writing most of this from memory, so I'm only mostly confident it's mostly right.  Corrections are welcome.
**Just put "Dory home" in the search bar, 14 years after Finding Nemo came out, and there it is.

[I still see it on the first page of hits, but there's a lot of Finding Dory mixed in with it now.  Not sure what to make of that --D.H. Mar 2020]

Friday, March 10, 2017

Science on a shoestring

On the other blog I would occasionally put out short notices of neat hacks (as always, "hack" in the "solving problems ingeniously" sense).  I recently ran across one that didn't have much to do with the web, so I thought I'd carry that tradition over to this blog.


Muons are subatomic particles similar to electrons but much heavier.  They are generally produced in high-energy interactions in particle accelerators or from cosmic rays slamming into the atmosphere.  Muons at rest take about 2 microseconds to decay, actually a pretty long time for an unstable particle.  Muons from cosmic ray collisions are moving fast enough that they take measurably longer to decay (in our reference frame), which is one of the many pieces of supporting evidence for special relativity.
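A quick back-of-the-envelope sketch of why that counts as evidence (the speed here is a round, illustrative number, nothing to do with GRAPES-3's actual measurements): at rest, a muon's roughly 2.2-microsecond lifetime only buys it a few hundred meters of travel, but cosmic-ray muons are produced many kilometers up, and time dilation is what lets so many of them survive to reach detectors on the ground.

    # Back-of-the-envelope time-dilation check.  The 0.998c speed is an
    # illustrative assumption, not a GRAPES-3 figure.
    C = 3.0e8      # speed of light, m/s
    TAU = 2.2e-6   # mean muon lifetime at rest, s

    v = 0.998 * C
    gamma = 1.0 / (1.0 - (v / C) ** 2) ** 0.5
    print(f"Without dilation: about {v * TAU:.0f} m before a typical muon decays")
    print(f"With dilation (gamma = {gamma:.1f}): about {v * gamma * TAU / 1000:.1f} km")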

The GRAPES-3 detector at Ooty in Tamil Nadu, India, detects just such muons using an array of detectors set into a hill 2200m (7200 ft) above sea level.  The detectors themselves are made largely from recycled materials, particularly square metal pipes formerly used in construction projects in Japan.  The total annual budget for the project is under $400,000, but the team has already produced significant results.  Auntie has more details on the construction of the instruments here.

There are a couple of narratives that are often spun around stories like this.  One is a sort of condescending "Isn't that cute?" with maybe a reference to the Professor on Gilligan's Island building a radio out of coconuts.  Another is "Look what people can do without huge budgets.  Why do we need all these multi-billion-dollar projects anyway?"

I'd rather not tell either of those.  What I see here is highly skilled scientists making use of the resources they have available to produce significant results.  Their counterparts at CERN or whatever are making use of different resources to produce different significant results.  Both are moving the ball forward.  There have been plenty of neat hacks at CERN, including something called "HTTP",  but today I wanted to call out GRAPES-3, mainly because it's just plain cool.

Friday, March 3, 2017

Reworking the Drake equation

In speculating about life on other worlds (here and here for example) the Drake Equation provides a useful framework.  This equation multiplies a number of factors to arrive at the number of civilizations in the Milky Way that would be technologically capable of communicating with us.
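For reference, the usual form of the equation, with the factors in the same order as the list below:

    N = R_* \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L

where N is the number of detectable civilizations, R_* is the rate of star formation, L is how long a civilization stays detectable, and the f and n factors whittle stars down to civilizations.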

When it was first formulated, most if not all of the factors had such wide error bars that it was hard to argue that any meaningful number could come out of it.  An answer of the form "2.5 million, but maybe zero and maybe several billion or anything in between", while honest, is not a particularly useful result.  For much of the time the Drake Equation has been around, it's been useful more as a framework for reasoning about the possibility of alien civilizations (and, in my opinion, a reasonable one) than as a way of producing a meaningful number.

Recently, though, a couple of the error ranges have tightened considerably.  Let's look at the factors in question:
  • the average rate of star formation in our galaxy.  This is currently estimated at 1.5 - 3 stars per year
  • the fraction of formed stars that have planets. This is quite likely near 100%
  • the average number of planets per star that can potentially support life.  There is some dispute over this.  You can find numbers from 0.5 to 4 or 5, and even outside that range.  My personal guess is toward the high end. 
  • the fraction of those planets that actually develop life.  At this point we can only extrapolate from life on Earth, a minimal and biased sample.  It's noteworthy that life now seems to have begun shortly (in geological terms) after suitable conditions arose.
  • the fraction of planets bearing life on which intelligent, civilized life has developed.  Developing intelligent life as we understand it took considerably longer: billions of years.  Again extrapolating from our one known example, this implies that a large fraction of life-bearing planets haven't been around long enough to develop intelligent life.
  • the fraction of these civilizations that have developed technologies that release detectable signals into space.  Still extrapolating, this fraction may be pretty high.   On geological scales, humanity developed radio pretty much instantaneously, suggesting it was nearly inevitable.
  • the length of time, L, over which such civilizations release detectable signals.  I've argued that this is probably quite short (see the links above and the discussion below for a bit more detail).
Looking at the units in those factors, we have
  • civilizations = (stars/time) * (a bunch of fractions that amount to civilizations/star) * time
which is perfectly valid.  However, I'm not sure it's the best match for the problem that we're trying to solve.  I've argued previously that timing is important.  The last factor (length of time a civilization produces detectable signals) takes that into account, but the other time factor, in the rate of star formation, seems less relevant.  There are billions of stars in the galaxy.  At a rate of a couple of stars per year that's not going to change meaningfully over human timescales.

So let's try the same general idea but with different units:
  • expected signal = planets * (expected signal / planet)
First, shift the focus from stars to planets.  For our purposes here that includes objects like planet-sized moons of gas giants.  This cuts out the estimation of star formation and planets per star, since we can now observe planets (in some cases even directly) and get a pretty good count of them.  Or at least we're now guessing about planets directly, instead of guessing about stars and planets.

Then, let's pull back a bit from the details of how a planet would produce a signal of intelligent life, and focus on the signal itself, by estimating how strong a signal we can expect from a given planet.   This consolidates the estimates of life evolving, civilization evolving, civilization developing technology and the duration of any signal into a single factor.

The "expected" means we're looking at weighted probabilities.  To take a familiar example, if you roll a six-sided die and I pay you $10 per pip that comes up, you should expect to get $35 on average and you shouldn't pay more than that to play the game.  This really only holds up if you expect to play the game a number of times.  If you only roll the dice once, you could always just get a bad roll (or a good one).

Likewise, if we say that a planet is producing a signal of a given expected strength, we're saying that's the average strength over all the possibilities for that planet -- maybe it's young with only one-celled life, maybe it's harboring a civilization that's producing radio signals, etc.  We're not claiming that it's actually producing a signal of that strength.  We can get away with this, more or less, because we'll be adding up expectations over a reasonably large number of planets.

Looking at expected signal accounts for a couple of factors.  What a planet emits in the radio spectrum will vary over time.  The raw strength will vary.  Earth has gone from watts to at least gigawatts in the past century or so.  The signal to noise ratio will also vary.  As we make better use of encryption, compression and such, our signal looks more like noise.  Signal strength also accounts for distance.  A radio signal falls off as the square of the distance. 

A given planet will have a particular profile of signal strength over time.  Ours is zero for most of our history, rises significantly as humans develop radio and (I've argued) will drop off significantly as we come to use radio more efficiently and use broadcast less and less.

There are two sources of uncertainty in what strength of signal we would expect to detect, knowing how far away a planet is and how much background noise there is:  We don't know what the signal strength profile for a given planet is, and we don't know where we are in that profile, that is, just how old the planet is at the moment.

For the first uncertainty, the best we can currently do is compare to our experience on earth.  My best guess is that we should expect a very brief blip (brief on planetary scales).  If we expect a blip on the order of a hundred years and a planetary age on the order of billions of years, this reduces the expected signal -- again, "expected" in the probabilistic sense -- at any given time to a very low level.  This would be true even if planets occasionally send out strong, targeted transmissions, as ours does.

In the absence of anything better, we can account for the second uncertainty by averaging the signal strength over the expected age of the planet.  That is, we assume the planet could be at any point in its history with equal probability.  In real life, we may be able to do better by looking at factors like the age of the star and the amount of dust around it.
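To put an illustrative number on what that averaging does (the figures here are made up, loosely modeled on our own case):

    \bar{P} = \frac{1}{T_{\mathrm{age}}} \int_0^{T_{\mathrm{age}}} P(t)\, dt \approx \frac{(10^9\ \mathrm{W}) \times (100\ \mathrm{yr})}{4.5 \times 10^9\ \mathrm{yr}} \approx 20\ \mathrm{W}

A gigawatt-scale century of broadcasting, spread over an Earth-like four and a half billion years, averages out to only a couple of dozen watts of expected signal.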

Strictly speaking we should be talking about intervals rather than instants, since listening for a million years is more likely to turn something up than listening for a hundred, but human timescales are tiny enough that this doesn't really affect our calculations of what we should expect with current or near-future technology over our lifetimes.  Either way, we can still define expected signal.

We also need to account for the distribution of planets in space.  If stars were uniformly distributed in space and background noise didn't matter, this would cancel out the effect of decreasing signal strength, since the number of stars at a given distance would increase as the square of the distance.

But they're not.  If they were then the nighttime sky would also be uniformly bright in all directions.  The Milky Way is only about a thousand light years thick.  After about half that distance the number of stars increases much more slowly than the square of the distance.  This means we're really looking at a weighted sum of expectations rather than just multiplying planets by expectation per planet, but that doesn't greatly change the overall analysis.

Finally, we should take background noise into account.  As the strength of a signal (actual, not expected strength) drops toward zero, our ability to detect it doesn't drop in tandem.  Once the signal becomes weaker than the general background noise in that part of the sky, our chances of detecting it are already very near zero.  This correction should be applied to the signal profile before averaging over time.

My engineering intuition tells me that the upshot is that we can neglect planets more than a relatively short distance away, say tens of light-years.  At some point background noise will wash everything out.  That's more or less the limit for having a meaningful conversation anyway, since it takes a year for a radio signal to travel a light-year.

So where does that leave us?

Estimating the probability of a detectable signal from a planet requires knowing
  • The distribution of planets as a function of distance.  Our knowledge of this has sharpened dramatically over the past couple of decades.
  • The effect of distance on the strength of a signal we detect.  This is fairly well understood.
  • The background noise for any particular location in the sky.  This is directly observable.
  • The expected strength of the signal emitted by a planet, averaged over its lifetime.  This is where the uncertainty is concentrated.
Essentially we've consolidated all the various fractions of the Drake equation into a single factor and characterized it in terms of signal strength over time (which we then average over time unless we can think of something better).
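To make the bookkeeping concrete, here's a small numerical sketch of how the four ingredients in the list above combine.  Every number in it is a made-up placeholder; only the shape of the calculation matters: average each planet's emitted power over its lifetime, attenuate by distance, drop anything below the local noise floor, and add up what's left.

    # Toy sketch of the reworked estimate.  All inputs are invented placeholders.
    import math

    LIGHT_YEAR = 9.46e15  # meters

    def expected_signal(planets, noise_floor):
        """planets: list of dicts with
             'avg_power'  - emitted power averaged over the planet's lifetime (watts)
             'distance_m' - distance from us (meters)
           noise_floor: weakest received flux we could pick out of the background
             in that part of the sky (watts per square meter)."""
        total = 0.0
        for p in planets:
            # Inverse-square falloff: the power spreads over a sphere of radius d.
            received = p['avg_power'] / (4 * math.pi * p['distance_m'] ** 2)
            # Below the background noise we effectively detect nothing at all.
            if received >= noise_floor:
                total += received
        return total

    # Two invented planets, each averaging 20 W over its lifetime (see the rough
    # figure above), at 10 and 50 light-years.  With this (arbitrary) noise floor
    # only the nearer one contributes anything.
    planets = [
        {'avg_power': 20.0, 'distance_m': 10 * LIGHT_YEAR},
        {'avg_power': 20.0, 'distance_m': 50 * LIGHT_YEAR},
    ]
    print(expected_signal(planets, noise_floor=1e-34))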

When searching for life, "signal" doesn't necessarily mean "radio signal".  Soon we will be able to search for signatures such as high levels of oxygen in the atmosphere, which suggest that there is life of a similar form to ours, though not necessarily intelligent, technological or whatever.  This signal would have a much different profile from radio.  In our case it would rapidly jump from zero to full strength relatively early in our history and stay there for billions of years.  It may also be a stronger signal than radio leakage in the sense that we can feasibly detect it from further away.

If we take our experience on earth as a basis, this implies it's quite likely that we'll detect life on other planets, but unlikely that we'll detect radio signals (and probably other smoking-gun signs of civilization as we know it).  Looking for signatures of life in general is probably going to be more informative in any case.  If we don't find any radio signals from other planets, which seems more and more likely, it could just be because even planets with intelligent life don't tend to emit high signal-to-noise radio signals for long.  If we find chemical signatures indicating life on X% of planets with detectable atmospheres, that gives a strong estimate of the probability of life arising in general.  This is true whether X is 0, 100 or something in between.

[Technical note: Somewhat ironically, since I started out talking about unit analysis, the units here are less clear than they might be.  If we're talking about radio, then at any given moment a planet is emitting radio signals at a given power, say X watts.  Power is energy per unit time.  Probably the most natural way of expressing what we actually detect over time is an amount of energy, say Y joules -- power times time is energy.  We'd like that to stay the same whether we're talking about an actual measurement or a probabilistic estimate.  So the quantity we're trying to estimate for a given planet is power.

If we assume a particular profile of power over time, and we average it, we're summing up power over time to get total energy, then dividing by the total time span over which we think we might be looking -- the age of the planet -- to get power again.  Accounting for distance still gives power, that is energy we expect to receive per unit time.  Using units of power also accounts for the amount of time we spend looking.  If we look for 100 years we expect to detect 10 times as much signal (energy) as if we look for 10 years.  I tried to gloss over that in the main article on the grounds that the numbers are all likely to be too small to matter.  But it's better to think of a minuscule amount of power over a shorter or longer time than to try to assume everything's an instant.

I've made a few edits to the main article, mainly changing "signal strength" to "signal" in several places to try to reflect this.]

[And having gone through all that, and thought it over a bit more ... the really natural units to use here are bits and bits per second.  At the end of the day, we're trying to glean information from listening to the skies, and information is measured in bits.  This accounts for several troublesome factors:
  • We're trying to estimate detectable information from other planets.  This starts by estimating what information they transmit over time, as measured by an observer in the near vicinity (say, in low Earth orbit or on the Moon in our case)
  • I've argued that as we use compression and encryption more, our signal looks more like noise.  This is quantifiable in terms of bits and bit rates.
  • If a planet is far away or in a noisy area of the sky, we're less likely to detect a signal from it.  There are well-established formulas relating signal power, bandwidth and signal/noise ratios that can be used to translate an estimate of what radio signals a planet emits to an estimate of bits/second we could detect (the standard one is written out after this list).
  • As above, integrating bits/time over time spent listening gives us the total information we would expect to detect, which is arguably the quantity of interest in the whole exercise.
  • So
    • bits detected = sum over time of the sum over planets of bits per second we expect to detect from each planet
    • leaving out the sums, which don't change the units: bits = (bits/second)/planet * planets * seconds
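The standard formula relating those quantities is the Shannon-Hartley channel capacity,

    C = B \log_2\left(1 + \frac{S}{N}\right)

where C is the achievable bit rate, B is the bandwidth and S/N is the signal-to-noise ratio at the receiver.  Strictly speaking that's an upper bound on what any receiver could extract, which is arguably the right quantity for this sort of estimate anyway.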
]