In 1986, so about 40 years ago, K. Eric Drexler's Engines of Creation was published. It was a pretty big deal at the time. I'm pretty sure I didn't read the actual book, but its themes were widely discussed and I do recall reading several articles examining it, and the concepts in it, in depth.
All of which is why it was something of a mild shock to realize I'd forgotten all about it.
Engines of Creation talks about technology that mimics a fundamental process in living things, technology so powerful that it seemed quite possible it would enter a runaway feedback loop, amplifying itself without limit, rapidly evolving beyond human control. Even if that could be prevented, the technology had such profound implications that it was bound to have a major impact on all aspects of human activity. It would make previously impossible things routine and change the lives of every human being on the planet in ways we could only hope to anticipate. Best to understand it and get on board, or risk being left behind forever.
If it seems like I'm deliberately using the same kind of language that's now used to talk about AI while avoiding telling you what I'm actually talking about, well, busted.
As you may know from the book title, I'm talking about nanotechnology. Today, the term refers to any technology that operates on a scale below the somewhat arbitrary limit of 100 nanometers, or 0.1 microns, or about a thousandth of the thickness of a human hair, or about the size of many viruses. There have been significant developments in nanotechnology since then, for example in developing antimicrobial materials and stain-resistant fabrics, but what caught the public attention was the idea of a universal assembler, a hypothetical nanomachine that could put individual atoms together in any desired (physically possible) configuration. Somehow.
Since this hypothetical universal assembler is itself an arrangement of atoms, it should be possible for a universal assembler to create copies of itself, and we're off to the races. So as long as the atoms it needed were available, the assembler could rearrange them into more universal assemblers, which in turn could do the same. Exponential growth being what it is, this process would soon produce tons of replicators, and so on up to any quantity you like, assuming, as Drexler says, "the bottle of chemicals hadn't run dry long before".
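The arithmetic behind that runaway scenario is simple doubling: one assembler becomes two, two become four, and so on. A few lines of Python show how few generations it takes to get from one nanoscale machine to a metric ton of them (the mass per assembler here is a made-up illustrative figure, not one from the book):

```python
# Doubling illustration for self-copying assemblers.
# MASS_PER_ASSEMBLER_G is an assumption for illustration only.
MASS_PER_ASSEMBLER_G = 1e-15  # assumed: one femtogram per assembler
TON_IN_GRAMS = 1_000_000      # one metric ton

generations = 0
while 2**generations * MASS_PER_ASSEMBLER_G < TON_IN_GRAMS:
    generations += 1

print(generations)  # 70 doublings from one assembler to a ton
```

If each doubling took on the order of fifteen minutes or so, seventy of them would fit comfortably within a day, which is the whole point of the "bottle of chemicals" line.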
A couple of years after the book was published, some people at IBM used a scanning tunneling microscope to spell out the letters "IBM" in xenon atoms on a substrate of nickel. How hard could it be then to build up a universal assembler atom by atom?
Drexler's book actually covered a number of topics, mostly to do with nanotechnology. As part of that, Drexler spent a couple of paragraphs discussing the idea of universal assemblers assembling more universal assemblers. Once you introduce the idea of a universal assembler, you kind of have to talk about that. He called the scenario gray goo, with the explanation that "Though masses of uncontrolled replicators need not be grey or gooey, the term 'grey goo' emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass."
In other words, we shouldn't assume that something dangerous would be big and spectacular. It might just as well be an amorphous gray goo made up of very tiny, but still dangerous, little machines.
As I understand it, Drexler wasn't claiming that the gray goo scenario was inevitable. Drexler himself later said that there wasn't any good reason to try to build a universal replicator, and later analyses by others suggested that the actual risk of runaway gray goo is quite small, but it's not hard to see why the idea of gray goo might take off anyway.
Drexler's original scenario involved a "dry" replicator that needed a supply of simple chemicals to work with, but surely something that could assemble atoms at will could also disassemble more complex structures into raw material. This gives us the nightmare scenario of a blob of gray goo that could turn whatever was around it into more gray goo, leaving behind only whatever it didn't need to make more assemblers.
Since elements like carbon, hydrogen and oxygen can combine very flexibly into a wide variety of configurations, it's a good bet that a universal assembler would use them as raw material. Since those are also the main materials that living things like human beings are made of, there's a certain potential for conflict here. Yes, we contain other elements that might not be useful to the goo, but having a small residue of calcium, phosphorus and such make it through unscathed seems like cold comfort in the larger picture.
Unlike, say, time travel or perpetual motion, the gray goo scenario doesn't violate any known laws of physics. In fact, we know that it's possible for collections of atoms to make copies of themselves. That's life (yeah, sorry).
However, at least as far as life is concerned, we also know that the mechanisms to do this are very complex and hard to predict, much less control. It's an interesting situation, really. There's nothing going on in cell metabolism and cell division that doesn't boil down to a chemical reaction. We know quite a bit about many individual reactions, the structure of cells, processes like DNA replication and RNA transcription and so on. In that sense, there's no mystery.
Nonetheless, molecular biology is full of mysteries that molecular biologists have been struggling for generations to get a handle on. For example, given a DNA sequence coding a protein, it's easy to read off exactly what amino acids that protein will consist of. But a protein isn't a simple sequence of amino acids. It's a three-dimensional structure that interacts with other chemicals in the cell, including but not limited to other proteins.
Exactly how a given sequence of amino acids will fold up into a three-dimensional structure (or structures) and how it will interact with other chemicals is a wide-open topic for research. There have been significant advances in recent years, but it's worth noting that the most successful approach to protein folding so far isn't simulating the physics of how the atoms in the protein will interact.
The current state of the art for protein folding is AlphaFold, a machine-learning model that in some sense is basically going "Meh ... this matches up with this, this and this in the training set, and those folded up this way, so yeah ... it'll probably be something like this." Yes, I'm being very glib here with something that won Nobel prizes (deservedly, I'd say, for whatever my opinion is worth here), but the point is that the best approach so far is to give up on understanding what's going on physically and do very sophisticated pattern matching.
All of this is to say that we only know of one workable way for collections of atoms to produce copies of themselves, and there is an incredible amount we don't know about how that actually works. It's worth pointing out that even though life is everywhere in our environment, including places that until recently hardly anyone had the imagination to think it might be, almost all of the Earth is non-living matter -- rocks, magma, ocean water, air and such. In other words, after a few billion years of collections of atoms copying themselves, we do not have a gray goo scenario.
The nightmare gray goo scenario depends on a number of assumptions:
- That universal assemblers are even possible. A true universal assembler would be able to arrange atoms into any physically possible configuration. Living systems can produce more living systems, along with all manner of interesting chemicals, and they can profoundly transform the world around them, but they do it by assembling a particular repertoire of molecules: nucleotides, amino acids, carbohydrates, lipids and so on. A universal assembler would be able to produce any chemical structure. There's no microbe that could put a xenon atom in a particular place on a nickel substrate.
- That it would be feasible to build one. The IBM demo, impressive though it was, used a human-scale machine to put a particular kind of atom on a carefully prepared substrate. Xenon is a noble gas, meaning that it's very unreactive. Unlike, say, oxygen, a xenon atom is not going to try to bind to the substrate or whatever else is around while you're maneuvering it into place. The IBM demo arranged 35 atoms on a flat surface. This is a far cry from arranging -- thousands? millions? -- of atoms in a complex three-dimensional structure that can move.
- That we would be able to build an autonomous programmable universal assembler. It would be one thing to have an assembler that could receive instructions from the outside world, on the order of "put this atom here" and then "put that atom there", but a true self-reproducing assembler would have to carry its instructions with it, just as the DNA in a living cell carries the instructions for reproducing the cell.
What convinced anyone that we were anywhere near being able to bootstrap a world of universal assemblers that might eventually consume not just all life on Earth, but potentially all matter that could be consumed? I'm not being rhetorical here. I mean, literally what are the thought processes that led to this idea taking off?
For one thing, feedback loops seem to be catnip for a certain kind of brain, my own included. I spent hours and hours as a kid reading and pondering Gödel, Escher, Bach, contemplating self-reference, strange loops, the use/mention distinction and so on. One fun way to play around with these ideas is to write a Quine, a program which produces itself as its output. Every compugeek should do it from scratch at least once. A key milestone in developing a new language is to write a compiler for the language in the language itself and then use the compiler to compile itself (after compiling it with an earlier compiler written in a language that already exists). In a post on the other blog, I mentioned Doug Engelbart's NLS team using NLS to further develop NLS.
In other words, ideas like machines that can build anything, including themselves, or AI systems that can write any code, including code for better AI systems, come naturally to at least some people. I don't think you have to have any training in computing or mathematical logic to hit on ideas like this, but it helps (and on the other hand if you're not already a compugeek it could also be a sign that you might enjoy learning more about computing).
The idea of a self-reinforcing feedback loop is compelling and cool, cool enough that it's very easy to get caught up in the implications and brush aside the hidden assumptions that inevitably pile up along the way.
I think there's also another factor at play.
In 1791, Luigi Galvani published his findings about animals and electricity, including the discovery that applying an electric shock to the nerves of a dead frog would cause the legs to move. Alessandro Volta developed an electric battery a few years later, partly in order to demonstrate that electricity could be created by a chemical reaction, as opposed to it being a "vital force" created specifically by living things, but the idea of Galvanism, as Volta himself called it, continued to be widely discussed.
In 1816, while on holiday in Geneva, Mary Wollstonecraft Godwin (soon to be Mary Shelley), her husband-to-be Percy Shelley, Lord Byron and John Polidori held a contest to see who could write the best horror story (as one does). Mary won that one with Frankenstein, or The Modern Prometheus. As the subtitle makes clear, one main theme of the story is humanity dealing with forces it little understands, in this case electricity. Just as Prometheus stole fire from the gods to bring to humanity and paid dearly for it, so Dr. Frankenstein uses electricity to gain power over life itself, and pays dearly.
Today it may seem silly to think that you could reanimate dead flesh just by shocking the bejeezus out of it, but was this really any more outlandish than thinking that if you just put the right atoms together in the right arrangement you could create something that could reproduce itself without limit? Yes, Volta had demonstrated that you could produce electricity from non-living matter, but if all animals produced electricity as well, surely there was something about electricity that was essential to animal life.
When faced with something new that touches on fundamentals like life, matter or thought, it's sensible to consider the implications. When considering something so fundamental, it's natural to see at least the potential for world-shattering changes and even to feel some measure of awe.
Just as Drexler wasn't so much predicting the advent of gray goo as trying to understand the implications, Mary Shelley wasn't predicting armies of reanimated corpses, but discussing the implications of our ability to create new technologies outrunning our ability to control them. This being the Romantic period, she wasn't alone.
These are worthwhile questions to investigate. What if we learn the secrets of bringing the dead back to life? What if we can create tiny devices to arrange matter in any form we like, including the form of those devices themselves? What if we create machines that are more intelligent than us, and those machines figure out how to make more machines that are even more intelligent?
But as we discuss the implications of a new technology, it's important not to lose track of how things would actually happen. It's fine to brush the details aside in a discussion of what's possible. How can you discuss the implications of a universal assembler without assuming that universal assemblers are possible, one way or another? But when the discussion turns back to what do we do now, in the world we actually live in, the assumptions previously brushed aside have to come back into the conversation.