Chapter 10 from: Ed Regis, Who's Got Einstein's Office?, Addison-Wesley, 1987.
When Stephen Wolfram came to the Institute at the advanced age of twenty-three, he was put into a first-floor corner office in the astrophysicists' building. Wolfram didn't really belong there, though, because he wasn't an astrophysicist. But he didn't belong with the particle physicists, either, because he also wasn't a particle physicist. Stephen Wolfram was in a new category altogether, one for which there was as yet no name.
Later, when the Institute gave him a whole suite of offices, to accommodate himself, his staff, and their combined computer gear, there was still no name for the type of physics they were doing, although for a while they thought of themselves as the dynamical systems group. The reason there was no name for what they were doing is that the field didn't exist yet: no one had ever done it before.
Most scientists restrict themselves to one narrow subject matter--to globular clusters, for example, or solar neutrinos, or fruit flies--but Wolfram had a far grander goal in view. He wanted to explain not the complexity of any given phenomenon, but complexity itself, wherever it might be found, whether in the structure of galaxies, or in turbulent fluids, or in the nucleotide sequences of a DNA molecule. He wanted to understand complexity, what's more, not in terms of the usual vehicle of mainstream physics, which is to say the differential equation, but in terms of something that was essentially new in science, the abstract, pattern-generating mechanisms known as cellular automata.
Cellular automata are not real things, they're only abstractions, creatures of the intellect. But they're big with Wolfram and his cohorts because it turns out that, when these imaginary mechanisms are simulated by a computer, they replicate the operations of physical systems that are actually found in nature. This is a bit uncanny. It's as if someone wrote a novel--an utter fiction--and then discovered that everything in the novel had actually happened.
There was the time Wolfram produced the seashell pattern, for example. He was working with a simple cellular automaton--the computer program for it was utterly innocuous, just a few lines long--and this diamond-shaped pattern shows up on the screen. It reminded him of some mollusk shells that he had once seen in a marine biology catalog. So he went back and paged through the catalog, and sure enough, there it was. He put the picture from the catalog next to the picture on the computer screen, and there was just no doubting it: his cellular automaton simulation, the little self-repeating formula that he'd typed into the computer, it had produced exactly the pigmentation patterns found on the seashell. Hard to believe, but the proof was right there in front of him:
Later he discovered that cellular automata not only produced the patterns found on seashells, they also simulated the structure of snowflakes, the growth of crystals, the meandering of rivers, and any one of a dozen other things. It was incredible. In went a few lines of computer code, out came the real world, as if by magic. Cellular automata, Wolfram decided, might be able to explain the very architecture of nature.
Could it be, he wondered, that nature itself is in some sense a gigantic cellular automaton? Could it be, in other words, that the universe as a whole is a vast...natural computer?
Stephen Wolfram was born in 1959, in London, to a mother who was an Oxford philosophy professor and a father who was a part-time import/export businessman and part-time novelist. Wolfram wasn't much interested in any of that, but it was clear right away that he had some special talents of his own and an unusual view of his place in the world. He went to Eton, where he played cricket--or at least he showed up on the cricket field. "I learned the best positions on the field so that I could read a book during the match," Wolfram says in his British accent. "I had to play cricket. I wouldn't have played it by choice." Later, at seventeen, he attended Oxford University--or at least he showed up on the campus. "I had the good fortune never to have to go to courses or anything like that, because I learned what I needed to know from books. From everything I've seen, courses are just a waste of time and one can learn most things a lot more quickly just by reading about them."
This isn't just idle boasting, because by the time he was fifteen, two years before enrolling at Oxford, Wolfram had written--and published--his first scientific paper, about a problem in particle physics. He never liked the article very much, though. "It wasn't very interesting," Wolfram says. "It's really lousy. I don't even have a copy of it anymore."
He has a copy of his second paper, though, which he wrote "much later," as he puts it, "when I was sixteen." He's saved a copy of this one, and he can even find it when challenged: "Neutral Weak Interactions in Particle Decays," published in Nuclear Physics in 1976.
In 1978, Wolfram came to the United States, to Caltech, where he was invited by physicist Murray Gell-Mann. He got his Ph.D. degree in theoretical physics a year later, when he had just turned twenty. "I actually missed out on being a teenage Ph.D.," he says, a little regretfully. Not long afterward, Wolfram received one of those MacArthur Foundation "genius" grants, the youngest person ever to be awarded one. You don't apply for these grants: the procedure is that you just get a phone call out of the blue one day to learn that you've won a tidy, tax-free sum every year for the next five years, and that with this money you can do whatever you want. Wolfram got $125,000; he chose to continue on with his research.
At the time, Wolfram's interests were divided between particle physics and cosmology. He was particularly interested in the evolution of the early universe, and he decided to work on the problem of galaxy formation. In order to do some calculations he found that it would be helpful to have a computer language that could handle algebraic expressions--abstract formulas--instead of just numbers. There was no computer language in existence that was really good at this, and so he decided to invent one of his own. Together with a few collaborators at Caltech--Chris Cole, Tim Shaw, and others--Wolfram created a language that would do algebra. "Instead of just telling one that 2 + 3 is 5, for example, the program could tell one that (x + 1)^2 when expanded out is (x^2 + 2x + 1). In other words, it could deal with symbols as well as with numbers." Wolfram named the language SMP, for Symbolic Manipulation Program.
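SMP itself is not readily available today, but the flavor of symbolic computation--dealing with formulas rather than numbers--can be suggested with a toy sketch. Nothing here is SMP's actual syntax; the coefficient-list representation is just one simple way a program can manipulate a polynomial as an object:

```python
def poly_mul(p, q):
    """Multiply two polynomials in x, each given as a list of
    coefficients from lowest degree to highest."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (x + 1) is represented as [1, 1]: constant term 1, x coefficient 1.
x_plus_1 = [1, 1]

# Expanding (x + 1)^2 yields [1, 2, 1], i.e. x^2 + 2x + 1.
print(poly_mul(x_plus_1, x_plus_1))
```

The point is the one Wolfram makes: the program is answering a question about symbols (what is the expansion of (x + 1)^2?), not merely adding numbers.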
It turned out that a symbol-juggling computer language had wide applications not only in theoretical physics but also in engineering and other branches of applied science. Wolfram saw no reason not to market the product commercially, and so he licensed SMP sales rights to a software firm called the Inference Corporation, of Los Angeles. This displeased Caltech, however, which claimed that it owned the language, since it had been developed on its premises and by its employees. Caltech and Wolfram settled their differences out of court, but Wolfram ended up quitting anyway in order to join the Institute for Advanced Study. At the Institute, they had a reputation for leaving you alone, something that was powerfully appealing to Stephen Wolfram.
It was during his dispute with Caltech that Wolfram first became interested in cellular automata theory. He was finding that, in order to derive the structure of galaxies from the fireball of the Big Bang, you needed some sort of pattern-generating mechanism. Cellular automata, it turns out, excel at the task of generating patterns.
"If you think about the thermodynamics of the early universe," Wolfram says, "you get into a strange problem. The universe is supposed to have started out as this uniform ball of hot gas, but in the end what we see is a lot of galaxies that are very patchy and irregular. The question is how do you get one from the other? Standard statistical mechanics says that you can't, and so I got interested in systems where you could start off with something which is completely random and completely uniform and end up with something which is kind of patchy and not uniform, and which might have some complicated structure to it."
At its most fundamental level, the problem involved goes back to the birth of philosophy, back at least to Plato. The question is how to get order out of disorder, complexity from simplicity. How from the chaos of the Big Bang do we wind up with structures as intricate as the chambered nautilus, the human eye or inner ear, the foundation of life itself in the bafflingly complex structures of DNA? It's the same problem that underlies the debate between science and creationism: You can't get something out of nothing, you can't get fantastically complex orderliness out of utter and irreducible chaos. To get what you actually find in the world, say the creationists, you have to suppose that God himself created it.
Wolfram, being a scientist, was intent on explaining order without reference to divine miracles. But if God didn't impose the order we find in nature, and if that order wasn't always there to begin with, then it follows that the universe must be somehow self-organizing. It must have created its own order. But how? What was the mechanism behind it?
Simultaneously with this, Wolfram was working on the wholly different question of how to get minds from machines. "From the other side I got interested in problems about artificial intelligence," Wolfram says. "I realized that if you want to make things really work in artificial intelligence it's no good just to have a computer with a single central processing unit, you have to have a computer that can process lots of information in parallel, and so I got sort of interested in what were the simplest parallel processing computers. So I was doing these two things--on the one hand trying to make a simple model for self-organizing systems, and on the other hand trying to understand simple models for parallel computers."
The one thing that both these problems had in common was that they required a way of getting complexity from simplicity: complex galactic structure from an original uniformity, and complex computing abilities from elementary components. So Wolfram took it upon himself to figure out how you could systematically generate complexity from simplicity.
He knew from the general concept of recursiveness in mathematics--the procedure of defining something in terms of simpler versions of itself--that complicated structures could arise from simple beginnings through the repeated iteration of one or more rules, as happens, for example, in the game called "Life."
Life was invented in 1970 by Cambridge University mathematician John Conway. The game is played out on a vast cellular space, a two-dimensional plane divided up into "cells," such as those on graph paper or on a checkerboard. Each cell has eight neighbors, four at right angles, and four more at the corners. Cells can be either on ("alive") or off ("dead"). If a cell is on, it's filled in with a marker of some type; if it's off, it's left blank.
The general principle behind the game is that life or death is a function of one's neighbors: isolated cells die of loneliness, while cells that are too crowded die of overpopulation. When neither of these extremes obtains, then live cells will remain alive, and when conditions are just exactly right then a live birth will occur. Just like in real life.
It all boils down to just two rules:

1. A live cell stays alive in the next generation if it has exactly two or three live neighbors; otherwise it dies.
2. A dead cell comes alive in the next generation if it has exactly three live neighbors.

Those are all the rules.
Say, for example, that you start off with just two live cells, one right next to the other:
Now this is a sudden-death situation, because these poor fellows are too lonely to go on living. In the next generation both those cells will be off, and their squares will be blank.
But if you had begun with four live cells in a square array,
then everyone would be satisfied with life and they'd continue to live in the next generation, the reason being that having three live neighbors is the happy medium.
And if the blessed situation should obtain that a dead cell has exactly three live neighbors,
then a birth would occur:
Now one might think that from rules this simpleminded nothing interesting could possibly occur. But one would be wrong. Some starting patterns are like good genes: they are fruitful, and they multiply, sometimes in surprising ways. Take, for example, the T-shaped pattern called the "T tetromino":
In the very next generation (step 1 below), three births have occurred. In the generation after that, the shape breaks up, as if it were undergoing cell division, and then, births, deaths,... and ordered patterns emerge:
Turn the T tetromino clockwise 90 degrees, and add a single live cell on the upper right-hand corner, and you've got an "R pentomino":
The R pentomino is incredibly prolific. After sixty generations (moves), it has exploded into a microcosmos.
Life's evolving patterns--"Life-forms"--are basic examples of cellular automata. They're cellular insofar as they exist in the squares--or cells--of a checkerboard-like grid. They're automata in the sense that they develop of their own accord--"automatically"--from repeated applications of the same two rules. In other words, Life-forms are not interactive: they require no human guidance or control for their growth and development. Given an initial configuration of live cells, the entire future history of that Life-form is cast in stone for all eternity. The process is deterministic in the highest degree: from the same starting pattern you will always end up with the same results, no matter how many times you play the game, and no matter whether you make three moves or three million.
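Life's two rules fit in a short program. This is an illustrative sketch, not the original hackers' code; it stores the live cells as a set of (row, col) pairs, a representation chosen here for brevity:

```python
from itertools import product

def life_step(live):
    """Apply Life's two rules once. `live` is a set of (row, col) cells."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                key = (r + dr, c + dc)
                counts[key] = counts.get(key, 0) + 1
    # A cell is alive in the next generation if it has exactly three live
    # neighbors, or if it has exactly two and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# Two lonely cells side by side die out at once:
print(life_step({(0, 0), (0, 1)}))                # set()

# Four live cells in a square array survive unchanged:
block = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(life_step(block) == block)                  # True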
This, in fact, is what's so intriguing about cellular automata: they seem to grow spontaneously and unpredictably, but their behavior is as rule-governed and preordained as the most ironclad laws of physics.
The game of Life, originally set on a checkerboard, was tailor-made for the computer, and so hackers programmed it into their machines and watched the results. They found it so mesmerizing that, when Martin Gardner described it in his "Mathematical Games" column in Scientific American, professional computer users went on a Life binge. "My 1970 column on Conway's Life met with such an enthusiastic response among computer hackers around the world," Gardner said later, "that their mania for exploring Life-forms was estimated to have cost the nation millions of dollars in illicit computer time." For a while, Life addicts even published a journal devoted to the game, called Lifeline.
Hackers at M.I.T.'s Artificial Intelligence Laboratory--people like Bill Gosper, Ed Fredkin, and others--became so fascinated with it that they began to wonder if they weren't watching something which was more than just a game, whether they weren't in fact beholding some secret electronics-circuitry version of real life. The patterns on the computer screen seemed to act so realistically, the way they blossomed and died, spawned and merged--it was as if they were living bits of a digital soup. John Conway, the game's creator, took Life so seriously that he imagined Life-forms might actually be living entities. "It is no doubt true," he said, "that on a large enough scale Life would generate living configurations. Genuinely living, evolving, reproducing, squabbling over territory. On a large enough board, there's no doubt in my mind that this sort of thing would happen."
Ed Fredkin, for his part, allowed as how after all there was no way to definitively prove that we ourselves--living human beings--are not Life-forms in a game run by some metaphysical super-hacker.
Back in California, Stephen Wolfram had not yet met up with cellular automata theory, but he had heard about the computer game called Life. He was friends with Bill Gosper, who used to tell him all sorts of Life stories, and, although Wolfram was too level-headed a chap to fall for Conway's and Fredkin's far-out scenarios about truly living Life-forms, he found the game quite amusing. After all, these strange Life entities were a little bit like the computer models that Wolfram was just then constructing on his own, to help him with the problem of galaxy formation. The main thing wrong with Life, so far as he was concerned, was that it had just one set of rules. What he needed was a way of studying the structures that would arise from many different types of such rules, and he needed a way of studying them all systematically, mathematically, not just as part of a computer game craze that was very big in hacker heaven, which is to say, at M.I.T.
"It was sort of amusing when I was thinking about these models of mine for a month or so," Wolfram says, "and then I happened to have dinner with some people from M.I.T., from the Lab for Computer Science, and I was saying that I got interested in these structures and I was telling everybody about them... and somebody said,'Oh yeah, those things have been studied a bit in computer science, they're called cellular automata.' And I said that I've heard of those but that I don't know anything about them. Then I went off and looked them up and found all these papers and books about cellular automata."
Wolfram, though, was not impressed. "I was a little bit disappointed, actually. If you look up cellular automata on one of these computer searching things you'll find that there had been about a hundred papers written about them by 1981 or something, and so I went and looked up a whole bunch of these things, but they were boring. They were so boring! They were an illustration of a sad fact about science, which is that if someone comes up with an original idea, then there will be fifty papers following up on the most boring possible application of the idea, trying to improve on little pieces of details that are completely irrelevant."
Wolfram hates things that are boring. He's always done a lot of traveling, from one scientific conference to another, and one time he got the bright idea that he'd learn to fly. So while he was at the Institute he'd go down to Mercer County Airport, south of Princeton, and get into a Beechcraft Skipper, a small, one-engine training aircraft, and teach himself practical aerodynamics while his instructor more or less helplessly looked on from the right seat. That was exciting for a while... until the day Wolfram took off on a solo cross-country flight and got stuck in some faraway airport because the weather turned sour and he couldn't fly back. That was boring. And that was the end of Stephen Wolfram's piloting career.
Later, Wolfram traced cellular automata back to their origins in John von Neumann, who had shown how a vast arrangement of them could reproduce itself in cellular space.
"Von Neumann had done something: he came up with the original idea," Wolfram says. "The idea was interesting, but the details, the construction he had made, it was completely boring. It's this book full of the design drawings of this completely weird object. The details of its implementation are like the most arcane mathematical proof one's ever seen. I don't know of any scientific thing that one learns from all those complicated details. I mean it's an interesting tour de force, it's an impressive proof--what he was trying to prove was that self-reproduction was possible, and he succeeded in proving that--but the method of proof was thoroughly arcane and complicated and I think not very illuminating as such."
But within a few months after his baptism into the world of cellular automata, Wolfram found himself at a scientific meeting on a small, privately owned island in the Caribbean. Here, in the shade of palm trees, and with the soft sea breezes wafting over him, Wolfram came face to face with a cellular automaton machine. This was not boring. This was interesting. "It was love at first sight," says Tom Toffoli, who watched Wolfram at the display screen.
The meeting had been arranged by Ed Fredkin, who, although he had never finished college, was on the M.I.T. faculty. Fredkin had made a fortune from a computer graphics and digital troubleshooting firm that he had founded called Information International Incorporated. By the time he sold it, Fredkin had made enough money to buy an entire Caribbean island. It was only a small spit of land, about three-quarters of a mile long and a half-mile wide--and called, quite appropriately, "Moskito"--but it included a complete resort, Drake's Anchorage, with cabins and meeting rooms and one of the best restaurants in the whole British Virgin Islands. The place was, as M.I.T. grad student Norman Margolus said, "useful to own if you wanted to get people to come to a meeting."
The meeting, which was held in January, 1982, had grown out of an earlier conference on physics and computation theory held at M.I.T. the year before. It had been organized by the M.I.T. Information Mechanics Group, consisting of Fredkin, Norman Margolus, Tom Toffoli, and Gerard Vichniac. Scientists--not only computer scientists, but also physicists and mathematicians--were belatedly coming to realize that the computer was more than just a tool for doing number crunching, that in fact it seemed to mimic the world's processes in some hitherto inscrutable way.
About a dozen people came to Moskito Island, from M.I.T., IBM, the Argonne National Laboratory, and of course Caltech, from which came Stephen Wolfram. Sitting there at the computer watching the cellular automata march down the screen in waves of accretion, Wolfram began to realize their true possibilities. These things could produce whole ranges of textures, whole mini-universes. Sometimes, it's true, the patterns died out almost before they had started: not just any old set of initial conditions, apparently, could produce a universe. Other times the patterns started out chaotically but ended up producing a dull, repetitive order:
Still other times the patterns started out in an orderly way, and then degenerated systematically into nothingness, or almost:
The fascinating part of it all was that there was no way to tell, prior to actually running the program, what the final result would be. To find out what happened, you had to get these cellular automata to work. They were mysterious, even a little eerie. You specified initial conditions, and you specified their rules of development, then you just cranked them into the computer and let it run, and a couple of seconds later--presto!--you had your own personal cosmos right there in front of you.
Some light was shed on the underlying principles by Charles Bennett, of IBM, who was also on Moskito Island for the conference. Wolfram talked to him about the theory of machine computation and cellular automata, and he came away with an even more expanded picture of how they worked and what they could do. Not only did they reproduce some of the physical structure of nature, they also seemed to illuminate the ways in which machines, and perhaps even people, processed information. It was as if cellular automata were suddenly the key to both matter and mind.
Wolfram was never quite the same afterward. "From that point on," Tom Toffoli says, "Wolfram's bibliography, his list of scientific production, goes from no cellular automata at all to 100 percent cellular automata. He decided that cellular automata can do anything. From that moment on Stephen Wolfram became the Saint Paul of cellular automata."
"That was in January '82," Wolfram says. "Between I guess February '82 and June '82 I got seriously started working on cellular automata. This was when I was just leaving Caltech, and in fact my two activities those months were speaking with lawyers, trying to un-mess-up this situation at Caltech, and working on cellular automata. In fact I think that probably for maximal personal happiness I should probably have spent more time at the lawyers and have made the science take a little bit longer and just done it a little bit less hard. As it was, I actually spent almost all of my time doing science. "
A few months later, in December of 1982, Stephen Wolfram drove his red Volkswagen Rabbit to Princeton and set up shop in room 107 of the Institute's building E, which he turned into a cellular automata factory. He'd come in during the afternoon, program automata into his computers, and sit back and study his toy universes. Wolfram had three terminals in his office, two of them self-contained units, one of them connected up to the VAX in the basement. Sometimes all three would be running at once, sifting through patterns, searching through automata, trying to make some sense of them, to classify them according to complexity, longevity, and so on. Wolfram looked at hundreds, thousands, even tens of thousands of cellular automata patterns, often staying up late into the night, his computer display screens bathing the room with a dim blue glow.
Like the simple structures in the game of Life, every cellular automaton is a function of two things: an initial configuration of cells, and rules for producing new configurations out of previous ones. Wolfram talks about "sites" instead of "cells," and "time steps" instead of "moves," but otherwise the principles involved are generally the same as those in the game of Life.
"You take a line of sites," Wolfram says, "and each site has a value of zero or one, and the thing evolves in discrete time steps. After one time step you have a new line of sites. The value of a particular site depends on its own previous value, and the value of a couple of neighboring sites in the previous time step."
So you start with a single line of sites and then ask what's the value of the sites in the line below:
The value of a given site is determined by the values of sites in the previous line according to a rule. Wolfram states these rules mathematically, but most of them are much simpler than they look. One of the simplest cellular automata, for example, grows out of the rule

aj(t + 1) = [aj-1(t) + aj+1(t)] mod 2

where

aj is the site in question,
aj-1 is the site to its left,
aj+1 is the site to its right,
t is time, and
mod 2 indicates that the sum of the two site values is to be reduced modulo 2.

"All this means," Wolfram explains, "is that the value of each site is the sum modulo 2 of the values of its two nearest neighbors on the previous time step. In other words, the value of a given site at the next time step (t + 1) is equal to the sum of the values at time t of aj-1, which is the site to its left, and the value of aj+1, which is the site to its right, reduced mod 2."
Mod 2. This refers to the sum of two numbers in modular arithmetic with a base of 2. We already use modular arithmetic all the time, most of us without ever realizing it. "Clock arithmetic," for example, is addition mod 12: when you go into work at 9 o'clock and then put in 8 hours of work, the resulting sum (9 + 8) is not 17 o'clock, but 5 o'clock. In modular arithmetic, the only numbers allowed are those less than the base involved (on the clock, 12 does the work of zero). Addition mod 2, therefore, works according to the table:

0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0
In plain English, what the mathematical rule above means is that, if sites aj-1 and aj+1 have different values, then the new site will have the value 1; if the two sites have the same value, then the new site will have the value 0. In this case, the value of the new site is 0:

To find the value of the next new site, you apply exactly the same rule, this time to the left- and right-hand neighbors of the site above it. You take the sum of those two values and reduce it mod 2. Since (0 + 1) mod 2 is 1, the value of the new site is 1. If you apply the same rule again and again to many new sites, a pattern begins to emerge:
The automaton grows in this way through many additional time steps, ultimately producing a distinctly complex structure. After 23 time steps, it looks like Figure 8:
And after about 100 time steps, it looks like this:
"What this shows," Wolfram says, "is that if you start off with just one nonzero site, the thing spontaneously grows into this kind of pattern. Even though the rule that formed it is very simple, the pattern it generates is relatively complicated."
The structure is recognizable to a student of mathematics as Pascal's triangle of binomial coefficients, but a biologist might see in it the pigmentation patterns of a snakeskin. "In fact," Wolfram says, "there are examples all over physics and biology of systems that look like that, that grow in exactly that way: crystal growth, for example, cell growth in embryos, the organization of cells in the brain, and so on. The important thing is that the mathematical features of cellular automata are the same mathematical features that are giving rise to complexity in a lot of the world's physical systems."
Cellular automata, or something like them, might be at work even in our own DNA.
"DNA is a very succinct program for how to build a creature," Wolfram says. "The number of bits in a human DNA molecule is equal to the number of bits on the larger disk drives you can buy today for computers, and it's awfully surprising that you can build something as complicated as a person just from the information that's in a medium-sized book. So clearly there's a very clever kind of programming that nature has done there, and in some sense what's going on in the development of cellular automata may be a bit like that.
"For example," Wolfram continues, "you might look at those seashell patterns and ask, 'How can such a complicated pattern be encoded in its DNA?' But if it's indeed the case--which one doesn't really know yet--that those patterns are generated by the simple rules that generate cellular automata, well, then it's quite easy to see how DNA could encode such simple rules."
In the fall of 1984 Stephen Wolfram moved into new offices on the third floor of Fuld Hall. The Institute had created a suite of rooms for him and for the new staff he'd brought in, which included Norman Packard and Robert Shaw, two theoretical physicists from the University of California at Santa Cruz. Both had strong interests in complex systems theory and dynamical systems, which were now emerging as important new branches of physics.
Their aerie has the feel of an artist's loft, an impression created by the skylights, which have been built into the long, sloping roof overhead. They let in a subdued, diffuse luminescence that seems to come from nowhere, just the right type of light to prevent reflections in the computer screens. The loft is full of computers: there's an IBM AT, a Nova, a Ridge 32, plus three or four Sun Microsystems workstations. Wolfram is dependent upon computers for his cellular automata work, but all the same he doesn't much like programming. Programming is boring.
"I do an incredible amount of computer programming," he says, "but I don't particularly like it. In fact, just this weekend I wrote about a couple of thousand lines of code...and a lot of it was to do such wonderful things as to make my laser printer print out these pretty pictures. What took the longest time was getting the caption to come out right. Ugh."
Computers simulate the operations of cellular automata, but the weirdest possibility of all is that the reverse may be true: cellular automata may actually be computers.
"Some of these automata are quite strange and complicated," Wolfram says, "and I have a kind of strange speculation about them, which is that they could be used as universal computers. Given an appropriate initial state, perhaps you could encode in a program and data in such a way that the automaton itself would emulate the operations of a general digital computer. In other words, there would exist some initial state of the automaton that would make it behave like a computing machine."
Wolfram's idea harks back to the game of Life, to John Conway and Bill Gosper. After inventing the game and watching his Life-forms grow, Conway wondered whether there was any finite configuration of Life cells that would grow without limit, ballooning endlessly until it filled the entire Life universe. He suspected there wasn't, and offered a $50 prize to anyone who could find a perpetually proliferating Life structure. Martin Gardner announced the challenge in his "Mathematical Games" column for October 1970. Barely a month later, Bill Gosper collected the money. He had discovered the "glider gun."
The glider gun is a configuration of Life cells that spews out new cells on a regular basis. It's as if spontaneous creation were at work, for the new cells--"gliders"--just keep coming out and traveling away of their own accord, like bullets from a machine gun. A glider gun in good working order could transform a modest-sized initial configuration of cells into a teeming Life universe (see Figure 9).
Figure 9 Bill Gosper's glider gun spewing out gliders
Because Life is a deterministic game that proceeds by deterministic rules, the emitted gliders behave predictably. If they meet no other Life-forms, they continue on forever. If they collide with another glider, then, depending on the angle of collision, they'll either annihilate each other or form other structures of known kinds, even including new gliders. This predictability, combined with the discrete character of Life-forms, led Conway to suggest that Life patterns could function as universal computers.
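The deterministic, step-by-step character of Life that the passage describes can be sketched in a few lines of code. This is a minimal illustration, not Conway's or Gosper's own implementation; it assumes the standard Life rules (a live cell survives with two or three live neighbors, a dead cell is born with exactly three), which are not restated in this chapter:

```python
from collections import Counter

def step(live):
    """Advance one Life generation. `live` is a set of (x, y) cells."""
    # Count, for every position, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic five-cell glider: after four generations it reappears
# intact, shifted one cell diagonally -- "traveling of its own accord."
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(g == shifted)  # the glider has moved exactly one cell diagonally
```

Because the rules are deterministic, the glider's trajectory is perfectly predictable, which is precisely the property that let Conway treat streams of gliders as streams of binary digits.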
A digital computer, after all, is essentially a device that reduces all information to binary states--such as on and off, zero and one--and to simple functions such as AND, OR, and NOT. Conway realized that the right combination of glider guns and gliders could do everything that computers do. A stream of gliders and the spaces between them could stand for a succession of binary digits, ones and zeroes. Glider collisions could be logical gates, with the incoming glider stream being the input data, and the debris resulting from the collision being the output. If five gliders collide with four gliders and twenty gliders come out as the resulting debris, then you've got multiplication occurring. If ten collide with two and five come out, then you've got division. In principle, there was absolutely nothing that the correct combination of glider guns, gliders, and other Life-forms couldn't accomplish. Memory would be provided by configurations of different Life-forms across a vast patchwork of cellular spaces. Conway even went so far as to prove, mathematically, that the game of Life was logically sufficient to encompass all the functions of a universal digital computer.
Although Conway proved that the game of Life was capable of universal computation in a theoretical sense, he never actually put together an array of Life-forms that worked like a computer. Cellular automata are more varied and powerful structures than Life-forms, and so Wolfram thought that they ought to be capable of universal computation. He therefore spent some time looking for the cellular automaton that would compute.
He wasn't successful, and so he followed in the footsteps of Conway. Conway had broadcast his challenge to the world through Scientific American, and so too would Stephen Wolfram. In the May 1985 issue, "Computer Recreations" columnist A.K. Dewdney wrote: "Wolfram suggests that lurking among [cellular automata] are true computers, vast linear arrays of cells blinking from state to state and churning out any calculation a three-dimensional computer is capable of. Wolfram, currently searching through the myriad of one-dimensional cellular automata, is not above enlisting the help of amateurs in this sophisticated and daring enterprise."
It only took a month for Bill Gosper to solve Conway's problem, but to date nobody has ever solved Wolfram's, notwithstanding the flood of cellular automata structures that soon poured into his Fuld Hall offices. A cellular automaton that computes, evidently, is a far more complex and elusive structure than a Life-form that merely proliferates ad infinitum.
Wolfram, nevertheless, is more than ever convinced that automata theory is important for the development of advanced computers, especially parallel processing machines. Conventional digital computers are built on the principle of "serial architecture," meaning that all operations are executed in the same sequence, performed one after the other. Because electrical signals travel through computer chips at almost the speed of light, serial architecture has been fast enough for most applications, but there's a fundamental way in which such a structure is at odds with the world at large, for nature works not serially, but in parallel. Out there in the real world, things happen all together and at once, as nature updates all its entities simultaneously from second to second.
The planets of the solar system, for example, affect one another through their gravitational fields, but they all exert their influence simultaneously. It isn't the case that Mercury's gravitational field influences Venus and then Venus's influences Earth. Rather, all the planets affect each other in unison. For a conventional serial computer to simulate such planetary dynamics, it would have to do everything sequentially, calculating the first planet's effect upon the second, then the second upon the third, and so on. A parallel processing computer, by contrast, would more closely approximate what happens out there in the real world. If each planet were represented by a separate processor, then all processors could work at once and each could "feel" the gravitational influence of all the others simultaneously.
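The "all at once" update the passage describes can be captured even on a serial machine by computing every body's acceleration from the same snapshot of positions before any position is changed, just as a parallel machine would have each processor read only its neighbors' old states. A minimal sketch, with illustrative units rather than real solar-system data:

```python
def step(positions, velocities, masses, dt, G=1.0):
    """One synchronous time step of 2-D gravitational dynamics."""
    # First pass: every acceleration is computed from the OLD
    # positions only -- no body ever "sees" a half-updated state.
    acc = []
    for i, (xi, yi) in enumerate(positions):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    # Second pass: only now are velocities and positions overwritten,
    # all "simultaneously" with respect to the physics.
    velocities = [(vx + ax * dt, vy + ay * dt)
                  for (vx, vy), (ax, ay) in zip(velocities, acc)]
    positions = [(x + vx * dt, y + vy * dt)
                 for (x, y), (vx, vy) in zip(positions, velocities)]
    return positions, velocities
```

The two-pass structure is exactly the discipline a cellular automaton enforces by definition: every cell's new state is a function of the old states of its neighbors.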
The parallel processing computer would not only reach its answers more quickly, its very principles of operation, its very structure, would approximate that of the external system it was attempting to emulate. If, for example, a computer had as many individual processors as there were cells in the living organism whose behavior it was trying to mimic, then each chip could stand for a single biological cell and the computer itself would in a sense "be" the whole biological organism. The computer's information processing would in that case reflect the structure, form, and evolution of the organism with a kind of fidelity that's impossible to achieve with ordinary, serial-architecture computers.
For Wolfram, the intriguing thing about cellular automata is that they have every sign of being natural abstract models both of physical phenomena in all their parallelism and of the parallel processing computers now being developed. It's as if cellular automata were the analog to both nature itself and the mind that beholds it. Both nature and computers are mechanisms that process information: nature proceeds from a set of initial conditions to a set of ultimate conditions, whereas computers proceed from input to output. Nature operates according to physical laws, whereas computers operate according to the instructions contained in their programs. But cellular automata are models of both these processes, the initial positions of a cellular automaton corresponding equally well to the initial conditions in nature and to the initial data in a computer. The evolution of automata, furthermore, corresponds to the operations of a computer and to the evolution of nature itself. And finally, the cellular automaton's rule of development corresponds to natural law on the one hand and to the computer's program on the other.
Nature itself, in other words, may be one vast computer, and mechanisms similar to cellular automata may contain its programming. Cellular automata, the software of the universe.
Stephen Wolfram's principal activity is abstract scientific research. "If I wanted to go out and make a bunch of money," he says in his Institute office, "I wouldn't be here doing research. I'm more interested in research than in making money."
Making money, though, is Wolfram's hobby. "Some people make furniture and sell it as their hobby," he says. "I develop practical applications of computer science and sell that."
First, there were the cellular automata postcards, six different automata in full living color, each described on the back of the card: "The color of each cell is determined by a simple mathematical rule from the colors of neighboring cells on the line above it." On the bottom of the card there's a statement of the mathematical rule that generated the pattern: "Rule 522809355 = 2032314344410₅," and then a copyright notice, "(c)1984 Stephen Wolfram." On the face is a picture of a cellular automaton, the colors corresponding to the values of each site. The cards span the whole gamut of the cellular automata universe, including the cellular snowflake.
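The rule printed on the postcards works exactly as the caption says: each cell's color on a line is a simple function of the colors of the cells on the line above it. A minimal sketch of the idea, using the much simpler two-color elementary rule 90 rather than the five-color Rule 522809355 on the cards:

```python
def evolve(rule, width, steps):
    """Run a two-color, nearest-neighbor 1-D cellular automaton."""
    # The rule number's binary digits give the new cell value for each
    # of the 8 possible (left, center, right) neighborhoods -- this is
    # Wolfram's standard numbering for elementary automata.
    table = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1              # a single black cell to start
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = [table[4 * row[(i - 1) % width]   # left neighbor
                     + 2 * row[i]               # cell itself
                     + row[(i + 1) % width]]    # right neighbor
               for i in range(width)]
    return lines

for line in evolve(90, width=31, steps=16):
    print(line)   # rule 90 grows a nested, self-similar triangle
```

Rule 90 simply sets each cell to the XOR of its two neighbors, and from a single black cell it unfolds a pattern of triangles within triangles, a flavor of the structure the postcards put in full color.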
Wolfram's colleague Norman Packard generated a cellular snowflake from a hexagonal rule ("Hexagonal rule 42 = 101010₂," the postcard says), and the result, in blue, red, and purple against a black background, is surreal. When Wolfram wrote an article for Scientific American ("Computer Software in Science and Mathematics," in the September 1984 issue), the full-color snowflake was given almost a page by itself. In the same month, Omni magazine ran a story hailing Wolfram as "the new Einstein." A month later, in October of the same year, Nature featured seven full-color pictures of cellular automata on its cover, and, on the inside, ran an article by Wolfram entitled "Cellular Automata as Models of Complexity." Stephen Wolfram, it seemed, had arrived.
Wolfram had a circular printed up describing his postcards--as well as the much larger cellular automata posters that he was also thinking of selling--and he'd distribute the circulars at conferences, meetings, and the like. Some of the Institute regulars were a bit taken aback at the prospect of their otherworldly Platonic Heaven being turned into a mail-order computer graphics distribution warehouse, but Wolfram certainly didn't see it this way. He wasn't going to make any money selling postcards: they cost almost as much to make as he was selling them for. Selling postcards was just fun.
But they were only the beginning. There were the other cellular automata ideas, for wallpaper patterns, murals, and so on, the point here being not so much to make money as to bring cellular automata to the world.
"One of the things I've been meaning to do is to make a bit more of a serious effort to use cellular automata in some kind of computer art," Wolfram says. "In fact I have a little project which is going to happen sometime soon...well, there's this group at M.I.T. that's built a computer-controlled spray-painting machine. It produces 14-by-48-foot images--I guess it takes about twelve hours. It's in a warehouse around Cambridge, and in the next couple of months we're going to make some huge murals of computer art. Then there are some amusing ideas, such as creating a huge cellular automata display which one can put on the side of a building. Unfortunately, this turns out not to be technologically feasible for a feasible amount of money."
By the spring of 1986, word of cellular automata had gotten out of the closed circle of scientists and reached into the art world. "A month or so ago," Wolfram says, "I got this letter inviting me to an art exhibition in New York City, art based on cellular automata." It was at the Cash-Newhouse Gallery, in Greenwich Village. Wolfram went to the show. "It was kind of interesting, actually. I had expected something rather boring, but in fact the pictures were quite nice."
No matter how much freedom the Institute for Advanced Study gave its members, the fact remained that the Institute was not the best place in the world to be if you were at all commercially minded about science. Wolfram had some ideas about how to program a massively parallel processing computer, and so he worked for a time at Thinking Machines, of Cambridge, Massachusetts, which was developing a computer called "The Connection Machine." Largely the brainchild of Danny Hillis, the Connection Machine was planned ultimately to have one million processors linked up in parallel, making it more like a genuine biological brain than any computer yet invented. For a while Wolfram's scholarly papers gave two affiliations, the Institute for Advanced Study and the Thinking Machines Corporation, sometimes listing the latter of these first. Nothing exactly wrong with this, of course, but it was gradually becoming clear to Wolfram that he might be better off, might have a greater sense of freedom to do as he wished, if he left the Institute. It would be best, in fact, if he had an institute of his own.
In September 1985, Wolfram drafted a two-page paper titled "Plans for an Institute for Complexity Research." The document described an institute staffed with a dozen senior scientists and another dozen postdoctoral fellows, plus technical and administrative support personnel, all of them devoted to fundamental research in the theory of complexity. As he envisioned it, the institute would engage in the commercialization of any ideas that had concrete applications.
"Fundamental research would be the primary purpose of the Institute for Complexity Research," Wolfram wrote. "Of greatest ultimate significance will probably be the development of very general principles of complexity. But in working toward such principles, it is essential to maintain contact with real applications that address existing problems. It will often be worthwhile to carry such applications through to the point of contact with practical technology. This Institute [Wolfram's own institute, not the Institute for Advanced Study] could sometimes, therefore, become involved with technological development, possibly in association with outside laboratories or corporations. Major new technologies might well lead to the formation of 'spin-off' companies."
The new institute, as Wolfram conceived it, would be attached to a major university, so as to have contact with faculty members in many other areas of science, as well as a supply of high-quality students. It would be funded either by a major donation, to be used as an endowment, or by grants from a university, corporations, government agencies, or any combination of these. And flowing from all this activity would be first-class scientific research, as well as major technological innovations that would return income to its supporters as well as to its staff. Wolfram had already done some work on an unbreakable cryptographic system based on cellular automata that produce random patterns, and there was much more in the offing.
By the spring of 1986, Wolfram was in contact with some twenty universities that had an interest in housing his new institute. The only question was where to put it. "Some of the best opportunities, in terms of funding and so on," Wolfram said at the time, "are in the Midwest, but who wants to go there? People will come to San Francisco or to Cambridge, they're nice places to live. But the Midwest?"
In the fall of 1986, though, Wolfram quit the Institute for Advanced Study and moved to Champaign, Illinois. He now had his own institute, the Center for Complex Systems Research, at the University of Illinois. He brought with him his whole Princeton dynamical systems group--Norman Packard, Robert Shaw, Gerald Tesauro, as well as a few others--and today they're all housed in a modernistic low-slung brick building on the Illinois campus amid fifteen Sun workstations, assorted laser printers, and all the rest of the trappings of a small complex systems empire. They've also got a data link to a Cray X-MP, one of the world's biggest and fastest supercomputers, which is housed next door, at the National Center for Supercomputing Applications. All things considered, Stephen Wolfram did not make out too badly.
"It's not as scenic here as at the Institute," he admits. "We only have a couple of trees out front, not a whole forest."
Not everyone, of course, thinks that cellular automata are the wave of the future. "Very pretty pictures," says Institute mathematician Deane Montgomery, rather dismissively.
"Wolfram has a dream that he's somehow or other going to understand complexity," says Freeman Dyson, "and that the complexity of the real world is mirrored in cellular automata. It's a big gamble."
The risk is that the connection between the patterns produced by cellular automata and those produced by nature is accidental rather than essential. It could all be a big coincidence. But Wolfram has seen too many parallels and deep connections between these abstract mechanisms and nature herself to think that all of it's going to evaporate away into a fog of sheer happenstance. The more he works, the deeper and wider those connections get--and Wolfram seems to do little else but work.
I last saw Stephen Wolfram at the Institute for Advanced Study one fine spring day in 1986. We were standing on the steps in front of Fuld Hall, looking out over the sprawling grassy acreage across which, in the old days, Toni Oppenheimer, Robert's young daughter, used to ride her horses. The sun was setting behind the trees, casting a deep orange glow on the high clouds, and Wolfram, as always, was telling me about his work, about his plans for cellular automata research, about the future of complex systems theory, about how he spends an average of thirteen days a month traveling to and from scientific conferences, and so on and so forth. As usual, I was bowled over, staggered, overwhelmed, by the man's intellect, and by his capacity for sheer work.
I used to wonder, so finally I had to ask.
"With all this constant traveling and work and science...uh, do you find the time for any social life? You know, girlfriends and such?"
"Oh yes," Wolfram said. "If you're interested in complex systems, there's nothing more complex than that."