Introduction to A New Kind of Science

Variants of this talk were given at many venues in late 2002 and early 2003, and at a few venues subsequently. Among the venues were: Apple Computer (4/15/03) • Argonne National Laboratory (10/16/02) • Arizona State University (1/8/04) • Boston University (3/17/03) • Brookhaven National Laboratory (3/6/03) • Brown University (10/22/02) • California Institute of Technology (Caltech) (2/1/03) • Carnegie Mellon University (10/3/02) • Columbia University (10/25/02) • Cornell University (10/2/02) • Dartmouth College (10/3/03) • Google (4/15/03) • IBM (3/13/03) • Indiana University (4/7/03) • Lawrence Livermore National Laboratory (4/14/03) • Massachusetts Institute of Technology (MIT) (4/24/03, 9/15/03) • MITRE Corporation (5/19/03) • NASA Ames Research Center (4/16/03) • NASA Goddard Space Flight Center (9/16/02) • National Institute of Standards and Technology (NIST) (9/17/02) • National Institutes of Health (NIH) (9/17/02) • New York University (NYU) (10/25/02) • Northwestern University (4/8/03) • Polytechnic University (3/5/03) • Rutgers University (2/18/05) • San Jose State University (2/11/03) • Simon Fraser University (11/22/02) • Smith College (10/6/04) • Stanford University (2/10/03) • Stony Brook University (SUNY Stony Brook) (3/7/03) • Tufts University (4/14/04) • University of California, Los Angeles (UCLA) (2/3/03) • University of California, San Diego (2/6/03) • University of Chicago (10/9/02) • University of Illinois at Urbana-Champaign (10/14/02) • University of Maryland, College Park (9/18/02) • University of Michigan (10/8/02) • University of Pittsburgh (10/4/02) • University of Vermont (9/30/05) • University of Washington (11/21/02) • University of Wisconsin-Madison (10/7/02) • U.S. Geological Survey (USGS) (2/13/03) • Vassar College (10/1/02) • Washington University (11/6/03) • Woods Hole Oceanographic Institution (7/23/03)

The last twenty-something years of my life can be pretty much summarized by these two big books.

[Book covers: Mathematica and A New Kind of Science]

And today I’m going to talk about this one.

[Image: the cover of A New Kind of Science]

It’s about 1200 pages long. So that means that I only get to spend an average of about three seconds on each page. But I hope I’ll be able to communicate at least a flavor of what’s in it. And a little of the whole intellectual structure that I’ve spent the better part of the past twenty years building.

Well, back in the late 1970s I was a young physicist, mainly working on particle physics. But I also did a certain amount of work on cosmology. And from that I got interested in the question of how structures emerge in our universe—from galaxies on down. And I quickly realized that that was actually an instance of a much more general question: how does anything complicated get produced in nature? There are lots of everyday examples—snowflakes, turbulent fluid flows, forms of plants and animals… and lots of others. And my first assumption was that with all the sophisticated math I knew from particle physics and so on, I’d easily be able to figure out what was going on with these ordinary everyday systems. But when I actually tried to do it, it just didn’t seem to work. And gradually I started thinking that perhaps there might be a fundamental problem with the whole approach.

If one looks at history, the idea of using math and mathematical equations to understand nature has been sort of a defining feature of the exact sciences for perhaps 300 years. And it certainly worked out extremely well for Newton and friends in figuring out orbits of comets, and for lots and lots of things since then. But somehow when the behavior one’s looking at is more complicated, it just doesn’t seem to work so well. And gradually what I began to think was that that was actually why there had never really been a good theory for complicated processes in nature—in physics, and particularly in biology and so on. And I got to wondering about whether there might somehow be a way to go beyond the whole paradigm of using mathematical equations in thinking about nature.

That was around 1981. And it so happened that at that time I’d just developed a big software system called SMP that was in some ways a forerunner of Mathematica. And at the core of SMP was a computer language. And what I’d done to design that language was somehow to try to think about all the computations that people might want to do—and then to identify primitives that could be strung together to build up those computations.

Well, that had worked out fairly well—and in a much evolved form I think it’s worked out spectacularly well in Mathematica. But anyway, back in 1981 I had the idea that perhaps—just as I’d been able to find primitives for computations people want to do—I might somehow also be able to find primitives for what nature does.

And the crucial thing I realized is that those primitives don’t have to be based just on traditional mathematical constructs.

If one’s going to be able to do theoretical science, one has to assume that nature follows some kind of definite rules. But why should those rules involve only the kinds of constructs that have been invented in human mathematics—numbers, exponentials, calculus, and so on? Can’t the rules somehow be more general?

Well, in the past there wouldn’t really have been any systematic way to think about such things. But now we have computers—whose programs in effect implement arbitrarily general rules. And the idea I had was that perhaps the kinds of rules that can be embodied in programs might actually be what nature is using.

But what sorts of programs might be relevant? From knowing traditional science, I assumed they’d have to be at least somewhat complicated. But still, I figured I should at least start by looking at really simple programs.

In practical computing we’re used to long, complicated programs specifically set up for particular tasks. But what I wanted to know about were really simple, short programs—say, ones just chosen at random.

What do programs like that do? It’s sort of the simplest possible computer experiment: run really simple programs, and see what they do.

Well, back in 1981 I came up with a particularly convenient kind of program to try—that’s called a cellular automaton.

Here’s how a simple one-dimensional cellular automaton works.

One starts with a row of cells, each either black or white.  [See A New Kind of Science, page 24.]

Then the cellular automaton evolves down the page, with the color of each cell on each step being determined by a definite rule from the color of the cell and its neighbors on the step before.
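
Just to make that concrete, here’s a rough sketch in Python (an illustration of mine, not code from the book) of one step of such a rule. It uses the standard numbering, in which the eight bits of a rule number 0–255 give the new color for each of the eight possible neighborhoods:

    def ca_step(cells, rule):
        """One step of an elementary cellular automaton.
        cells: a list of 0s and 1s; rule: an integer 0-255 (Wolfram numbering).
        The ends of the row wrap around, which is a convenient idealization."""
        n = len(cells)
        return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]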

Well, in this particular case the rule is really simple. It just says that a cell will be black whenever it or its neighbors were black on the row before. And what happens is that we just get a simple uniform black pattern.  [See A New Kind of Science, page 24.]

Well we can use that icon at the bottom to represent the rule—the program—we were using. So what happens if we change the rule a bit?  [See A New Kind of Science, page 25.]

Well now, instead of getting just a uniform black pattern, we get a checkerboard.

But so far none of this is terribly surprising. We’re using very simple rules, and they’re making simple patterns.

Well, let’s try another rule.  [See A New Kind of Science, page 25.]

It’s the same setup as before. But it doesn’t seem to be making any kind of simple repeating pattern. Let’s run it a bit longer.  [See A New Kind of Science, page 26.]

Well, it gets pretty intricate. But we can see it’s very regular: it’s a self-similar fractal pattern—just formed from identical nested pieces.

And one might think that if one has a simple rule and starts from just one black cell, then one would always have to get a pattern that’s somehow very regular—like this.

At least, back in 1982 that’s what I assumed was true. But one day I decided to try a very systematic experiment, and just run every single one of the 256 possible simplest cellular automaton rules.

Well, when I got to rule number 30 this is what I saw.  [See A New Kind of Science, page 27.]

What’s going on here? Let’s run it some more.  [See A New Kind of Science, page 29.]

There’s a bit of regularity over on the left. But otherwise, this looks really complicated—actually kind of random.

Well, when I first saw this, I thought it might just be a problem with our visual system: that really there were regularities, but we just couldn’t see them. So I ran all sorts of elaborate mathematical and statistical tests. And what I found was that no, so far as I could tell, something like the center column of cells really was perfectly random.
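
For anyone who wants to try this, here’s a little sketch (again mine, building on the ca_step function above) that collects the center column of rule 30 and checks the crudest statistic, the fraction of black cells:

    steps = 400
    width = 2 * steps + 1          # wide enough that the edges never reach the center
    cells = [0] * width
    cells[steps] = 1               # a single black cell in the middle
    center = []
    for _ in range(steps):
        center.append(cells[steps])
        cells = ca_step(cells, 30)
    print(sum(center) / len(center))   # hovers near 0.5, as for a random sequence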

Well, this is rather amazing. We have a very simple rule, and we’re starting off from a single black cell. But what we’re getting out is an incredibly complicated pattern—that seems in many ways random.

It just doesn’t seem right. We’ve put so little in, yet we’re getting so much out. It’s not what our ordinary intuition says should happen. I mean, in our everyday experience and, say, in doing engineering what we’re used to is that to make something complicated, we somehow have to start off with complicated plans or use complicated rules. But what we’re seeing here is that actually even extremely simple rules can produce incredibly complicated behavior.

Well, it took me years to come to terms with this phenomenon. And in fact, it’s gradually overturned almost everything I thought I knew about the foundations of science. And it’s what’s led me to spend the past nearly 20 years building a new intellectual structure—really a new kind of science.

Well, OK, having seen rule 30, what more can happen? Here’s another of the 256 simplest cellular automata; this is rule 110.  [See A New Kind of Science, page 32.]

This one grows only on the left. But what’s it doing? Let’s run it a little longer.  [See A New Kind of Science, page 33.]

It’s not making any kind of uniform randomness. Instead, we can see complicated little structures running around. But what’s going to happen here? The only way to find out seems to be just to keep running the system.  [See A New Kind of Science, pages 34–38.]

Are we going to get more of those little structures, or are they all going to die out? Or what’s going to happen? Well, after 2780 steps we finally get the answer: at least in this case, we basically end up with just one structure. But it’s just amazing how much we get, just starting from one little black cell and then following a really simple rule.

Well, in the mid-1980s I’d studied a bunch of cellular automata, and I’d seen these basic phenomena. But I wasn’t sure whether they were somehow specific to cellular automata, or more general. And I had the practical problem that there wasn’t any easy way to set up all the very different types of computer experiments that I’d have to do to find out.

And in fact that was one of the main reasons I started building Mathematica: I wanted once and for all to make a single system that I could use to do all the computations and all the computer experiments I’d ever want. And for five years the task of building Mathematica and our company pretty much consumed me.

But finally in 1991 I decided I could start looking at my science questions again. And it was really incredible. Things that had taken me days to do before now took minutes—and it was easy to do all the experiments I wanted. I guess it was a little like how it must have felt when telescopes or microscopes were first invented. One could just point them somewhere, and almost immediately see a whole world of new phenomena—like moons of Jupiter, or microscopic creatures in a drop of pond water. Well, in the computational world, one just pointed Mathematica somewhere, and suddenly one could see all this amazing stuff.

So the first question was: just how special are cellular automata? Well, I looked at lots of cellular automata, and often saw very complicated behavior.  [See A New Kind of Science, page 67.]

But what about programs with different setups? Here’s a Turing machine for example.  [See A New Kind of Science, page 78.]

It’s got a row of cells, like a cellular automaton. But unlike a cellular automaton, only one cell gets updated at each step, and that cell moves around. Well, if one enumerates the simplest Turing machines, the first quite a few only do these kinds of things.  [See A New Kind of Science, page 79.]
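
Here’s a minimal sketch of that setup (illustrative Python of mine; the rule table below is made up just to show the format, not taken from the book):

    from collections import defaultdict

    def run_turing_machine(rule, steps, state=0):
        """rule maps (state, color) -> (new state, new color, move),
        where move is -1 for left or +1 for right. The tape starts all white."""
        tape = defaultdict(int)    # unbounded tape of cells; unvisited cells are 0
        pos = 0
        for _ in range(steps):
            state, tape[pos], move = rule[(state, tape[pos])]
            pos += move
        return tape

    # A hypothetical 2-state, 2-color machine, just to show the format:
    example = {(0, 0): (1, 1, 1), (0, 1): (0, 1, -1),
               (1, 0): (0, 1, -1), (1, 1): (1, 0, 1)}
    print(sorted(run_turing_machine(example, 50).items()))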

But if one keeps going—still with very simple rules—then suddenly one sees this.  [See A New Kind of Science, page 81.]

The same kind of complexity that we saw in cellular automata.

So what about other kinds of programs? Well, I looked at many, many different kinds. Sequential substitution systems that rewrite strings like an iterated text editor.  [See A New Kind of Science, page 92.]
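
In code, one step of a sequential substitution system is just a find-and-replace (a sketch of mine; these particular string rules are placeholders):

    def sss_step(s, rules):
        """One step: apply the first rule whose pattern occurs in s,
        rewriting only the leftmost occurrence -- one 'edit' in a text editor."""
        for pattern, replacement in rules:
            i = s.find(pattern)
            if i != -1:
                return s[:i] + replacement + s[i + len(pattern):]
        return s    # no rule applies; the string stays fixed

    s = "BA"
    for _ in range(8):
        s = sss_step(s, [("ABA", "AAB"), ("A", "ABA")])
        print(s)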

Register machines—kind of like minimal idealizations of machine language.  [See A New Kind of Science, page 100.]

Also symbolic systems—like generalizations of combinators or kind of minimal idealizations of Mathematica.  [See A New Kind of Science, page 104.]

It’s the same story in two dimensions. In cellular automata.  [See A New Kind of Science, page 174.]

Or in Turing machines.  [See A New Kind of Science, page 185.]

Or in three dimensions.  [See A New Kind of Science, page 183.]

Every time one sees the same thing: even with very simple rules one can get extremely complicated behavior. And one doesn’t even need rules that specify an explicit process of evolution: constraints work too. Like here’s the simplest set of tiling constraints that force a non-periodic pattern.  [See A New Kind of Science, page 219.]

And here are constraints that give a kind of “crystal” that’s forced to have a rule 30 pattern.  [See A New Kind of Science, page 221.]

Wherever one looks, it’s always the same basic thing: incredibly simple rules can give incredibly complicated behavior. It seems to be a very robust—and very general—phenomenon.  [See A New Kind of Science, page 29.]

But how come something that fundamental hadn’t been known for ages? Well, partly it’s because one can only easily see it by doing lots of computer experiments—and it’s only with computers, and with Mathematica, that those have become easy to do.

But more important is that with our ordinary intuition there just didn’t seem to be any reason to try the experiments; it seemed so obvious they wouldn’t show anything interesting. Well, now that one’s made the clear discovery that simple programs—simple rules—can produce complicated behavior, one can go back and find all sorts of hints about it from really long ago.

Nearly 2500 years ago the Greeks looked for examples in prime numbers. And there’s a fairly simple rule—a fairly simple program—for generating all the primes. But the sequence of primes, once generated, looks remarkably irregular and complicated.  [See A New Kind of Science, page 132.]

There’s also a fairly simple rule for generating the digits of pi. But once generated, that sequence looks to us completely random.  [See A New Kind of Science, pages 136–137.]

And actually, there are lots of cases of this kind of thing with numbers. Here’s what happens if you just write out successive powers of 3 in base 2.  [See A New Kind of Science, page 120.]
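
That picture is easy to reproduce; a tiny sketch:

    # Successive powers of 3, written out in base 2: a simple rule,
    # but a complicated-looking pattern of digits.
    for n in range(16):
        print(format(3 ** n, "b").rjust(26))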

Kind of a minimal version of a linear congruential random number generator. And already amazingly complicated. You can also make very complicated sequences from simple integer recurrences.  [See A New Kind of Science, page 130.]

And here for example is the very simplest primitive recursive function that has complex behavior.  [See A New Kind of Science, pages 907–908.]

Well, what about other systems based on numbers? What about for example those favorite things from traditional mathematical science: partial differential equations? Does their continuous character make them work differently? Well, the ones people usually study show only rather simple behavior.  [See A New Kind of Science, page 163.]

But in the space of all possible symbolic equations, I found these.  [See A New Kind of Science, page 165.]

They’re just simple nonlinear PDEs. But even with very simple initial conditions, they end up doing all sorts of complicated things.

Well, actually, it’s a little hard to tell precisely what they do. These are great tests for our state-of-the-art PDE-solving capabilities in Mathematica. But with a continuous system like this there’s always a problem. Because you end up having to discretize it, and without already more or less knowing the answer, it’s hard to tell whether what you’re seeing is something real. And that’s why among mathematicians numerical experiments can sometimes have kind of a bad name.

But in a discrete system like rule 30, the bits are just the bits, and so one can tell one’s seeing a real phenomenon. And actually, rule 30 is so simple to set up, there’s probably no reason the Babylonians couldn’t have done it.  [See A New Kind of Science, page 30.]

And sometimes I wonder whether—a bit like these—an ancient mosaic of rule 30 might one day be unearthed. But I rather think that if rule 30 had actually been known in antiquity, a lot of ideas about science and nature would have developed somewhat differently. Because as it is, it’s always seemed like a big mystery how nature could manage—apparently so effortlessly—to produce so much that seems to us so complex. It’s like nature has some special secret that allows it to make things that are much more complex than we as humans normally build. And often it’s seemed that that must be evidence that there’s something somehow beyond human intelligence involved.  [See A New Kind of Science, page 43.]

But once one’s seen rule 30, it suggests a very different explanation.  [See A New Kind of Science, page 30.]

It suggests that all it takes is for things in nature to follow the rules of typical simple programs. And then it’s almost inevitable that—like in the case of rule 30—their behavior can be highly complex. In the way we as humans are used to doing engineering and building things, we tend to operate under the constraint that we have to foresee what the things we’re building are going to do. And that means that we’ve ended up being forced to use only a very special set of programs that happen always to have simple, foreseeable behavior. But the point is that nature is presumably under no such constraint. So there’s nothing wrong with it using something like rule 30—and in that way inevitably producing all sorts of complexity.

Well, once one starts thinking in terms of simple programs it’s actually amazing how easily one can start to understand the essential features of what’s going on in all sorts of systems in nature.

As a simple example, let’s talk about snowflakes.  [See A New Kind of Science, page 370.]

Well, crystals grow by starting from a seed, then successively adding pieces of solid. And one can try to capture that with a simple two-dimensional cellular automaton. Imagine a grid, where every cell either has solid in it or not. Then start from a seed and have a rule that says solid will be added at any cell that’s adjacent to one that’s already solid. Here’s what one gets.  [See A New Kind of Science, page 369.]

It’s an ordinary-looking faceted crystal, here on a square grid. One can do the same thing on a hexagonal grid.  [See A New Kind of Science, page 369.]

But for snowflakes there’s an important effect missing here. When a piece of ice solidifies, there’s some latent heat released. And that inhibits more ice nearby. Well, what’s the simplest way to capture that effect? Just change the cellular automaton to say that ice gets added only if there’s exactly one neighboring cell of ice. OK, so what happens then? Well here’s the answer.  [See A New Kind of Science, page 371.]
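
In code the model stays tiny. Here’s a sketch of mine, using axial coordinates as a stand-in for the hexagonal grid:

    # Each cell (q, r) on a hexagonal grid has these six neighbors.
    HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def snowflake_step(ice):
        """ice: the set of frozen cells. An empty cell freezes only if exactly
        one of its six neighbors is frozen -- the latent-heat inhibition."""
        frontier = {(q + dq, r + dr) for (q, r) in ice
                    for (dq, dr) in HEX_NEIGHBORS} - ice
        return ice | {(q, r) for (q, r) in frontier
                      if sum((q + dq, r + dr) in ice
                             for (dq, dr) in HEX_NEIGHBORS) == 1}

    ice = {(0, 0)}                 # start from a single frozen seed cell
    for _ in range(20):
        ice = snowflake_step(ice)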

These are all the stages.  [See A New Kind of Science, page 371.]

And these look an awful lot like real snowflakes. It really seems like we’ve captured the basic mechanism that makes snowflakes have the shapes they do. And we get various predictions: like that big snowflakes will have little holes in them. Which indeed they do.

OK, but even though our pictures look a lot like real snowflakes, there are obviously details that are different. But the thing one has to understand is that that’s what has to happen with any model. Because the whole point of a model is to capture certain essential features of a system—and then to idealize away everything else. And depending on what one’s interested in, one may pick out different features to capture.

And so the cellular automaton model is good, for example, if one’s interested in the basic question of why snowflakes have complicated shapes—or what the distribution of shapes in some snowflake population will be. But it’s not so useful if one’s trying to figure out specifically how fast each arm will grow at a certain temperature.

I might say that there’s actually a general confusion about modeling that often seems to surface when people hear about cellular automata. They’ll say: OK, it’s all very nice that cellular automata can reproduce what snowflakes do, but of course real snowflakes aren’t actually made from cellular automaton cells. Well, the whole point of a model is that it’s supposed to be an abstract way of reproducing what a system does; it’s not supposed to be the system itself. I mean, when we have differential equations that describe how the Earth moves around the Sun, we don’t imagine that inside the Earth there are all sorts of little Mathematicas solving differential equations. Instead, it’s just that the differential equations represent—abstractly—the way the Earth moves.

And it’s exactly the same thing with models based on cellular automata: the cells and rules abstractly represent certain features of a system. And again, what abstract representation—what type of model—is best depends on what one’s interested in. For snowflakes there are certainly traditional differential equations that could be used. But they’re complicated and hard to solve. And if what one’s actually interested in is the basic question of why snowflakes have complicated shapes, the cellular automaton model is a much better way to get at that and to work out predictions about it.

OK, let’s take another example. Let’s talk about fluid turbulence.  [See A New Kind of Science, page 377.]

Whenever there’s an obstacle in a fast-moving fluid, the pattern of flow around it looks complicated and quite random. But where does that randomness come from?

Well, one can ask the same question about randomness in any system. And I think there are really three basic ways randomness can arise.

The most traditional is that randomness can come from external perturbations—from the environment. Like with a boat bobbing on an ocean. The boat itself doesn’t produce randomness, but it moves randomly because it’s exposed to the randomness of the ocean. It’s the same kind of thing with Brownian motion, or with electronic noise.

Well, another way to get randomness that’s come up more recently is what’s usually called chaos theory. And the key idea there is that randomness isn’t continually injected into a system, but instead comes from the details of the initial conditions. Think about tossing a coin or spinning a wheel. Once it’s started, there’s no randomness in how it moves. But which way it’ll be pointed when it stops depends sensitively on what its initial speed was. And if it was started say by hand, there’ll always be a little randomness in that—so the outcome will seem random.

Well, there are more elaborate versions of this, where one’s effectively taking numbers that represent initial conditions, and successively excavating higher and higher order digits in them. And from the perspective of ordinary continuous mathematics, it’s a bit tricky to account for where the randomness comes from here. But in terms of programs it becomes very clear that any randomness is just randomness one put in, in the detailed pattern of digits in the initial conditions. So again one ends up saying that “randomness comes from outside the system one’s looking at.”
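
The classic textbook version of this is the shift map, and a few lines of code (a sketch of mine) make the point clear: every step just uncovers the next binary digit of the initial condition, so all the apparent randomness was in those digits:

    # The shift map x -> fractional part of 2x. Each step exposes the next
    # binary digit of x, so the "random" output is just the digits of the
    # initial condition, excavated one per step.
    x = 0.8232795028               # an arbitrary initial condition
    for _ in range(20):
        print(int(x >= 0.5), end="")   # the current leading binary digit
        x = (2 * x) % 1.0
    print()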

OK, well. So is there any other possibility? Well, it turns out there is. Just look at rule 30.  [See A New Kind of Science, page 30.]

Here one doesn’t have any randomness initially. One just has a single black cell. And one doesn’t have any subsequent input from the outside. But what happens is that the evolution of the system just intrinsically generates apparent randomness.

OK, so what about fluid turbulence? Where does the randomness come from there? With the traditional differential equations way of modeling fluids it’s very difficult to find out. But one can make a simple cellular automaton model where it’s much easier. Remember that at the lowest level a fluid just consists of a bunch of molecules bouncing around. And actually we know that the details aren’t too important. Because air and water and all sorts of other fluids that have completely different microscopic structures still show the same continuum fluid behavior. So knowing that we can try to just make a minimal model for the underlying molecules. Just have them be on a discrete grid, with discrete velocities, and so on.  [See A New Kind of Science, page 378 and page 380.]

Well, if one does that, one gets both a rather nice practical way to do fluid dynamics and a way to start addressing some fundamental questions. And what one seems to find is that there’s no need for randomness from the environment or randomness from initial conditions; one can get randomness from intrinsic randomness generation—from something like rule 30.  [See A New Kind of Science, page 30.]

Well, OK. So what? If one’s trying to make models of fluid turbulence it’s kind of important to know where the randomness in it comes from. And intrinsic randomness generation makes at least one immediate prediction: it says that in a carefully enough controlled experiment, the turbulence should be exactly repeatable. See, if the randomness comes from the environment or from details of initial conditions, it’ll inevitably be different in different runs of an experiment. But if it’s like rule 30, then it’ll always be the same every time one runs the experiment.

Well, one can ask about randomness in all sorts of systems. Like in finance, for example. Where the most obvious feature of almost any market is that prices seem to fluctuate quite randomly. But where does that randomness come from? Some of it is surely from the environment. But some of it is probably intrinsically generated. And knowing that is important if you want for example to know if you can predict it.

Well, back in physics one place randomness has been discussed a lot is the Second Law of Thermodynamics—the law of entropy increase. A lot is known about it, but there’s still a basic mystery about how the laws of physics can be reversible, yet in our everyday experience we see so much apparent irreversibility. And I think that intrinsic randomness generation finally gives one a way to explain that. It’s a slightly long story, but it’s basically that things like rule 30 can so encrypt the data associated with initial conditions that no realistic experiments or observations can ever decode it to see how to go backwards.

Well, OK. That’s a little about physics. What about biology? Well, there’s certainly a lot of complexity in biology. And people often assume that it’s kind of a higher level than in physics. And usually they figure that’s somehow because of adaptation and natural selection. But in reality it’s never been clear why natural selection should actually lead to much complexity at all. And that’s probably why at least outside of science many people have thought there must be something else going on. But the question is: what is it? Well, I think that actually it’s just the abstract fact—that we discovered with rule 30 and so on—that among simple programs it’s actually very easy to get complexity. Of course, the complete genetic program for a typical organism is pretty long and complicated—for us humans about the same length as the source code for Mathematica. But it’s increasingly clear that lots of the most obvious aspects of forms and patterns in biology are actually governed by rather small programs. And looking at some of the kinds of regularities that one sees in biological systems, that doesn’t seem too surprising. But when one sees more complicated stuff, traditional intuition tends to suggest that somehow it must have been difficult to get. And with the natural selection picture, there’s sort of the idea that it must be the result of a long and difficult process of optimization—or of trying to fit into some complicated ecological niche.  [See A New Kind of Science, page 385.]

Well, I actually think that that’s not where many of the most obvious examples of complexity in biology come from. Natural selection seems to be quite good at operating on small numbers of smooth parameters—lengthening one bone and shortening another, and so on. But when there’s more complexity involved, it’s very hard for natural selection to operate well. And instead, what I think one ends up seeing is much more just the outcome of typical randomly chosen genetic programs. Let me give you an example. Here are some mollusc shells with pretty complicated pigmentation patterns on them.  [See A New Kind of Science, page 423.]

Well, in the past one might have assumed that to get things as complicated as this must be difficult. And that somehow it must be the result of a sophisticated biological optimization process. But if you look at these patterns, they look incredibly similar to patterns we get from cellular automata like rule 30. Well, in the actual shell, the pattern is laid down by a line of pigment-producing cells on the growing edge of the shell. And it seems that what happens can be captured rather well by a cellular automaton rule. So why one rule and not another? If one looks at different species, one sees all sorts of different patterns.  [See A New Kind of Science, page 423.]

But there are definite classes. And amazingly, they correspond extremely well with the classes of behavior in cellular automata.  [See A New Kind of Science, page 424.]

It really is as if the molluscs of the world are just sampling the space of possible simple programs—and then displaying the results on their shells.

With all the emphasis on natural selection, one’s kind of gotten used to the idea that there just can’t be much of a fundamental theory in biology—and that the things one sees in current organisms must just reflect detailed accidents in the history of biological evolution. But what the mollusc shell example suggests is that things might be different, and that instead it might be reasonable to think of different types of organisms as somehow uniformly sampling a whole space of possible programs.

Let me give one more example. Shapes of leaves.  [See A New Kind of Science, page 403.]

One might think they’re too diverse to explain in any uniform way. But actually it turns out that there’s a simple type of program that seems to capture almost all of them. It just involves successive repeated branchings.  [See A New Kind of Science, page 400.]
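
Here’s a sketch of a program of that type (mine; the branching angles and scale factor are illustrative parameters, not values from the book): at each step, every growing tip spawns a few shorter branches at fixed angles.

    import cmath

    def branch(tips, angles, scale):
        """One round of branching: each growing tip (position, direction)
        is replaced by a shorter, rotated branch for each angle."""
        return [(pos + d, d * scale * cmath.exp(1j * a))
                for (pos, d) in tips for a in angles]

    tips = [(0j, 1j)]              # a single stem, pointing straight up
    for _ in range(6):
        tips = branch(tips, [-0.5, 0.0, 0.5], 0.6)
    # The cloud of tip positions now traces out a leaf-like outline.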

And what’s remarkable is that the limiting shapes one gets look just like actual leaves, sometimes smooth, sometimes jagged, and so on.  [See A New Kind of Science, page 402.]

Well, here’s a simple case where one can actually lay out all of the possible shapes that one gets.  [See A New Kind of Science, page 406.]

And one thing one can see is that just changing a parameter can change completely what type of leaf one gets—and how it might function biologically. Well, to be a little more sophisticated, one can summarize features of possible leaves in a parameter space set—which turns out to be a rather wonderful simpler, linear analog of the Mandelbrot set.  [See A New Kind of Science, page 407.]

And from the properties of this set one can deduce all sorts of features of leaves and their likely evolution. Well, OK. There’s a lot to say about how one can use ideas about simple programs to capture what’s going on in biology, both at the macroscopic levels of the kind I’ve been talking about, and at molecular levels. You know, if one looks at the history of biology, there’s an interesting analogy. About fifty years ago, there was all this data on genetics and heredity, and it seemed really complicated. But then along came the idea of digital information in DNA, and suddenly one could see a mechanism.

Well, there are now a bunch of people who think that my science might lead us to the same kind of thing for big issues in biology today, like aging and cancer. And that somehow by thinking in terms of simple programs, one may be able to see the right primitives, and tell what’s going on. Which of course would be very exciting. But let me now turn for a few minutes back to physics—and in particular fundamental physics.

Well, traditional mathematical approaches have obviously had lots of success there. But still they haven’t been able to give us a truly fundamental theory of physics. And I suspect that the reason for that is that really one needs more primitives: not just ones from traditional mathematics, but also the more general ones that one can have in programs. And now that we’ve seen that very simple programs can produce immensely rich and complex behavior, one can’t help wondering whether perhaps all the amazing things we see in our universe couldn’t just be the result of some particular simple program. That’d be pretty exciting. To have a little program that’s a precise ultimate model of our universe. So that if one just runs that program long enough, it’ll reproduce every single thing that happens in our universe.

But, OK, what might such a program be like? Well, one thing that’s kind of inevitable is that very few familiar features of our universe will immediately be visible in the program. There just isn’t room. I mean, if the program is small, there’s just no way to fit in separate identifiable pieces that represent electrons, or gravity, or even space or time. And in fact I think that if the program’s going to be really small, it sort of has to have the very least possible structure already built in.

And for example I think a cellular automaton already has far too much structure built in. For example, it’s got a whole rigid array of cells laid out in space. And it also separates the notion of space from the notion of states of cells. And I don’t think one needs that. I mean, in ordinary physics, space is a kind of background—on top of which matter and everything exists. But I think that in an ultimate model one only needs space—and one doesn’t need any other basic concepts.

Well, OK, so given that, what might space be? We normally think of space as just being something that is—not something that has any kind of underlying structure. But I think it’s a little like what happens with fluids. Our everyday experience is that something like water is a continuous fluid. But we actually know that underneath, it’s made up of lots of little discrete molecules. And I think something similar is happening with space. And that at a small enough scale, space is just a huge collection of discrete points. Actually, really, a giant network—with a changing pattern of connections between points. Where all that’s specified is how each node—each point—is connected to others. Here’s a small example.  [See A New Kind of Science, page 1039.]

But how can anything like space as we know it come from that? Well, actually it’s quite easy. In fact, here are networks that correspond to one- and two- and three-dimensional space.  [See A New Kind of Science, page 477.]

And actually these are all the same kind of network: the only thing that’s different is the pattern of connections in them.

Well, I’ve drawn these to make the correspondence with ordinary space clear. But there’s intrinsically absolutely no information in the network about how it should be drawn. It’s just a bunch of nodes, with a certain pattern of connections.

Well, OK. So how do we even tell whether we’re in a certain dimension of space? It’s actually quite easy. Just start at a node, and go out r connections from it.  [See A New Kind of Science, page 477.]

And count how many nodes one reaches. In two dimensions, it’ll be roughly the area of a circle: pi r^2. In three dimensions, the volume of a sphere. And in d dimensions, something that grows like r^d. So given a network, that’s how one can tell whether it’s like a certain dimension of space.  [See A New Kind of Science, page 479.]
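
Here’s that measurement carried out on an explicit network (a sketch of mine), using a network connected like two-dimensional space; the fitted exponent comes out close to 2:

    from collections import deque
    import math

    def ball_sizes(neighbors, start, rmax):
        """Breadth-first search: how many nodes lie within r connections
        of start, for r = 1 .. rmax."""
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            if dist[node] == rmax:
                continue
            for m in neighbors(node):
                if m not in dist:
                    dist[m] = dist[node] + 1
                    queue.append(m)
        return [sum(d <= r for d in dist.values()) for r in range(1, rmax + 1)]

    # A network connected like two-dimensional space: the grid graph.
    grid = lambda n: [(n[0] + 1, n[1]), (n[0] - 1, n[1]),
                      (n[0], n[1] + 1), (n[0], n[1] - 1)]
    sizes = ball_sizes(grid, (0, 0), 30)
    print(math.log(sizes[29] / sizes[14]) / math.log(30 / 15))   # close to 2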

But OK, what about time? In the usual mathematical formulation of physics, space and time are always very much the same kind of thing—just different variables corresponding to different dimensions. But when one looks at programs, they seem much more different. I mean, in a cellular automaton, for example, one moves in space by going from one cell to another. But one moves in time by actually applying the cellular automaton rule. OK. So can space and time really be that different? It’s all rather tricky, but I think at the lowest level they really are. It’s definitely not like in a cellular automaton though. Because in a cellular automaton there’s some kind of global clock—with every cell getting updated in parallel at every tick.

Well, it’s hard to imagine such a global clock in our universe. So what might actually be going on? Well, here’s something that at first seems crazy. How about if the universe works a bit like a Turing machine—or what I call a mobile automaton—where at each step there’s only one cell that’s getting updated?  [See A New Kind of Science, page 487.]

There’s no problem with synchronization here, because there’s only one place where anything happens at a time. But how can it possibly be right? After all, we normally have the impression that everything sort of goes through time together. I certainly don’t have the impression that what’s happening for example is that first I’m getting updated, then you’re getting updated, and so on. But the point is: how would I know? Because until I’m updated I can’t tell whether you’ve been updated or not.

Well, following that all the way through one realizes that all that we can ever actually know about in the end is a kind of a causal network of what event influences what other event. And here’s an example of how one can go from an underlying sequence of updates—in this case in a mobile automaton—to a causal network. And the important thing here is that even though the updates only affect one cell at a time, the final causal network corresponds to something kind of uniform in spacetime.  [See A New Kind of Science, page 489.]

OK, so how might time work in the context of the space networks I talked about before? The obvious thing is to imagine that there’s an updating rule that says that whenever there’s a piece of network that has a particular form, it should get replaced by a piece of network with another form. So here are some examples of rules like that.  [See A New Kind of Science, page 509.]

But there’s an immediate problem: if there are several places in the network where a particular rule could apply, where should one update first? Well, in general, different updating orders will lead to different causal networks. And one will sort of get a whole tree of possible histories for the universe. And then to say what actually happens in our universe, one’ll somehow have to have more information, to say which branch one’s on. Well, I don’t consider that very plausible. And it turns out there’s another, rather subtle, possibility. It turns out that with the appropriate kinds of underlying rules it actually doesn’t matter what order they get applied in: there’s what I call causal invariance, which means that the causal network one gets out is always the same. Technically, this is related to so-called confluence in rewrite systems, as well as to the way Mathematica manages to find canonical forms for expressions. But what it means is that when one uses rules based for example on graphs like these, one has causal invariance, and so in a sense there’s always just a single thread of time in the universe.  [See A New Kind of Science, page 515.]

Well, OK, in addition to making there be a single thread of time, this has another important consequence: it turns out to imply that special relativity must hold. It’s a slightly complicated story, but for those who know about such things, let me just say that different updating orders correspond to different spacelike hypersurfaces.  [See A New Kind of Science, page 516.]

And that’s why—subject to various tricky issues of limits and averages and things—causal invariance implies relativistic invariance.

Well, OK. Let’s go on. What about general relativity—the standard theory for gravity? One needs to start off by talking about curvature in space. Well, here’s an example of a network that corresponds to flat two-dimensional space.  [See A New Kind of Science, page 532.]

But if we change the connections to mix in some heptagons or pentagons we get a space that has curvature.  [See A New Kind of Science, page 532.]

Remember that in two dimensions the number of nodes we reach by going out distance r grows like r^2. Well, in ordinary flat space it’s exactly r^2. But in curved space there’s a correction, proportional to what’s called the Ricci scalar. Well, that’s already kind of interesting—because the Ricci scalar is something that appears in the Einstein equations for general relativity.

Well, the whole story is quite complicated. We need to look at curvature not just in space but in spacetime defined by causal networks. But it then turns out that the growth rates of volumes of spacetime cones are related to the so-called Ricci tensor. And then—with certain microscopic randomness and other conditions—one can derive conditions on the Ricci tensor. And—guess what—they seem to be exactly the Einstein equations. There are many issues and caveats. But it’s rather exciting. It seems like from almost nothing one’s been able to derive a major feature of our universe, namely general relativity and gravity.

OK, so another major thing in our universe is particles, like electrons and photons and so on. Well, remember that all we have in the universe is space. So how can we get particles? Well, here’s how it can work in something like a cellular automaton. This happens to be our friend rule 110.  [See A New Kind of Science, page 229.]

It starts off from random initial colors of cells. But it quickly organizes into a few persistent localized structures. That act just like particles. Like here are a couple of collisions between them.  [See A New Kind of Science, page 295.]

Two particles come in, and after some interactions, a bunch of particles come out. It’s just like an explicit version of a Feynman diagram in particle physics.

Well, OK, so what about in a network? It’s the same kind of thing. Where the particles are like definite little tangles—like this nonplanar piece in an otherwise planar network.  [See A New Kind of Science, page 527.]

Well, talking about particles brings up quantum mechanics. And we can already see a little hint of quantumness right here. A sort of delocalization, where we can move that particle tangle around just by redrawing the network. But quantum mechanics is really a big bundle of mathematical results and constructs. And actually, I suspect it’ll be easier to go from my basic theory to quantum field theory than to quantum mechanics—just like it’s easier to go from molecular dynamics to fluid mechanics than to rigid-body mechanics. But one general feature of quantum mechanics is that it has fundamental randomness built in. Yet my theory is completely deterministic. But the point is that it makes its own randomness—like in rule 30. And actually that randomness isn’t only crucial in giving quantum mechanics; it’s also needed just to build up space and time. Still, one might think that with determinism underneath, one could never get the kind of special correlations that are observed in quantum mechanics. But it turns out that the theorems about that rely on implicit assumptions about spacetime. Which aren’t true for networks. Where there can be long-distance threads that connect particles, sort of outside the usual 3+1-dimensional spacetime structure.

And, you know, it’s amazing how many sophisticated features of known physics seem to come out of these simple network programs I’ve been talking about. And I must say that it all makes me increasingly hopeful that it’ll really be possible to find a single simple program that really is the ultimate program for the universe. There are lots of technical difficulties, and a lot of tools that have to be built. And in fact you can watch for those in future versions of Mathematica. But I think in the end it’s going to work. And it’s going to be pretty exciting.

Well, OK. I want to come back now to the original discovery that really launched everything I’ve been talking about: the discovery that even simple programs—like rule 30—can produce immensely complex behavior.  [See A New Kind of Science, page 30.]

So why does that happen? What’s the fundamental reason? To answer that one needs to set up a somewhat new conceptual framework. And the basis of that is to think about all processes as computations. The initial conditions for a system are the input. And the behavior that’s generated is the output. Well, sometimes the computations are ones that we—kind of—immediately know the point of. Like here’s a cellular automaton that computes the square of any number.  [See A New Kind of Science, page 639.]

You give it a block of n cells, and it generates a block of n^2 cells. And here’s a cellular automaton that generates the primes.  [See A New Kind of Science, page 640.]

But actually any cellular automaton can be thought of as doing a computation. It just isn’t necessarily a computation that we know the point of beforehand. OK, so we have all sorts of systems and they do all sorts of computations. But how do all these computations compare? Well, we might have thought that every different system would always do a completely different kind of computation. So that only adding machines would do addition. And there’d be a different multiplying machine to do multiplication. But the remarkable idea that’s now about 70 years old is that no, that’s not necessary. Instead, it’s possible to make a universal machine that can do any computation if it’s just fed the right input. And of course that’s been a pretty important idea. Because it’s the idea that makes software possible, and really it’s the idea that launched the whole computer revolution. Yet in the past that idea never had much fundamental effect on natural science. But one of the things I’ve realized is that it’s actually critically important there too.

OK, so let’s talk about what it means to be a universal system. Basically, it’s that with appropriate input, the system can be programmed to act like any other system. Here’s an example of that: a universal cellular automaton.  [See A New Kind of Science, page 645.]

And the idea is that by changing initial conditions, this single cellular automaton can be made to act like any other cellular automaton. OK. So there it’s behaving like rule 254, which happens to make a simple uniform pattern. Here it’s behaving instead like rule 90.  [See A New Kind of Science, page 646.]

And here it’s behaving like rule 30.  [See A New Kind of Science, page 647.]

And remember, each of these pictures is of the same universal cellular automaton, with the same underlying rules. But what’s happening is that by giving different initial conditions, we’re effectively programming the universal cellular automaton to emulate all sorts of other cellular automata. And in fact it’s able to emulate absolutely any other cellular automaton, with any rules whatsoever.

Now one might have thought that a system would only be able to emulate systems that were somehow simpler than itself. But universality means that we can have a fixed system that can emulate any other system, however complicated it may be.

So in a sense, once one’s got a universal system, one’s maxed out from a computational point of view—one’s got something that can do essentially any computation, however sophisticated.
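
The everyday analog of this (my loose illustration, not a construction from the book) is an interpreter: a single fixed program that, fed the description of any rule as part of its input, reproduces that rule’s behavior:

    # A single fixed function that emulates any of the 256 elementary rules:
    # the rule to emulate is simply part of the input, the way a program is
    # just data to an interpreter. (Uses ca_step as sketched earlier.)
    def emulate(rule, cells, steps):
        for _ in range(steps):
            cells = ca_step(cells, rule)
        return cells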

OK, so what about all these various cellular automata like rule 30?  [See A New Kind of Science, page 30.]

Or all the systems we see in nature? How sophisticated are the computations that they’re doing?

Well, I spent a long time thinking about this and accumulating all sorts of evidence. And what I ended up concluding is something that at first seems pretty surprising. I call it the Principle of Computational Equivalence. It’s a very general principle, and in its roughest form what it says is this: that essentially any time the behavior of a system looks to us complex, it will end up corresponding to a computation of exactly equivalent sophistication.

If we see behavior that’s repetitive or nested then it’s pretty obvious that it corresponds to a simple computation.  [See A New Kind of Science, page 24.]

But what the Principle of Computational Equivalence says is that when we don’t see those kinds of regularities, we’re almost always dealing with a process that’s in a sense maximally computationally sophisticated. Now at first that’s pretty surprising. Because we might have thought that the sophistication of the computations that get done would depend on the sophistication of the rules that got put in. But the Principle of Computational Equivalence says it doesn’t. And that immediately gives us a prediction. It says that even though their rules are extremely simple, systems like rule 30 should be computation universal.  [See A New Kind of Science, page 30.]

Well, normally we’d imagine that to achieve something as sophisticated as computation universality we’d somehow need sophisticated underlying rules. And certainly the computers we use that are universal have CPU chips with millions of gates, and so on. But the Principle of Computational Equivalence says you don’t need all of that. It says that even cellular automata with very simple rules should be universal. Well, here’s one of them. This is rule 110. I showed it earlier.  [See A New Kind of Science, page 229.]

It’s got a really simple rule. But as you can see, it does some fairly complicated things. It’s got all these structures running around that seem like they might be doing logic operations or something. But can one really assemble them to get something one can see is universal? Well, after a lot of painstaking work, one finds that one can. And that means that rule 110 is indeed universal!

Well, that’s just what the Principle of Computational Equivalence said should be true. But it’s really a remarkable thing. Because it means that this little rule can in effect produce behavior that’s as complex as any system.  [See A New Kind of Science, page 32.]  One doesn’t need anything like a whole computer CPU to do universal computation. One just needs this little rule. And that has some very important consequences when it comes to thinking about nature. Because we wouldn’t expect to find whole computer CPUs just lying around in nature, but we definitely can expect to find things with rules like rule 110. And that means for example that lots of everyday systems in nature are likely to be universal.

By the way, for 40 years this was the simplest Turing machine that was known to be universal. [See A New Kind of Science, page 706.]

But now we know that this much simpler Turing machine is actually universal.  [See A New Kind of Science, page 707.]

And in fact I suspect that one can go even further—and that this little thing is the very simplest Turing machine that’s universal.  [See A New Kind of Science, page 709.]

Well, there’s a lot to say about what the Principle of Computational Equivalence is, and what it means. One thing it does is to make Church’s thesis definite by saying that there really is a hard upper limit on the computations that can be done in our universe. But the place where the principle really starts to get teeth is when it says that not only is there an upper limit—but that limit is actually reached most of the time. With incredibly simple rules one’ll often get just simple behavior that’s, say, repetitive or nested. But the point is that if one makes the rules even a tiny bit more complicated, then the Principle of Computational Equivalence says that one immediately crosses a threshold—and ends up with a system that’s maximally computationally sophisticated.

Actually the principle goes even further than that. Normally when one talks about universal computation one imagines being able to set up whatever initial conditions one wants. But the Principle of Computational Equivalence says that that’s not necessary. Because it says that even when the initial conditions are simple, there’ll still usually be maximally sophisticated computation going on.

OK, so what does this all mean? Well, first of all it gives us a way to answer the original question of how something like rule 30 manages to show behavior that seems so complex.  [See A New Kind of Science, page 30.]

The point is that there’s always a competition between an observer and a system they’re observing. And if the observer is somehow computationally more sophisticated than the system, then they can in a sense decode what the system is doing—so it’ll look simple to them. But what the Principle of Computational Equivalence says is that in most cases the observer will be exactly computationally equivalent to the system they’re observing. And that’s then why the behavior of the system will inevitably seem to them complex.

Well, a related consequence of the Principle of Computational Equivalence is a very important phenomenon that I call computational irreducibility. Let’s say you know the rules and the initial condition for a system. Well, then you can certainly work out what the system will do by explicitly running it. But the question is whether you can somehow shortcut that process. Can you for example just work out a formula for what will happen in the system, without ever explicitly having to trace each step? If you can, then what it means is that you can figure out what the system will do with a lot less computational effort than it takes the system itself. And that kind of computational reducibility is at the core of most traditional theoretical science. If you want to work out where an idealized Earth will be a million years from now, you don’t have to trace all its million orbits; you just have to plug a number into a formula and get out a result. But the problem is: what happens when the behavior is more complex? If a system is repetitive—or even nested—it’s easy to shortcut things. But what about a case like this?  [See A New Kind of Science, page 740.]

There’s certainly no obvious way to shortcut this. And in fact I think it’s computationally irreducible: there’s essentially no way to work out what the system will do by any procedure that takes less computational effort than just running the system and seeing what happens. In traditional theoretical science, there’s sort of been an idealization made that the observer is infinitely computationally powerful relative to the system they’re observing. But the point is that when there’s complex behavior, the Principle of Computational Equivalence says that instead the system is just as computationally sophisticated as the observer. And that’s what leads to computational irreducibility. And that’s in a sense why traditional theoretical science hasn’t been able to make more progress when one sees complexity. There are always pockets of reducibility where one can make progress, but there’s always a core of computational irreducibility.
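
Here is the contrast in program terms, as a minimal Python sketch: for, say, a period-2 pattern there is a closed form, so one can jump straight to any step; for rule 30 no such shortcut is known, and one just has to take all the steps:

    def repetitive_cell(t):
        # A period-2 system: its state at step t is just t mod 2,
        # an O(1) formula, however large t is.
        return t % 2

    print(repetitive_cell(10**18))   # instant, no simulation needed

    def eca_step(rule, cells):
        cells = (0, 0) + cells + (0, 0)
        return tuple((rule >> (4*cells[i-1] + 2*cells[i] + cells[i+1])) & 1
                     for i in range(1, len(cells) - 1))

    # For rule 30 no formula like that is known: to learn the center
    # cell at step 1000, we seem to have to take all 1000 steps.
    cells = (1,)
    for _ in range(1000):
        cells = eca_step(30, cells)
    print(cells[len(cells) // 2])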

Well, I think computational irreducibility is a pretty important phenomenon, one that's relevant even beyond what are normally viewed as purely scientific issues. Take, for example, the problem of free will. I mean, it's always seemed mysterious how we manage to act in ways that seem free of obvious predictive laws if our brains actually follow definite underlying laws. But I think a crucial ingredient of the answer is computational irreducibility: that even with definite underlying laws, there can still be no effective way to predict what a system will do, except in effect just by running the system and seeing what happens.

Well, computational irreducibility can also be viewed as what leads to the phenomenon of undecidability—originally discovered in the 1930s. Look at this cellular automaton.  [See A New Kind of Science, page 754.]

And ask the question: starting from a given initial condition, will the pattern that’s produced eventually die out, or will it just keep going forever? In the case on the top left, just running the cellular automaton tells one that after 36 steps it dies out. But in the second case, it takes 1017 steps to find that out. And in the next two cases, after 10 million steps it’s still not clear what’s going to happen. And the point is that if there’s computational irreducibility, there’s no way to shortcut this evolution—so there’s no finite computation at all that can always figure out what will happen after an infinite time. And that means one has to say that it’s in general formally undecidable what will happen. Well, undecidability has been known about in mathematics and in computer science for quite a long time. But with the Principle of Computational Equivalence one realizes now that it’s also relevant to natural science. If one asks questions about infinite time or infinite size limits the answers can be undecidable. Like whether a body will ever escape in a gravitational three-body problem. Or whether some idealized biological cell line will grow forever or will eventually die out. Or whether there’s a way to arrange some complicated molecule into a crystal below a certain temperature.
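
In program terms, watching is all one can do, as in this sketch. (The rule and initial conditions below are arbitrary stand-ins, not the specific cellular automaton in the book's picture.) If the pattern dies out within the step budget you learn that; if the budget runs out, you have learned essentially nothing:

    def eca_step(rule, cells):
        cells = (0, 0) + cells + (0, 0)
        return tuple((rule >> (4*cells[i-1] + 2*cells[i] + cells[i+1])) & 1
                     for i in range(1, len(cells) - 1))

    def watch(rule, init, budget):
        cells = tuple(init)
        for step in range(1, budget + 1):
            cells = eca_step(rule, cells)
            if not any(cells):
                return f"dies out at step {step}"
        return f"still going after {budget} steps; no conclusion"

    # Arbitrary examples: one pattern dies quickly, the others are
    # still going when the budget runs out.
    print(watch(32, (1, 0, 1), 500))
    print(watch(30, (1,), 500))
    print(watch(110, (1,), 500))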

Well, there's another big place where I think computational irreducibility is very important, and that's in the foundations of mathematics. It may sound kind of obvious, but it's actually a deep fact about mathematics that it's often hard to do, even though it's based on pretty simple axioms. In fact, right here are the ones for essentially all of current mathematics.  [See A New Kind of Science, pages 773–774.]

But even though these axioms are simple, proofs of things like the four-color theorem or Fermat’s Last Theorem are really long. And it turns out that one can think of that as just another case of the phenomenon of computational irreducibility. Let me show you a little about how that works. Here’s an example of a simple proof in mathematics.  [See A New Kind of Science, page 775.]

These are some axioms—that in this case specify equivalences in logic. And this is a proof. It starts from one expression, then keeps on using the axioms, and eventually proves the theorem that that expression at the top is equivalent to the one at the bottom. Well, OK, so as a kind of minimal idealization of math, one can imagine that the axioms just define transformations between strings. So with the axioms at the bottom here, here are proofs of a few theorems.  [See A New Kind of Science, page 776.]
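
One can play with this idealization directly. In this minimal Python sketch the "axioms" are a pair of made-up string transformations, usable in either direction, and a proof that two strings are equivalent is just a path found by breadth-first search:

    from collections import deque

    # Made-up string-rewrite "axioms"; each may be applied anywhere
    # in a string, in either direction.
    axioms = [("AB", "BA"), ("A", "AA")]
    rules = axioms + [(b, a) for a, b in axioms]

    def prove(start, goal, max_len=8):
        # Breadth-first search over all rewritings: the path found
        # is a shortest proof that start and goal are equivalent.
        seen, queue = {start: None}, deque([start])
        while queue:
            s = queue.popleft()
            if s == goal:
                path = []
                while s is not None:
                    path.append(s)
                    s = seen[s]
                return path[::-1]
            for lhs, rhs in rules:
                i = s.find(lhs)
                while i != -1:
                    t = s[:i] + rhs + s[i + len(lhs):]
                    if len(t) <= max_len and t not in seen:
                        seen[t] = s
                        queue.append(t)
                    i = s.find(lhs, i + 1)
        return None

    print(prove("AAB", "BA"))   # ['AAB', 'AB', 'BA']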

But how long do the proofs need to be? Well, here’s a picture for three axiom systems showing the network of all possible transformations.  [See A New Kind of Science, page 778.]

And the way this works, every possible path through each network corresponds to the proof of some theorem. Well, the point is that the shortest path from one particular string to another may be really long. And that means that the theorem that the strings are equivalent has only a really long proof.

Well, when people were thinking about formalizing mathematics a century ago, they kind of assumed that in any given axiom system it'd always eventually be possible to give a proof of whether a particular statement was true or false. So it was a big shock in 1931 when Gödel's theorem showed that that wasn't true for Peano arithmetic—the standard formal axiom system for ordinary integer arithmetic. What Gödel actually did, though, was to look at the funky self-referential statement "this statement is unprovable." Well, just by what it says, this statement fairly obviously can't be proved true or false. But as such it doesn't seem like a statement in arithmetic. And Gödel's real achievement was essentially to show that arithmetic is universal, so that in particular it can encode his funky statement.

Well, for the foundations of math Gödel's theorem was a pretty big deal. But somehow in all these years it's never seemed too relevant to most of the things working mathematicians deal with. And if you needed something like Gödel's funky statement to get undecidability, that wouldn't be surprising. But here's the thing: the Principle of Computational Equivalence should be general enough to apply to systems in mathematics. And it then says that computational irreducibility—and undecidability—should actually not be rare at all.

So where are all these undecidable statements in mathematics? Well, it's been known for a while that there are integer equations—so-called Diophantine equations—about which there are undecidable statements. Here's an actual example—a Diophantine equation explicitly set up to emulate rule 110.  [See A New Kind of Science, page 786.]
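
The computational shape of the problem is easy to show. Given a Diophantine equation one can search for solutions: any solution found settles the question, but an empty search settles nothing, and if the statement is undecidable no finite search ever will. A sketch in Python, with two simple stand-in equations:

    def search(f, bound):
        # Look for nonnegative integer solutions of f(x, y) == 0 with
        # x, y <= bound. A hit settles the question; a miss settles nothing.
        for x in range(bound + 1):
            for y in range(bound + 1):
                if f(x, y) == 0:
                    return (x, y)
        return None

    print(search(lambda x, y: x*x - 2*y*y - 1, 100))  # Pell equation: finds (1, 0)
    print(search(lambda x, y: x*x - 2*y*y - 3, 100))  # provably no solutions: None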

Well, this is obviously pretty complicated, and not something that would show up every day. But what about simpler Diophantine equations? Here are a bunch.  [See A New Kind of Science, page 790.]

Well, linear Diophantine equations were cracked in antiquity. Quadratic ones around 1800. And so far another kind seems to be cracked roughly every fifty or a hundred years. But I’m guessing that that’s not going to go on. And that actually many of the currently unsolved problems in number theory will turn out to be undecidable. OK, but why has so much math successfully been done without running into undecidability? I think it’s kind of like theoretical physics: it’s tended to stick to places where there’s computational reducibility, and where its methods can make progress. But at least in recent times mathematics has prided itself on somehow being very general. So then why haven’t rule 30 and rule 110 and all the phenomena I’ve found in simple programs shown up? Well, I think part of the reason is that mathematics isn’t really quite as general as advertised. To see what it could be, one can imagine just enumerating possible axiom systems.  [See A New Kind of Science, page 812.]
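
"Cracked" here means there's a complete, fast procedure. For the linear case ax + by = c, the extended Euclidean algorithm decides solvability and produces a solution. A standard sketch in Python:

    def extended_gcd(a, b):
        # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    def solve_linear(a, b, c):
        # a*x + b*y == c is solvable iff gcd(a, b) divides c.
        g, x, y = extended_gcd(a, b)
        if c % g != 0:
            return None
        k = c // g
        return x * k, y * k

    print(solve_linear(12, 33, 9))   # (9, -3): 12*9 + 33*(-3) == 9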

And for example this shows what theorems are true for a sequence of different axiom systems. It’s like an ultimately desiccated form of mathematics: the axioms go down the left, the theorems go across the top—and every time there’s a theorem that’s true for a particular axiom system there’s a black dot. So is there something special about the actual axiom systems that get used in mathematics? Perhaps something that makes undecidability less rampant? Well, if one looks at axiom systems from textbooks, they’re usually pretty complicated. Here’s logic, for instance.  [See A New Kind of Science, page 773.]

Well, it’s been known for a hundred years that one doesn’t have to have all those operators in there. The single Nand operator—or Sheffer stroke—is enough. But the obvious axiom system with that is still pretty complicated.  [See A New Kind of Science, page 773.]
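
That Nand suffices is easy to verify mechanically: the usual operators can all be built out of it. A quick check in Python:

    from itertools import product

    def nand(x, y):
        return 1 - (x & y)

    # Build the usual operators out of Nand alone...
    NOT = lambda x: nand(x, x)
    AND = lambda x, y: nand(nand(x, y), nand(x, y))
    OR  = lambda x, y: nand(nand(x, x), nand(y, y))

    # ...and confirm they match on every input.
    for x, y in product((0, 1), repeat=2):
        assert NOT(x) == 1 - x
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
    print("Nand generates NOT, AND, OR")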

Well, from all the intuition I’d built up about simple programs, I suspected that there should actually be a really simple axiom system for logic—probably with just one axiom. And so I just did a search for it, and lo and behold, I found it.  [See A New Kind of Science, page 773.]

And I can tell you that this is the very simplest possible axiom system for logic. Here’s the proof, by the way—needless to say generated by computer.  [See A New Kind of Science, page 810.]
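
The soundness half of that claim is easy to check by machine: the single axiom, which (assuming I've transcribed page 773 correctly) has the form ((p ∘ q) ∘ r) ∘ (p ∘ ((p ∘ r) ∘ p)) = r with ∘ read as Nand, holds under every truth assignment. The completeness half, that all of logic follows from it, is what the long computer-generated proof establishes:

    from itertools import product

    def nand(x, y):
        return 1 - (x & y)

    # The single axiom: ((p.q).r).(p.((p.r).p)) == r, with . as Nand.
    def axiom_holds(p, q, r):
        lhs = nand(nand(nand(p, q), r), nand(p, nand(nand(p, r), p)))
        return lhs == r

    assert all(axiom_holds(p, q, r)
               for p, q, r in product((0, 1), repeat=3))
    print("the axiom holds for every truth assignment")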

Well, knowing this I can say that if one just enumerates axiom systems, logic will be about the 50,000th one that one reaches. But what about all the other ones? Well, most of them are perfectly reasonable axiom systems too. They just don’t happen to be known fields of mathematics. And actually I think mathematics as it’s developed has been in a sense tremendously constrained: at some level it’s really still pretty much just various direct generalizations of the arithmetic and geometry that got studied in ancient Babylon. Well, if one starts looking at simple programs and just doing experiments one immediately sees a much wider world—a huge generalization of what mathematics has been so far.  [See A New Kind of Science, page 30.]

OK. Well let me turn now to a somewhat different topic. I want to talk about what the Principle of Computational Equivalence says about sort of a big question: our place in the universe. Well, it’s always been natural for us to think that we as humans are very special. But the history of science keeps on showing us ways in which we’re not. For example, four hundred years ago we found out that our Earth isn’t at a special place in the universe. And a century and a half ago we found out that there wasn’t anything special about the origin of our species. Well, every time we lose something in specialness, science gets more general—because it can drop another footnote that says “except in the case of humans.” But right now we still often think we’re special in our level of complexity or our computational ability. But one of the big statements that the Principle of Computational Equivalence makes is that that isn’t right. It says that there are lots of simple abstract systems—and systems in nature—that are exactly equivalent in terms of their computational sophistication. Well, one sometimes has that impression anyway—for example when one says something like “the weather has a mind of its own.” But what the Principle of Computational Equivalence now says is that, yes, fluid turbulence in the atmosphere will correspond to as sophisticated a computation as anything we do. So we’re not special that way.

Well, one of the things this has a consequence for is the search for extraterrestrial intelligence. There's sort of been an idea that if we saw a signal that was produced by a sophisticated computation then there'd be no choice but to conclude that it came from a sophisticated extraterrestrial intelligence—some sort of extraterrestrial civilization. Well, the Principle of Computational Equivalence says, no, it's actually easy to do sophisticated computations—and lots of things in nature do them. It doesn't take our whole biological development and civilization to manage it. And that's, by the way, sort of why it's so hard to distinguish random radio noise from some kind of intelligent compressed encrypted signal.

Well, OK, so where does this leave us? It's sort of interesting to think about how we interact with the ultimate limits of technology. Well, I don't have any doubt that there'll be a time—potentially quite soon—when it'll be possible to capture all the important features of human thinking in pieces of solid-state electronics. And no doubt things will get more and more efficient until everything is on an atomic scale—so that our processes of human thinking are just implemented by individual electrons whizzing around in lumps of something. Well, of course there are electrons whizzing in all sorts of complicated patterns in ordinary pieces of rock too. And what the Principle of Computational Equivalence tells us is that we can't expect the patterns made by the ones that represent human thinking to be ultimately any more sophisticated than those that occur naturally in something like a rock. There isn't any sort of abstract essence of human intelligence that one can identify. But what of course is still special about us is all our details and all our history. And in a sense it's the Principle of Computational Equivalence that shows us that that history can really add up to something. Because if everything was computationally reducible, then in a sense nothing could be achieved by history: we'd always be able to get to the same end point without all that effort.

It’s interesting what the Principle of Computational Equivalence ends up saying. It kind of encapsulates both the great strength and the great weakness of science. Because on one hand it says that all the wonders of our universe can be captured by simple rules. Yet it also says that there’s ultimately no way to know the consequences of these rules, except in effect just to watch and see how they unfold.

Well, it’s remarkable to me what’s grown from those little computer experiments I did in the early 1980s. It’s been very exciting. You know, when I started writing my big book in 1991 I thought it wouldn’t take terribly long. But I kept on looking at different areas. And I just kept on finding all this wonderful stuff. And for ten and a half years—even though I continued to be the CEO of a very active company—I pretty much disappeared. Working incredibly hard on my science.

Well, finally, in May 2002, I finished what I wanted. And I published it all in the book. It’s a big book, with a lot in it. Hundreds and hundreds of specific results. As well as big ideas. And in the notes at the back, technical stuff and lots of historical scholarship and things.

Of course, at the beginning, in true paradigm-shift fashion, there was a certain amount of wild and woolly behavior. But it’s really encouraging how things are developing. Lots of people from lots of different areas—and walks of life—getting involved. Lots of good things getting done. Actually, it’s quite overwhelming. It’s up to a paper a day being published about it. And the pace is getting quicker and quicker.

I think one can see three big threads emerging in what gets called—after the title of the book—“NKS.”

First, a new area of basic science, one that I think will one day be like a physics, or a chemistry, or a mathematics, but concerned with understanding what's out there in the computational world.

That’s a kind of “pure NKS.” And from that pure NKS, one can start to do applications. Take NKS results, and methods, and use them in science, in technology, and elsewhere. Because now one has lots of new raw material for making models of things—including perhaps our whole universe. But also lots of new directions for technology. Because now we have all these new mechanisms—not just gears and wheels, but things like rule 30 too—that give us new ways to approach constructing algorithms, or doing artificial biology, or nanotechnology, or whatever.

There’s also kind of a conceptual direction. Understanding more about the fundamental character of science—and mathematics—and about the place we have in our universe. And in a sense giving a new framework for thinking about things in general—whether for philosophy or business. And giving for example a new foundation for a basic thread in education—sort of an alternative to mathematics. Or creating new forms of art, or music, or architecture. Well, there are just so many possibilities. So many things to be done. So many things to be discovered. So much low-hanging fruit to be picked.

It's an exciting time. Sort of like a whole intellectual industry starting up. You can see some of it on the website. All sorts of material on the web. Mathematica programs to download. Conferences. A summer school. One big thing we've been working on is a systematic atlas of the computational universe—the Atlas of Simple Programs. It's all built with Mathematica. And it'll be a terabyte or so of data about simple programs, and what they do. It's a bit like a database of astronomy, or organic chemistry, or the human genome. A combination of a map of what's out there, and what seems interesting.

And there are lots of ways to use it. One can try to classify, and systematize, and do basic research. But one can also do applications. Let’s say one has some phenomenon. And one wants to find a model for it. Well, in general that’s very hard. But here’s the remarkable thing: if there’s a model that’s a simple enough program—then one can just search for it. One can look in the Atlas—look through a trillion possibilities or something—and just find a good model then and there. Nothing like that would be conceivable with traditional kinds of models—they’re much too complicated. But with simple programs it’s a different story. Maybe it’ll even work for finding a model for our whole universe. But let’s say one just has some sequence of integers. Well, then the Atlas could tell you the simplest program that can quickly make that sequence.
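
In miniature, such a search looks like this Python sketch (a toy stand-in for the Atlas itself, which of course works at vastly larger scale): enumerate all 256 elementary cellular automaton rules and keep the ones whose center column starts with the target bit sequence:

    def eca_step(rule, cells):
        cells = (0, 0) + cells + (0, 0)
        return tuple((rule >> (4*cells[i-1] + 2*cells[i] + cells[i+1])) & 1
                     for i in range(1, len(cells) - 1))

    def center_column(rule, steps):
        cells, bits = (1,), []
        for _ in range(steps):
            bits.append(cells[len(cells) // 2])
            cells = eca_step(rule, cells)
        return bits

    def find_models(target):
        # Exhaustive search over all 256 elementary rules for ones
        # whose center column starts with the target bit sequence.
        return [rule for rule in range(256)
                if center_column(rule, len(target)) == list(target)]

    # The first ten center-column bits of rule 30: the search finds
    # rule 30 itself, plus any other rule that happens to match,
    # such as its mirror image, rule 86.
    print(find_models([1, 1, 0, 1, 1, 1, 0, 0, 1, 1]))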

But the Atlas isn’t just for science. It can also be mined for technology. Finding easy ways to build complicated structures, for nanotechnology or bioengineering or whatever. Or finding algorithms: for cryptography, or compression, or distributed control, or pattern matching, or whatever. See, most of the algorithms we have today were made by explicit engineering. And at some level they’re based on pretty simple behavior. Like iteration or recursion, corresponding to repetition or nesting. But if you just systematically search the first trillion Turing machines you find lots of other things going on—and potentially lots of useful algorithms. Just out there in the computational universe. Ready to be mined. As soon as one has the tools, and understands the science.
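
Here is what the very smallest version of that kind of mining might look like, as a rough Python sketch: enumerate every 2-state, 2-color Turing machine (there are 4,096) and bucket them by a crude first measure of behavior, here how many tape cells they mark in 200 steps:

    from collections import defaultdict
    from itertools import product

    def all_machines(states=2, colors=2):
        # Every possible (state, color) -> (state', color', move) table.
        keys = list(product(range(states), range(colors)))
        outs = list(product(range(states), range(colors), (-1, +1)))
        for choice in product(outs, repeat=len(keys)):
            yield dict(zip(keys, choice))

    def marks_after(rules, steps):
        tape, state, head = defaultdict(int), 0, 0
        for _ in range(steps):
            new_state, new_color, move = rules[(state, tape[head])]
            tape[head] = new_color
            state, head = new_state, head + move
        return sum(tape.values())

    # Bucket all 4,096 machines by how many cells end up marked after
    # 200 steps, a first crude pass at sorting interesting from dull.
    histogram = defaultdict(int)
    for rules in all_machines():
        histogram[marks_after(rules, 200)] += 1
    print(sorted(histogram.items()))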

Well, there’s a lot to talk about. A lot that will play out over the course of a great many years. A whole world of NKS ideas and opportunities, just beginning.

Well, I wish I’d been able to tell you more here today. I’ve really only been able to scratch the surface. But I hope I’ve been able to communicate at least a little of what’s in that big book of mine. And what I’ve been so excited about all these years. Thank you all very much.
