# A New Kind of Science and the Future of Mathematics, continued

There's no problem with synchronization, because only one thing happens at a time. But how can it possibly be right? I certainly don't have the impression that, for example, first I'm getting updated, then you're getting updated, and so on. But how would I know? Until I'm updated I can't tell whether you've been updated or not. And if one follows this through, one realizes that all we can ever actually know about is a kind of causal network of which event influences which other event.

Here's an example of how one can go from an underlying sequence of updates to a causal network. And even though each update only affects one cell at a time, the final causal network--the partially ordered set of events--corresponds to something kind of uniform in spacetime. [See A New Kind of Science, page 489.]
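This construction can be sketched concretely. Here's a minimal toy version (my own illustration, not the book's actual construction): treat each single-cell update as an event, and link each event back to the most recent earlier events that wrote the cells it reads.

```python
def causal_network(width, steps):
    # Sequential (one-at-a-time) rule 30 updates of a row of cells, recording
    # which earlier event each new event causally depends on.
    cells = [0] * width
    cells[width // 2] = 1
    last_writer = {}      # cell index -> id of the event that last wrote it
    edges = set()         # (earlier event, later event) causal links
    event = 0
    for _ in range(steps):
        for i in range(1, width - 1):
            # this event reads cells i-1, i, i+1 and rewrites cell i
            for j in (i - 1, i, i + 1):
                if j in last_writer:
                    edges.add((last_writer[j], event))
            cells[i] = cells[i-1] ^ (cells[i] | cells[i+1])   # rule 30
            last_writer[i] = event
            event += 1
    return edges, event

edges, n_events = causal_network(width=9, steps=3)
print(n_events, "events,", len(edges), "causal links")
```

Even though the updates happen strictly one at a time, what survives is just this partial order of which event influenced which.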

OK, so how might time work in the space networks I talked about? One could imagine that there's an updating rule that just says that whenever there's a piece of network that has a particular form, it should get replaced. Like in these rules. [See A New Kind of Science, page 509.]

But there's an immediate problem: what if there are several places where a replacement could be done? Different updating orders in general lead to different causal networks. So one gets a whole tree of possible histories for the universe. And to say what actually happens in our universe, one somehow has to say which branch one's on.
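The branching is easy to see in a toy string-rewriting system (hypothetical rules of my own, not from the book): when more than one replacement applies, each choice yields a different successor, and the possible histories form a tree.

```python
# Two deliberately non-confluent rewrite rules: "AB" can become either
# just "A" or just "B", so the order of choices changes the outcome.
RULES = [("AB", "A"), ("AB", "B")]

def successors(s):
    out = set()
    for lhs, rhs in RULES:
        for i in range(len(s) - len(lhs) + 1):
            if s[i:i+len(lhs)] == lhs:
                out.add(s[:i] + rhs + s[i+len(lhs):])
    return out

# Breadth-first exploration of the whole tree of possible histories.
frontier, tree = {"AAB"}, {}
while frontier:
    nxt = set()
    for s in frontier:
        tree[s] = successors(s)
        nxt |= tree[s]
    frontier = {s for s in nxt if s not in tree}
print(tree)
```

Starting from "AAB", different branches end at the distinct strings "A", "B", and "AA": which history one is on genuinely matters here.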

But there's another possibility. It turns out that with some underlying rules it actually doesn't matter what order they're applied in: there's what I call causal invariance, and the causal network is always the same.

This is related to so-called confluence, or Church-Rosser, properties in rewrite systems; it also happens to be related to canonical forms in Mathematica. But anyway, there are conditions about avoiding overlaps, which turn out for example to allow rules based on graphs like these. [See A New Kind of Science, page 515.]

So whenever one uses just graphs like these, one gets causal invariance--and that means that in a sense there's always just a single thread of time in the universe.
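A string-rewriting analogue of causal invariance (a standard confluence example, not one of the book's network rules): the single rule "BA" -> "AB" can be applied at arbitrarily chosen sites, yet every maximal sequence of rewrites ends at the same normal form.

```python
import random

def rewrite_randomly(s, rng):
    # Repeatedly apply "BA" -> "AB" at a randomly chosen applicable site.
    while True:
        sites = [i for i in range(len(s) - 1) if s[i:i+2] == "BA"]
        if not sites:
            return s          # no site left: we've reached a normal form
        i = rng.choice(sites)
        s = s[:i] + "AB" + s[i+2:]

start = "BABBAABA"
finals = {rewrite_randomly(start, random.Random(seed)) for seed in range(20)}
print(finals)   # a single element: every updating order gives the same result
```

Twenty different random updating orders all terminate at "AAAABBBB": the choice of order washes out, which is exactly the flavor of the Church-Rosser property.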

But it turns out that this also has another, rather remarkable consequence: it implies that special relativity must hold. It's a slightly complicated story, but roughly, different updating orders correspond to different spacelike hypersurfaces. [See A New Kind of Science, page 516.]

And that's why--subject to various tricky issues of limits and averages and things--causal invariance implies relativistic invariance.

OK, well that's remarkable. But what about general relativity? I mentioned getting Ricci tensors from space networks. For general relativity, one needs spacetime Ricci tensors. But one can get those from growth rates of volumes of cones in causal networks.

And then one discovers something rather remarkable. If there's microscopic randomness, and if things stay finite dimensional, then in a suitable limit, and with various other conditions, one can show that the spacetime Ricci tensor must vanish. Which is equivalent to saying that the Einstein equations hold.
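The idea of reading geometry off growth rates of volumes can be illustrated in miniature (a toy sketch of the dimension part only, not of the Ricci tensor itself): count how many nodes of a network lie within graph distance r of a point, and see how that volume scales with r.

```python
import math
from collections import deque

def ball_volumes(neighbors, start, rmax):
    # Breadth-first search out to distance rmax; return cumulative ball sizes.
    dist, q = {start: 0}, deque([start])
    while q:
        v = q.popleft()
        if dist[v] == rmax:
            continue
        for w in neighbors(v):
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)
    return [sum(1 for d in dist.values() if d <= r) for r in range(rmax + 1)]

def grid_neighbors(v):
    # An infinite 2-D grid network as the test case.
    x, y = v
    return [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]

vols = ball_volumes(grid_neighbors, (0, 0), 16)
print(math.log2(vols[16] / vols[8]))   # close to 2: the grid is 2-dimensional
```

Doubling the radius roughly quadruples the volume, so the fitted exponent comes out near 2. Corrections to such leading growth rates are what carry the curvature information mentioned above.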

Well, that's rather remarkable: to have started with so little, but to have been able to derive something as sophisticated as that.

But, OK, what about matter, and particles, like electrons and photons? Remember that all we have in the universe is space. So what are particles? Well, here's how things can work in something like a cellular automaton. [See A New Kind of Science, page 229.]

This starts off randomly, but quickly organizes into a few persistent localized structures that act just like particles. Like here are a couple of collisions between them. [See A New Kind of Science, page 295.]

Two particles come in, and interact, and then a whole bunch of particles come out. Almost like an explicit version of a Feynman diagram.

Well, OK, so how does something like this work in a network? Here's a very simple example. Imagine we have rules that always preserve planarity. Then imagine that we introduce some lumps of nonplanarity. Well, those end up being conserved until they collide, much like particles. [See A New Kind of Science, page 527.]

And in this simple example we know a lot about how to classify the possible particle-like objects. Kuratowski's theorem tells us that in our networks single lumps of nonplanarity must be like K3,3's. But the actual nonplanarity is sort of hard to localize. A bit like in quantum mechanics.

Well, in more realistic cases it's much more complicated, and there's lots of unknown graph theory. But there are signs of all sorts of other features of quantum field theory, and gauge invariance, and so on.

It all looks extremely promising. It really seems like out there in the computational universe there may be a simple program that's a precise version of our universe. And if nothing else, the process of trying to find it is bringing up some pretty interesting issues linking geometry, computation, graph theory, recursion theory and so on.

Well, OK. I want to come back now to what really launched everything I've been talking about: the discovery that simple programs--like rule 30--can make immense complexity. So why does that happen? What's the fundamental reason? [See A New Kind of Science, page 30.]
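Rule 30 is simple enough to state in one line: each new cell is (left XOR (center OR right)). Here's a small simulation of its center column, the part that looks for all practical purposes random.

```python
def rule30_center(steps):
    # Run rule 30 from a single black cell and collect the center column.
    width = 2 * steps + 3            # wide enough that the edges never matter
    row = [0] * width
    row[width // 2] = 1
    center = [1]
    for _ in range(steps):
        row = [0] + [row[i-1] ^ (row[i] | row[i+1])
                     for i in range(1, width - 1)] + [0]
        center.append(row[width // 2])
    return center

print("".join(map(str, rule30_center(32))))
```

Despite the trivial rule and the trivial initial condition, the bit sequence that comes out shows no obvious regularity.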

Well, to answer that one needs a somewhat new conceptual framework. And the basis is to think about all processes as computations. The initial conditions are the input. And the behavior that's generated is the output.

Well, sometimes the computations are ones that we--kind of--immediately know the point of. Like here's a cellular automaton that computes the square of any number. [See A New Kind of Science, page 639.]

You give it a block of n cells, and it generates a block of n^2 cells. And here's a cellular automaton that generates the primes. [See A New Kind of Science, page 640.]

But actually any cellular automaton can be thought of as doing a computation. It just isn't necessarily a computation that we know the point of beforehand.
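Here's a concrete case of a computation one might not have known the point of in advance: rule 90 (each new cell is left XOR right), started from a single cell, turns out to compute Pascal's triangle modulo 2, so row t holds the binomial coefficients C(t, k) mod 2.

```python
from math import comb

def rule90_rows(steps):
    # Run rule 90 from a single black cell, keeping every row.
    width = 2 * steps + 3
    row = [0] * width
    row[width // 2] = 1
    rows = [row[:]]
    for _ in range(steps):
        row = [0] + [row[i-1] ^ row[i+1] for i in range(1, width - 1)] + [0]
        rows.append(row[:])
    return rows

rows = rule90_rows(8)
c = len(rows[0]) // 2
for t, row in enumerate(rows):
    # Nonzero cells sit at offsets -t, -t+2, ..., t; the values match
    # the binomial coefficients C(t, k) taken mod 2.
    assert [row[c - t + 2*k] for k in range(t + 1)] == \
           [comb(t, k) % 2 for k in range(t + 1)]
print("rule 90 row 8 =", "".join(map(str, rows[8])))
```

Nobody fed the rule any arithmetic; the binomial coefficients are just what this particular computation happens to produce.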

OK, so we have all sorts of systems and they do all sorts of computations. But what general statements can we make? How do all these computations compare?

Well, we might have thought that to do a different kind of computation we'd always need a different kind of system. But the remarkable idea that's now about 70 years old is that no, that's not true. Instead, it's possible to have a universal system that can do any computation if fed the right input. And of course that's been a pretty important idea. Because it's the idea that makes software possible, and it's the idea that launched the computer revolution.

Well, here's a universal cellular automaton. [See A New Kind of Science, page 645.]

That with different initial conditions can be made to act like any other cellular automaton. So here it's behaving like rule 254, which makes a simple uniform pattern. [See A New Kind of Science, page 646.]

Here it's behaving like rule 90. [See A New Kind of Science, page 647.]

And here it's behaving like rule 30. And remember, each of these pictures is of the same universal cellular automaton, with the same underlying rules. But what's happening is that by giving different initial conditions, we're effectively programming the universal cellular automaton to emulate other cellular automata. And in fact it can emulate absolutely any other cellular automaton, with rules of any size.

One might have thought that a system would only be able to emulate systems that were somehow simpler than itself. But universality says that there are fixed systems that can emulate any system, however complicated. So in a sense, once one's got universality, one's maxed out from a computational point of view-- one can do essentially any computation, however sophisticated.
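The universal cellular automaton's initial-condition encoding is far too intricate to reproduce here, but the three target behaviors it's being programmed to emulate are easy to generate directly. This generic simulator decodes any elementary rule from its rule number and runs rules 254 (uniform), 90 (nested), and 30 (complex) from a single black cell.

```python
def step(row, rule):
    # Pad with white cells, then look up each 3-cell neighborhood
    # as a bit position in the 8-bit rule number.
    r = [0, 0] + row + [0, 0]
    return [(rule >> (r[i-1] * 4 + r[i] * 2 + r[i+1])) & 1
            for i in range(1, len(r) - 1)]

def run(rule, steps):
    row = [1]
    for _ in range(steps):
        row = step(row, rule)
    return row

for rule in (254, 90, 30):
    print(rule, "".join(".#"[c] for c in run(rule, 5)))
```

Same simulator, three rule numbers, three qualitatively different behaviors; the universal automaton reproduces each of these purely through its initial conditions.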

OK, so what systems are like rule 30? Or systems we see in nature? How sophisticated are the computations that they're doing? [See A New Kind of Science, page 30.]

Well, I spent a long time thinking about this and accumulating all sorts of evidence. And I ended up concluding something that at first seems pretty surprising. I call it the Principle of Computational Equivalence. It's a very general principle. And in its roughest form what it says is this: that essentially any time the behavior of a system looks complex, it will correspond to a computation of exactly equivalent sophistication. If we see repetitive or nested behavior then it's pretty obvious that it corresponds to a simple computation. [See A New Kind of Science, page 58.]

But what the Principle of Computational Equivalence says is that when we don't see regularities like those, we're almost always seeing a process that's in a sense maximally computationally sophisticated.

Now at first that's pretty surprising. Because we might have thought that the sophistication of the computations that get done would depend on the sophistication of the rules that got put in. But the Principle of Computational Equivalence says it doesn't. And that immediately gives us a prediction. It says that even though their rules are extremely simple, systems like rule 30 should be computationally universal. [See A New Kind of Science, page 30.]

Well, normally we'd imagine that to achieve something as sophisticated as computation universality we'd somehow need sophisticated underlying rules. And certainly actual computers have CPU chips with millions of components, and so on. But the Principle of Computational Equivalence says you don't need all of that. It says that even cellular automata with very simple rules should be universal.

Well, here's one of them. It's the hundred-and-tenth elementary cellular automaton, rule 110. [See A New Kind of Science, page 229.]

It's got a very simple rule. But as you can see, it does some fairly complicated things. It's got lots of little structures running around doing things. But can one really assemble them to get something one can see is universal? Well, after a lot of painstaking work, one gets the result: that indeed rule 110 is universal!
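Rule 110 is easy to run for oneself. Here's a minimal simulation on a cyclic row of cells from a random initial condition; against the regular background texture that develops, localized structures emerge and move around, and it's structures like these that the universality construction assembles.

```python
import random

def rule110(l, c, r):
    # Rule 110: the new cell is black except for neighborhoods 111, 100, 000.
    return 0 if (l, c, r) in {(1, 1, 1), (1, 0, 0), (0, 0, 0)} else 1

rng = random.Random(0)
width = 64
row = [rng.randint(0, 1) for _ in range(width)]
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = [rule110(row[i-1], row[i], row[(i+1) % width]) for i in range(width)]
```

(Cyclic boundary conditions and the particular random seed are just conveniences for the demo; the book's pictures use much larger runs.)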

Well, that's just what the Principle of Computational Equivalence said should be true. But it's really a remarkable thing. Because it means that this little rule can in effect produce behavior that's as complex as any system. One doesn't need anything like a whole computer CPU to do universal computation. One just needs this little rule.

And that has some very important consequences, for example in thinking about nature. Because we wouldn't expect to find whole computer CPUs just lying around in nature, but we definitely can expect to find things like rule 110. And that means for example that lots of everyday systems are likely to be universal. By the way, in the past this was the simplest known universal Turing machine.

But actually this much simpler Turing machine is also universal. [See A New Kind of Science, page 707.]

And I suspect this is the very simplest possible one that's universal. [See A New Kind of Science, page 709.]