Well, there's a lot to say about what kind of a thing the Principle of Computational Equivalence is, and what it means. One thing it does is to make Church's thesis definite by saying that there really is a hard upper limit on the computations that can be done in our universe. But the place where it really starts to get teeth is when it says that not only is there an upper limit--but that limit is actually reached most of the time. The very simplest rules will just give simple behavior--say repetitive or nested. But what the Principle of Computational Equivalence says is that if one looks at just a few more rules, one will suddenly cross a threshold--and end up with maximal computational sophistication.
Actually the principle goes even further than that. Normally with universal computation one imagines being able to set up whatever initial conditions one wants. But the Principle of Computational Equivalence says that that's not necessary. It says that even when the initial conditions are simple, there'll still usually be maximally sophisticated computation going on. OK, so what does this all mean? Well, first it gives us a way to answer the original question of how something like rule 30 manages to seem so complex. [See A New Kind of Science, page 30.]
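To make this concrete, here's a minimal sketch in Python (not from the book) of an elementary cellular automaton. Rule 30 is run from the simplest possible initial condition, a single black cell, and the pattern it makes already looks highly complex:

```python
# Minimal sketch of an elementary cellular automaton (rule 30).
# The rule number's binary digits give the new cell value for each
# of the 8 possible (left, center, right) neighborhoods.

def step(cells, rule=30):
    """Advance one row, treating cells outside the row as white (0)."""
    padded = [0, 0] + cells + [0, 0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

def evolve(steps, rule=30):
    """Evolve from a single black cell, widening the row each step."""
    row = [1]
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in evolve(15):
        print("".join("#" if c else "." for c in row).center(40))
```

Running this prints the familiar rule 30 triangle; the cells down the center column show no obvious regularity even though the rule and the initial condition are both trivially simple.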
The point is that there's always a competition between an observer and a system they're observing. And if the observer is somehow computationally more sophisticated than the system, then they can in a sense decode what the system is doing--so it'll look simple to them. But what the Principle of Computational Equivalence says is that the observer will usually be exactly computationally equivalent to the system they're observing. So inevitably the behavior of the system will seem to them complex.
Well, a related consequence of the Principle of Computational Equivalence is a phenomenon I call computational irreducibility.
Let's say you know the rules and the initial conditions for a system. Well, then you can certainly work out what the system will do just by explicitly running it. But the question is whether you can somehow shortcut that process. Can you for example just work out a formula for what will happen?
That kind of computational reducibility is at the core of most traditional theoretical science. If you want to work out where the Earth in its elliptical orbit will be a million years from now, you don't have to trace a million orbits; you just have to plug a number into a formula. But the problem is: what happens when the behavior is more complex? It's easy to shortcut repetitive behavior, and fairly easy to shortcut nested behavior.
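Here's a rough illustration of that kind of shortcut, sketched in Python (my example, not the book's). Rule 250 just makes a repetitive checkerboard pattern, and there's a closed-form formula for any cell at any step, with no need to run the intermediate steps. No comparable formula is known for rule 30:

```python
# Computational reducibility, sketched: for a simple repetitive rule
# (rule 250, where each cell becomes the OR of its two neighbors),
# there is a closed-form shortcut -- cell x at step t is black exactly
# when |x| <= t and x + t is even.

def simulate(rule, steps):
    """Evolve an elementary CA from a single black cell at position 0.
    Returns the set of (t, x) pairs for the black cells."""
    cells = {0}
    black = {(0, 0)}
    for t in range(1, steps + 1):
        nxt = set()
        for x in range(-t, t + 1):
            n = (x - 1 in cells) * 4 + (x in cells) * 2 + (x + 1 in cells)
            if (rule >> n) & 1:
                nxt.add(x)
                black.add((t, x))
        cells = nxt
    return black

def rule250_formula(t, x):
    """Direct shortcut for rule 250: no need to run the earlier steps."""
    return abs(x) <= t and (x + t) % 2 == 0

if __name__ == "__main__":
    steps = 20
    simulated = simulate(250, steps)
    predicted = {(t, x) for t in range(steps + 1)
                 for x in range(-t, t + 1) if rule250_formula(t, x)}
    print("formula matches simulation:", simulated == predicted)
```

The formula answers "what is cell x at step t?" in constant time; for a computationally irreducible rule the only known method is the step-by-step simulation itself.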
But what about a case like this? There's certainly no obvious way to shortcut this, and in fact I think it's computationally irreducible: there's essentially no way to work out what the system will do by any procedure that takes less computational effort than just running the system and seeing what happens. [See A New Kind of Science, page 740.]
In traditional theoretical science, there's sort of been an idealization made that the observer is infinitely computationally powerful relative to the system they're observing. But the point is that when there's complex behavior, the Principle of Computational Equivalence says that instead the system is just as computationally sophisticated as the observer. And that's what leads to computational irreducibility. And that's in a sense why traditional theoretical science hasn't been able to make more progress where one sees complexity. There are always pockets of reducibility--like the kinds that can be represented by all those special functions of mathematical physics on our Functions Site--but there's always a core of computational irreducibility.
Well, I think computational irreducibility is a pretty important phenomenon. With some pretty interesting philosophical as well as scientific implications. It's also in a sense what's inside the phenomenon of undecidability. Look at this cellular automaton. [See A New Kind of Science, page 754.]
And ask the question: starting from a given initial condition, will the pattern that's produced eventually die out, or will it just keep going forever? In the first case, running the cellular automaton tells one that after 36 steps it dies out. In the next case, though, it takes 1017 steps to find that out. And in the next two cases, even after 10 million steps it's still not clear what's going to happen.
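The shape of that question can be sketched in a few lines of Python. (The cellular automaton in the book is a much more interesting case; for a runnable sketch I use rule 128, where a block of black cells provably shrinks and dies, alongside rule 30, which just keeps going.) The key point is that all a finite computation can ever report is "died by step t" or "still alive after the cutoff":

```python
def step(cells, rule):
    """One step of an elementary CA on the set of black-cell positions."""
    if not cells:
        return set()
    lo, hi = min(cells) - 1, max(cells) + 1
    return {
        x for x in range(lo, hi + 1)
        if (rule >> ((x - 1 in cells) * 4
                     + (x in cells) * 2
                     + (x + 1 in cells))) & 1
    }

def dies_out(initial, rule, max_steps):
    """Return the step at which the pattern dies, or None if it is
    still alive after max_steps -- which leaves the question open."""
    cells = set(initial)
    for t in range(1, max_steps + 1):
        cells = step(cells, rule)
        if not cells:
            return t
    return None

if __name__ == "__main__":
    print(dies_out(range(5), 128, 100))   # a block under rule 128 dies
    print(dies_out({0}, 30, 50))          # rule 30: no answer by step 50
```

A `None` result never settles the infinite-time question; when the evolution is computationally irreducible, raising the cutoff is the only option, and no cutoff is ever enough in general.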
And the point is that if there's computational irreducibility, there's no way to shortcut this evolution--so no finite computation can always figure out what will happen after an infinite time. And that means one has to say that it's in general formally undecidable.
Well, undecidability has been known about in mathematics and in computer science since the 1930s. But what we're seeing is that it's also relevant to natural science.
If one asks about infinite time or infinite size limits, the answers can be undecidable. Like whether a body will ever escape in a three-body problem. [See A New Kind of Science, page 973.]
Or whether there's a way to arrange some complicated molecule into a crystal below a certain temperature. Or whether some idealized biological cell line will grow forever or will eventually die out.
Well, there's another big place where I think computational irreducibility is very important--and that's in the foundations of mathematics. It may sound kind of obvious, but it's really a deep fact that mathematics is often hard to do. Yet it's based on pretty simple axioms. In fact, right here are ones for essentially all of current mathematics--from logic, to algebra, geometry, and set theory. [See A New Kind of Science, page 773.]
But even though these axioms are all quite simple, our proofs of things like the Four-Color Theorem or Fermat's Last Theorem are really long. And one can think of that as another example of computational irreducibility.
Here's an example of a simple proof. At the top are some axioms--in this case specifying equivalences in logic. And then here's a proof. It starts from one expression, then keeps on using the axioms, and eventually proves the theorem that the expression at the top is equivalent to the one at the bottom. [See A New Kind of Science, page 775.]
Well, OK, just like we made minimal models of natural systems like snowflakes, we can also think about doing that for math. So as a kind of minimal idealization of math, one can imagine that the axioms just define transformations between strings. So with the axioms at the bottom here, here are proofs of a few theorems. [See A New Kind of Science, page 776.]
But how long do the proofs need to be? Well, here's a picture for three axiom systems, showing the network of all possible transformations. And the way this works, each possible path through the network corresponds to the proof of some theorem. [See A New Kind of Science, page 778.]
Well, the point is that the shortest path from one particular string to another may be really long. And that means that the theorem that the strings are equivalent has only a really long proof.
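One can sketch this search for shortest proofs directly in Python. Here the axiom system is a made-up toy (not one from the book): each axiom is a two-way string rewrite, and a breadth-first search over the network of reachable strings returns a shortest proof, i.e. a shortest chain of rewrites:

```python
from collections import deque

def rewrites(s, rules):
    """All strings reachable from s by one application of one rule,
    applied at any position, in either direction."""
    out = set()
    for a, b in rules:
        for lhs, rhs in ((a, b), (b, a)):
            start = 0
            while (i := s.find(lhs, start)) != -1:
                out.add(s[:i] + rhs + s[i + len(lhs):])
                start = i + 1
    return out

def shortest_proof(start, goal, rules, max_len=12):
    """Breadth-first search: returns a shortest chain of strings from
    start to goal, or None. max_len bounds intermediate string length."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in rewrites(path[-1], rules):
            if nxt not in seen and len(nxt) <= max_len:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

if __name__ == "__main__":
    # A toy axiom system (illustrative only): "AB" <-> "BA", "A" <-> "AA"
    rules = [("AB", "BA"), ("A", "AA")]
    print(shortest_proof("AB", "BAA", rules))
```

Because breadth-first search explores the network level by level, the first path it finds to the goal string is guaranteed to be a shortest proof; for theorems where that shortest path is long, there simply is no short proof to be had.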
Well, when people were thinking about formalizing mathematics a century ago, they kind of assumed that in any reasonable axiom system it'd always eventually be possible to give a proof of whether a particular statement was true or false. So it was a big shock in 1931 when Gödel's theorem showed that that wasn't true for Peano arithmetic.
What Gödel actually did, though, was to look at the statement "this statement is unprovable". Well, just by what it says, the statement obviously can't consistently be proved true or false. But as such it doesn't seem like a statement in arithmetic. And Gödel's real achievement was essentially to show that arithmetic is universal, so that in particular it can encode his statement.
Well, for the foundations of math, Gödel's theorem was a pretty big deal. But somehow in all these years it's never seemed too relevant to most of the things working mathematicians deal with. And if you really needed something as funky as Gödel's statement to get undecidability, that wouldn't be surprising.
But here's the thing: the Principle of Computational Equivalence should also apply to systems in mathematics. And it then says that computational irreducibility--and undecidability--should actually not be rare at all.
So where are all these undecidable statements in mathematics? Well, one knows that there are Diophantine equations about which there are undecidable statements. Here's an explicit example--a Diophantine equation set up to emulate rule 110. [See A New Kind of Science, page 786.]
Well, this is obviously pretty complicated, and not something that would show up every day. But what about simpler Diophantine equations? Here are a bunch. Well, linear Diophantine equations were cracked in antiquity. Quadratic ones two centuries ago. And so far another kind seems to be cracked roughly every fifty or a hundred years. But I'm guessing that that's not going to go on. And that actually many of the currently unsolved problems in number theory will turn out to be undecidable. [See A New Kind of Science, page 790.]
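One way to feel the asymmetry here is a brute-force search, sketched in Python with a Pell equation as a stand-in example (this particular equation is solvable by classical methods; it's chosen only for illustration). Searching can *find* solutions, but exhausting any finite range never proves that none exist beyond it, and that one-sidedness is exactly where undecidability can hide:

```python
def search_pell(n_max):
    """Brute-force search for positive solutions of x^2 - 2*y^2 == 1.
    Finding a solution settles the question; exhausting a finite range
    never proves that no solution exists beyond it."""
    return [(x, y) for x in range(1, n_max + 1)
                   for y in range(1, n_max + 1)
                   if x * x - 2 * y * y == 1]

if __name__ == "__main__":
    print(search_pell(50))
```

For this equation there happens to be a complete classical theory of the solutions; the point is that for a general Diophantine equation, Matiyasevich-style results say no algorithm can always decide whether the search will ever succeed.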
OK, but why has so much math successfully been done without running into undecidability? I think it's kind of like theoretical physics: it's tended to stick to places where there's computational reducibility, and where its methods can make progress.
But at least in recent times mathematics has sometimes prided itself on somehow being as general as possible. So then why haven't rule 30 and rule 110 and all the phenomena I've found in simple programs showed up?
Well, I think part of the reason is that mathematics isn't really quite as general as one might think. As one way to see what it could be, one can just imagine enumerating possible axiom systems. And for example this shows what theorems are true for a sequence of different axiom systems. It's like an ultimately desiccated form of mathematics: the axioms go down the left, the theorems go across the top--and every time there's a theorem that's true for a particular axiom system there's a black dot. [See A New Kind of Science, page 812.]
So is there something special about the actual axiom systems that get used in mathematics? Perhaps something that makes undecidability less rampant? Well, if one looks at axiom systems from textbooks, they're usually pretty complicated. Here's propositional logic, for instance. [See A New Kind of Science, page 773.]
Well, actually, it's been known for a hundred years that one doesn't have to have all those operators in there. The single NAND operator--or Sheffer stroke--is enough. But the obvious axiom system with that is still pretty complicated. [See A New Kind of Science, page 773.]
Well, from my intuition from the world of simple programs, I suspected that there should actually be a really simple axiom system for logic--probably with just one axiom. And so I just did a systematic search, and lo and behold, I found one. [See A New Kind of Science, page 773.]
And I can tell you that this is the very simplest possible axiom system for logic. Well, knowing this I can say that if one just enumerates axiom systems, logic will be about the 50,000th one that one reaches. [See A New Kind of Science, page 773.]
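The single axiom reported in NKS, with "." standing for NAND, is ((a.b).c).(a.((a.c).a)) == c, and one can at least check by truth table that it holds as a Boolean identity. Here's a quick Python sketch of that check (of course, verifying the identity is the easy part; showing that this one axiom generates all of logic takes a proof-theoretic argument):

```python
from itertools import product

def nand(x, y):
    """The Sheffer stroke: NAND."""
    return not (x and y)

def axiom_holds():
    """Check ((a.b).c).(a.((a.c).a)) == c, with '.' as NAND,
    over all 8 Boolean assignments of a, b, c."""
    return all(
        nand(nand(nand(a, b), c), nand(a, nand(nand(a, c), a))) == c
        for a, b, c in product([False, True], repeat=3)
    )

if __name__ == "__main__":
    print(axiom_holds())
```

This kind of exhaustive truth-table check is also the workhorse behind the systematic search itself: candidate axioms that fail it can be discarded immediately.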
But what about all the other ones? Well, most of them are perfectly reasonable axiom systems too. They just don't happen to be known fields of mathematics.
But why not? Well, actually what I think is that mathematics has been in a sense tremendously constrained in the way it's developed. And in fact at some level it's really still just about various direct generalizations of the arithmetic and geometry that got studied in ancient Babylon. It's explored only a tiny part of what's out there in the general computational world.
But why really is that? I think a lot of it is the way generalization has been done in mathematics. Usually one has some object that satisfies certain theorems. And then one generalizes by looking for a larger class of objects that satisfy the theorems.
Well, this is a good strategy if one wants to be sure one can still prove theorems. And probably it's why undecidability hasn't been more common in math research. But it means that mathematics hasn't particularly sampled what's out there in the general world of abstract systems.
In natural science we're used to just doing experiments to find out what's there. But in mathematics, we haven't been. So we just haven't known about all this stuff.
But OK, is the stuff actually "interesting"? Well, it seems important for understanding nature. And for making technology.
But what about for math? I mean, if we're just handed an axiom system, what would make it "math-like"?
Maybe it's like defining life. It's easy to tell living systems on Earth, because they all have common historical details. But abstract definitions--from thermodynamics, self-reproduction, etc.--don't really seem to work.
So what about math? Is there a definition that's more than historical? That's one of those foundational questions with very practical consequences. Because in Mathematica we'd really like to make a sort of intermediate layer between the decision procedures for definite mathematical theories (yes, that's new Version 5 stuff), and the general symbolic language underneath.
Well, so if we look at existing math axiom systems, can we tell what's special about them? There doesn't seem to be anything in the distribution of theorems they make true. Or in the lengths of proofs one finds in them.
One might think that what's special is that the axioms are actually "about things". That somehow the constraints they define force one to be talking about some definite kind of object. But actually there are often a lot of "models" that are consistent with a given axiom system. So there's nothing special there. [See A New Kind of Science, page 796.]
In an existing field of mathematics it's interesting to do a sort of empirical metamathematics. Looking at the dependency network of known theorems. Like here's the one for Euclid's Elements. The longest proof is 30 steps--for the theorem that there are five Platonic solids. [See A New Kind of Science, page 1176.]
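The kind of question one asks of such a network can be sketched in Python. Here's a toy dependency graph (made-up labels, not Euclid's actual network), where the depth of a theorem is the length of its longest dependency chain, the analog of that 30-step chain to the Platonic solids:

```python
# A toy theorem-dependency network (hypothetical labels): each
# theorem lists the earlier results its proof cites.
DEPS = {
    "axiom1": [], "axiom2": [],
    "thmA": ["axiom1"],
    "thmB": ["axiom1", "axiom2"],
    "thmC": ["thmA", "thmB"],
    "thmD": ["thmC", "thmB"],
}

def depth(theorem):
    """Length of the longest dependency chain ending at a theorem."""
    return 1 + max((depth(d) for d in DEPS[theorem]), default=0)

if __name__ == "__main__":
    print({t: depth(t) for t in DEPS})
```

On a real theorem network the same recursion (with memoization for large graphs) gives the empirical-metamathematics statistics: how deep proofs run, and which theorems sit at the bottlenecks.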
Well, one can ask many global questions about theorem networks. Like geometrical ones. And one can also try to find precise graph properties that capture our general notions of powerful theorems, or surprising theorems--or interesting theorems.
Here are the first few hundred theorems of propositional logic, written out in a straightforward order. [See A New Kind of Science, page 817.]
The ones that are highlighted are the ones considered interesting enough to be given names in typical textbooks. Well, here's a remarkable fact. You can predict which ones those will be. It turns out they are exactly those theorems that can't be deduced from ones earlier in the list. They're the first ones that in a sense give new pieces of information.
Well, so that's a criterion for interestingness. And one can apply it to any axiom system at all. And sort of automatically begin to build up traditional math-like structure. But without the history.
Well, OK, so what's going to happen with all the stuff I've talked about here today? The big book I spent a decade writing has lots and lots of detailed results in it. That'll certainly take quite a while to absorb.
But I think three big things will emerge. First, a new kind of basic science. Like a physics, or a chemistry. Or even a mathematics. But concerned with what's out there in the computational universe. Second, a whole bunch of applications. To science. To technology. To other areas. And third, a new way of thinking. That's relevant for many things. Including for example education.
You know, it's been exciting in the year and a half since my book came out to watch some of this start to unfold. It's been tremendous to see how many talented people want to get involved.
It's a challenge to set up the infrastructure for it all. We've been building a growing website.
And making programs available.
We had a very successful NKS 2003 conference. And we're having an NKS 2004 conference in Boston in April. We had a summer school last summer. And we're having another one this year.
There's a lot to do. And I've started writing down some of the obvious open problems and projects. It's averaging more than one per page of the book. So there'll be more than a thousand in the end.
Really a lot is happening. Lots of papers being published. Particularly in what one might call Applied NKS.
In Pure NKS, a big initiative is the atlas we're building. A terabyte or so of data about what's out there in the computational universe. That one can consult and search to find models, or algorithms, or for example raw material for math. Like here's a trip into a tiny corner of the latest development version of the Atlas. We could look at the live version later. We're looking at some properties of rule 30. Simple initial conditions.
Single black cell.
State transition graphs.
And a table of these for all rules.
Can you explain those pictures? In this particular corner, there's some nice combinatorics, and group theory, involved. And there's really a lot that can be done in pure NKS with existing mathematical methods. And we're counting on mathematicians to get involved.
One of the exciting things about NKS is how broad the base of people is who can get involved. If they understand the methodology of doing good computer experiments, it's perfectly realistic for a high school student to discover something new and interesting. But there are also plenty of things that reach the frontiers of math.
I wrote my book to be very accessible. Though with lots of definite stuff, particularly in the notes. And I think that worked out well. Because it's let a tremendous range of people read it. And begin to get involved with it.
And the result right now feels a little like an intellectual analog of a new industry starting up. One of those rare exciting times when a big new direction is just getting defined.
To cover what's in the book in a serious way is probably a several-semester undertaking. So I've only been able to scratch the surface and sketch a few things here today. But I hope I've been able to give you at least some idea of what's in that big book of mine. And what at least I've been so excited about all these years. Thank you.