The Generation of Form in A New Kind of Science: Implications for Architecture and More

Variants of this talk were given at several venues in 2005:

• Yale University, School of Architecture (2/14/05)
• University of Pennsylvania, School of Design (2/15/05)
• Princeton University, School of Architecture (2/16/05)
• Pratt Institute, School of Architecture (2/17/05)
• Massachusetts Institute of Technology, School of Architecture (10/25/05)

I feel a certain affinity for architects and architecture. Not just because I’ve paid for a certain amount of building in my time. But really because for 25 years I’ve spent a lot of time doing something that’s rather an architecture type of pursuit. I’ve been doing software design, and in particular computer language design. And at least I imagine it’s sort of a rather abstract analog of designing a building: you have to create a structure that works, and that people can work well in.

The way I do it, you have to think very clearly to dig down to find the fundamental principles. And then build everything up from them—and then make sure the roof doesn’t leak.

Well, I’m happy to say that the tools I’ve built get used these days very successfully by several million people.

But, you know, one of my main motivations for building Mathematica and everything was in a sense quite personal: I wanted to use it myself. And it’s been wonderful. It’s been a bit like having the first telescope or the first microscope. You can just point it somewhere and see all this stuff that’s never been seen before. It’s great.

Well, I want to tell you a little bit about what I’ve seen. And about what it means.

You know, I’ve been working on all this for nearly 25 years. But in many ways it’s still very early days.

I think we’ve got hold of a fundamentally new paradigm. And it’s going to take a long time for it to get fully absorbed. But I think it’s going to have a pretty major impact on the science, and the technology—and perhaps even the art—of the coming century.

Well, let me start off by telling you a little about how I got into all of this. Back in the late 1970s I was a young physicist, mostly working on particle physics. But I also did some work in cosmology. And from that I got interested in the question of how structures emerge in our universe—from galaxies on down.

And I quickly realized that there was actually a quite general question: how does anything complicated get produced in nature? I mean, when we look at the natural world, it’s full of complex forms and complex behavior. It’s not just circles and squares and repetitive motion. But where does all this complexity come from? What’s its fundamental origin?

Well, if one wants to ask a fundamental question like that about nature, it’s been sort of a defining feature of the exact sciences for perhaps 300 years that one should use mathematics and mathematical equations. Because, to use Galileo’s words, the book of nature “is written in the language of mathematics.”

Well, that’s an idea that really transformed science 300 years ago. And it certainly worked well for Newton and friends in figuring out the orbits of comets—and for lots and lots of things since then.

But somehow for the more complex things one sees in nature, it’s never worked out very well. And what I think is that really one needs a new paradigm—a new kind of science.

Well, when I was first thinking about this back in 1981, it so happened that I’d just developed a big software system—that was in some ways a forerunner of Mathematica. And at the core of that system was a computer language. And to design that computer language what I’d done was to try to think about all the computations people might want to do—and then to try to identify primitives that could be strung together to build up those computations. Well, that had worked out fairly well—and in a much evolved form I think it’s worked out spectacularly well in Mathematica. And back in 1981 that gave me the idea that perhaps—just as I’d been able to find primitives for computations people want to do—I might somehow also be able to find primitives for what nature does.

And the crucial thing I realized is that those primitives don’t have to be based on traditional constructs from mathematics.

I mean, if one’s going to be able to do theoretical science, one’s going to have to assume that nature follows some kind of definite rules. But why should those rules involve only the kinds of constructs that happen to have been invented in human mathematics—numbers, exponentials, calculus, and so on? Can’t the rules somehow be more general?

Well, in the past there wouldn’t really have been any systematic way to think about such things. But now we have computers—whose programs in effect implement arbitrarily general rules. And the idea I had was that perhaps the kinds of rules that can be embodied in programs might actually be what nature is using.

But so what sorts of programs might be relevant? From knowing traditional mathematical science, I assumed that they’d have to be at least somewhat complicated. But still, I figured I should at least start by looking at very simple programs.

In practical computing we’re used to long, complicated programs specifically set up for particular tasks. But what I wanted to look at were the very simplest, shortest programs.

What do programs like that do? Well, one can have various theories about it. But really the way to find out is to do an experiment. It’s sort of the simplest possible computer experiment: run the simplest programs, and see what they do.

Well, back in 1981 I came up with particular kinds of programs that are very convenient to look at visually. And actually they’re still my favorite kind.

They’re called one-dimensional cellular automata. Here’s a simple example. It’s pretty easy to understand.

One starts with a row of cells, each either black or white.

Then the cellular automaton evolves down the page. And at each step the color of each cell is determined by a definite rule from the color of the cell and its neighbors on the step before.

Well, in this particular case the rule is really simple. It just says that a cell will be black whenever it or its neighbors were black on the row before. And what happens is that we just get a simple uniform black pattern.

Well, we can use that icon at the bottom to represent the rule—the program—that we're using. The way the icon works is that it just lists the eight possible arrangements of a cell and its neighbors. And then for each arrangement it says what the color of the cell on the next step will be.
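
To be concrete: this "black whenever it or its neighbors were black" rule is elementary rule 254, and in Mathematica one can list its eight cases and run it directly:

Thread[Tuples[{1, 0}, 3] -> IntegerDigits[254, 2, 8]]
(* the eight neighborhoods, from 111 down to 000, and the new cell color for each *)

ArrayPlot[CellularAutomaton[254, {{1}, 0}, 20]]
(* a uniformly growing black region *)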

OK, so what happens if we change the rule a bit?

Well now, instead of getting just a uniform black pattern, we get a checkerboard.

But so far none of this is terribly surprising. We’re using very simple rules, and they’re making very simple patterns.

OK, let’s try another rule.

It’s the same setup as before. But now we don’t seem to be getting any kind of simple repeating pattern.

Let’s run it a bit longer.

Well, it gets pretty intricate.

But we can see that it’s actually very regular. Just formed from identical pieces nested inside each other. A self-similar or fractal pattern.

Well, one might think that if one has a simple rule and one starts from just a single black cell, then one would inevitably have to get a pattern that’s ultimately very regular—like this. At least, back in 1982 that’s what I assumed was true.

But one day I decided to try a very systematic experiment, and just look at every one of the possible rules.

Well, there turn out to be 256 of these rules. And one can number them from 0 to 255. So here’s a picture of what the first quarter of them do.
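
In Mathematica, that scan is essentially a one-liner:

Table[ArrayPlot[CellularAutomaton[i, {{1}, 0}, 30], PlotLabel -> i], {i, 0, 63}]
(* rules 0 through 63, each grown from a single black cell *)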

The first thing one notices is that a lot of different things happen. And just changing from one rule to the next the behavior can be completely different.

Most of the rules, though, are doing fairly simple things. Single stripes. Or uniform patterns. Or sometimes nested patterns.

But take a look at rule 30. What’s it doing? Here’s a bigger picture.

There doesn’t seem to be anything repetitive—or even nested—about this. Let’s run it for more steps.

Well, now we can see that there’s a bit of regularity over on the left. But otherwise, this looks really complicated—actually kind of random.

Well, when I first saw this, I thought it might just be a problem with our visual system: that really there were regularities, but we just couldn’t see them. So I ran all sorts of elaborate mathematical and statistical tests. And what I found was that no, so far as I could tell, something like the center column of cells here really is perfectly random. In fact, it’s a terrific practical generator of random sequences.
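
Here's a rough sketch of that kind of check in Mathematica—extract the center column and look at some simple statistics:

ca = CellularAutomaton[30, {{1}, 0}, 500];  (* 500 steps; each row is 1001 cells wide *)
center = ca[[All, 501]];                    (* the column below the initial black cell *)
N[Mean[center]]                             (* comes out close to 1/2 *)
Tally[Partition[center, 2]]                 (* the four possible pairs occur about equally often *)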

Well, this is rather amazing. We have a very simple rule, and we’re starting off from a single black cell. But what we’re getting out is an incredibly complicated pattern—that seems in many ways random.

It just doesn’t seem right. We’ve put so little in, yet we’re getting so much out. It’s not what our ordinary intuition says should happen. I mean, in our everyday experience and, say, in doing engineering what we’re used to is that to make something complicated, we somehow have to start off with complicated plans or use complicated rules. But what we’re seeing here is that actually even extremely simple rules can produce incredibly complicated behavior.

Well, it took me years to come to terms with this phenomenon. And in fact, it’s gradually overturned almost everything I thought I knew about the foundations of science, and quite a bit besides. And it’s what’s led me to spend 20 years building a new intellectual structure—really a new kind of science.

Well, it’s pretty common in history that what ultimately drives the advance of science is new technology. And that’s what’s happened here. It’s really been very exciting. I guess it must be a little like when telescopes or microscopes were first invented. One could just point them somewhere, and suddenly see a whole world of new phenomena. Like moons of Jupiter, or microscopic creatures in a drop of pond water.

Well now, with Mathematica, one can just point it somewhere—and suddenly see all this stuff out there in the computational universe.

Well, let me try to give you a feeling for what it's like out there on the front lines. Let's start up Mathematica. Of course I always use the very latest experimental development version—which doesn't really exist yet. This one is probably from last night's nightly build…

Well, as I’m sure most of you know well, it of course does lots and lots of ordinary math. It has every possible thing you need to calculate stuff.

In[1]:=2+2

Out[1]= 4

Or a bigger number.

In[2]:=1000!

Out[2]= (a 2568-digit integer)

And by now it’s become the world’s largest repository of interconnected algorithms of all types. A giant interconnected web of algorithms. Pick any field that involves calculating anything: it’s used by the leading people in the field. Like here’s a bit of calculus.

In[3]:=Integrate[1/(x^10 - 1), x]

Out[3]= (a long closed-form expression involving logarithms and arctangents)

Let’s make a picture.

In[4]:=Plot[Re[Zeta[1/2 + I x]], {x, 0, 30}] (* plotting range assumed; the original input was cut off *)

Out[4]= (an oscillating curve)

In[5]:=Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}]

Out[5]= (a 3D surface plot of Sin[x y])

Of course, the core of Mathematica is its symbolic language. There’s a huge amount to say about that. But not right now.

Let’s go back to cellular automata here. And let’s start using them live to explore the computational universe. Here’s my rule 30 cellular automaton:

In[7]:=ArrayPlot[CellularAutomaton[30, {{1}, 0}, 50]] (* reconstructed; the original input was shown only as an image *)

Out[7]= (the rule 30 pattern)

Remember that its rule just involves two colors. Let’s make a picture with purple and orange:

In[8]:=ArrayPlot[CellularAutomaton[30, {{1}, 0}, 50], ColorRules -> {1 -> Purple, 0 -> Orange}] (* reconstructed *)

Out[8]= (the same pattern in purple and orange)

But one can also have three colors. With two colors there are 2^2^3 = 256 rules. With three colors there are 3^3^3 rules altogether—about 7.6 trillion.

That’s a lot.

Here are some I remember:

In[9]:= (a three-color rule, shown as an image in the original)

Out[9]= (its pattern)

In[10]:= (another three-color rule, shown as an image)

Out[10]= (its pattern)

But let’s just start exploring.

In[11]:=Table[ArrayPlot[CellularAutomaton[{10^12 + i, 3}, {{1}, 0}, {50, All}], PlotLabel -> i], {i, 20}]

Out[11]= (20 labeled cellular automaton pictures)

Sometimes I feel a bit like a naturalist, wandering the computational world and finding all these strange and wonderful creatures. It's quite amazing what's out there. If you start looking at more details—what happens with different starting conditions and so on—you could spend years just studying one of these. And even of this one specific kind of rule, there are trillions out there in the computational universe.

OK, but what about other kinds of rules? Well, here’s an obvious thing one can do. Instead of having the cells in the cellular automaton be in a line, have them be on a grid. In 2D. Here’s an example of what happens then.

All sorts of remarkable things can happen. Like here’s a rule that grows an almost perfect circle on a square grid.

And here’s one that grows a weird shape.

For a 1D cellular automaton, being able to see the time history explicitly down the page is really useful. In 2D the analog is a 3D history.

The structures can be really interesting.

But if one makes a slice through them, one can see very much the same kind of thing as in 1D.

One can also make 3D cellular automata. Here are the structures one gets from a few of the very simplest rules.

Well, OK. So far everything we’ve looked at has been a cellular automaton. So is there something special about them? What about other programs out there in the computational universe?

Like here’s a Turing machine for example.

As abstract theoretical constructs, these things are famous in theoretical computer science. But here’s a concrete example of one.

It’s got a row of cells, like a cellular automaton. But unlike a cellular automaton, only one cell gets updated at each step—but that cell moves around.

Well, one can start enumerating Turing machines just like cellular automata. And one finds that quite a few of the simplest Turing machines only do things like this.

But if one keeps going—still with very simple rules—then suddenly one sees this.

The same kind of complexity that we saw in cellular automata.


OK, so what about other kinds of programs? Well, I’ve looked at many many different kinds. Many different corners of the computational universe. For different purposes. And with different connections to other things. Like here are sequential substitution systems that rewrite strings like an iterated text editor.

Or register machines—kind of like minimal idealizations of machine language.

Or symbolic systems—like generalizations of combinators or kind of minimal idealizations of Mathematica.
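
To give a concrete taste of one of these: a sequential substitution system just replaces the first match it finds at each step—like a text editor doing a single find-and-replace over and over. Here's a minimal Mathematica sketch, with made-up rules:

NestList[StringReplace[#, {"ABA" -> "AAB", "A" -> "ABA"}, 1] &, "ABA", 8]
(* each step rewrites only the first matching substring *)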

Every time one sees the same thing: even with very simple rules—that one can represent with a very simple icon—one can get extremely complicated behavior.

One doesn’t even need rules that specify an explicit process of evolution: constraints work too.

Here are possible patterns forced by the simplest type of these tiling constraints.

And here is the first case in which one’s forced to have a non-periodic pattern; it’s kind of the minimal tiling that’s analogous to a Penrose tiling.

And here are constraints that give a kind of “crystal” that’s forced to have a rule 30 pattern.

Wherever one looks, it’s always the same basic thing: incredibly simple rules can give incredibly complicated behavior. It seems to be a very robust—and very general—phenomenon.

But how come something that fundamental hadn’t been known for ages? Well, partly it’s because one can only easily see it by doing lots of computer experiments—and it’s only with computers, and with Mathematica, that those have become easy to do. But more important is that with our ordinary intuition there just didn’t seem to be any reason to try the experiments; it seemed so obvious they wouldn’t show anything interesting.

Well now that one’s made the clear discovery that simple programs—simple rules—can produce complicated behavior, one can go back and find all sorts of hints about it from really long ago. Nearly 2500 years ago the Greeks looked, for example, at prime numbers. And there’s a fairly simple rule—a fairly simple program—for generating all the primes. But the sequence of primes, once generated, looks remarkably irregular and complicated.

There’s also a fairly simple rule for generating the digits of pi. But once generated, that sequence looks to us completely random.

And actually, there are lots of cases of this kind of thing with numbers.

Here’s what happens if you just write out successive powers of 3 in base 2.

Kind of a minimal version of a linear congruential random number generator. And already amazingly complicated. You can also make very complicated sequences from simple integer recurrences.

It’s a little trickier than with things like cellular automata, but there are ways to explicitly enumerate functions of integers—so one can explore a whole space of them. And so here for example is the very simplest so-called primitive recursive function that has complex behavior.

Well, what about other systems based on numbers? In traditional mathematical science, the very favorite kinds of systems used as models of things are so-called partial differential equations. And they’re different from everything I’ve been talking about here, because they don’t have discrete elements; they’re continuous.

Well, here are most of the standard PDEs that people usually study.

But what if we just explore the space of all possible PDEs?

Well, with Mathematica it’s easy to write down possible symbolic equations, and start searching through them. And when I did that, I quickly found these.

They’re just simple nonlinear PDEs. But even with very simple initial conditions, they end up doing all sorts of complicated things. Well, actually, it’s a little hard to tell precisely what they do. Because with a continuous system like this there’s always a problem. Because you end up having to discretize it, and without already more or less knowing the answer, it’s hard to tell whether what you’re seeing is something real.

But in a discrete system like rule 30, the bits are the bits, and one can tell one’s seeing a real phenomenon.

And actually, rule 30 is so simple to set up, that young kids can certainly do it—and there’s probably no reason the Babylonians couldn’t have done it. And I’ve sometimes wondered whether perhaps they did—and whether someday some ancient mosaic of rule 30 might be unearthed. But if you look at the motifs of ornamental art through the ages, you don’t see any rule 30s. It’s interesting to track down the details of these; I discuss them in the notes at the back of my book.

And for example, so far as I can tell, the earliest time a good nested structure shows up is 1226.

It’s in mosaics made by the Cosmati. But actually, it shows up only for a few years. Then disappears for hundreds of years. I was really excited when I found a picture of this from around 1300.

But it turns out it’s just a verse from the Koran written in square Kufic script, like this.

There’ve been a lot of near misses over the centuries. But I don’t think anything quite like rule 30 was actually ever created.

If rule 30 had been known in antiquity, I suspect a lot of ideas about science and nature would have developed rather differently.

Because as it is, it's always seemed like a big mystery how nature could manage—apparently so effortlessly—to produce so much that seems to us so complex. It's like nature has some special secret that allows it to make things that are much more complex than we as humans normally build. And often it's seemed like that must be evidence that there's some force beyond human intelligence involved.

But once one’s seen rule 30, it suggests a very different explanation. It suggests that all it takes is for things in nature to follow the rules of typical simple programs. And then it’s almost inevitable that—like in the case of rule 30—their behavior can be highly complex.

The way we as humans are used to doing engineering and to building things, we tend to operate under the constraint that we have to foresee what the things we’re building are going to do. And that means that we’ve ended up being forced to use only a very special set of programs—from a very special corner of the computational universe—that happen always to have simple foreseeable behavior. But the point is that nature is presumably under no such constraint. So that means that there’s nothing wrong with it using something like rule 30—and that way inevitably producing all sorts of complexity.

Well, there’s a huge amount one can do by abstractly exploring the computational universe. A huge intellectual structure to build—a bit like a pure mathematics. And an important source of ideas and raw material for technology—and for things like architecture.

But for natural science the crucial thing is that it gives us new kinds of models—and new fundamental mechanisms for behavior. And it gives us the intuition that in the computational universe there really can be extremely simple rules that are responsible for complex forms and phenomena we see.

Let’s take an example: snowflakes.

How do they get so intricate? What’s the fundamental mechanism?

Well, crystals grow by starting from a seed, then successively adding pieces of solid. And one can try to capture that with a simple two-dimensional cellular automaton. Imagine a grid, where every cell either has solid in it or not. Then start from a seed and have a rule that says solid will be added at any cell that’s adjacent to one that’s already solid. Here’s what one gets.

It’s an ordinary-looking faceted crystal, here on a square grid. One can do the same thing on a hexagonal grid.

But for snowflakes there’s an important effect missing here. When a piece of ice solidifies, there’s always some latent heat released. And that inhibits more ice nearby. Well, what’s the simplest way to capture that effect? Just change the cellular automaton to say that ice gets added only if there’s exactly one neighboring cell of ice. OK, so what happens then? Well here’s the answer.

These are all the stages.

And these look an awful lot like real snowflakes. It really seems like we’ve captured the basic mechanism that makes snowflakes have the shapes they do. And we get various predictions: like that big snowflakes will have little holes in them. Which indeed they do.
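
Here's a minimal sketch of that rule in Mathematica. It maps the hexagonal lattice onto a sheared square array—so the picture comes out skewed—and the grid size and number of steps are arbitrary choices:

hex = {{1, 1, 0}, {1, 0, 1}, {0, 1, 1}};  (* the six hexagonal neighbors, in sheared coordinates *)
step[a_] := MapThread[Boole[#1 == 1 || #2 == 1] &, {a, ListConvolve[hex, a, 2]}, 2];
(* a cell becomes ice if it's already ice, or if exactly one neighbor is ice *)
init = ReplacePart[ConstantArray[0, {61, 61}], {31, 31} -> 1];  (* a single seed *)
ArrayPlot[Nest[step, init, 20]]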

OK, but even though our pictures have forms that look a lot like real snowflakes, there are details that are different. But the thing one has to understand is that that’s what’s going to happen with any model.

Because the whole point of a model is to capture certain essential features of a system—and then to idealize away everything else. And depending on what one’s interested in, one may pick out different features to capture.

And so the cellular automaton model is good, for example, if one’s interested in the basic question of why snowflakes have complicated shapes—or what the distribution of shapes in some snowflake population will be. But it’s not so useful if one’s trying to figure out specifically how fast each arm will grow at a certain temperature.

I might say that among natural scientists there’s a general confusion about modeling that often seems to surface when people first hear about things like cellular automata. They’ll say: OK, it’s all very nice that cellular automata can reproduce what snowflakes do, but of course real snowflakes aren’t actually made from cellular automaton cells.

Well, the whole point of a model is that it’s supposed to be an abstract way of reproducing what a system does; it’s not supposed to be the system itself. I mean, when we have differential equations that describe how the Earth moves around the Sun, we don’t imagine that inside the Earth there are all sorts of little Mathematicas solving differential equations. Instead, it’s just that the differential equations represent—abstractly—the way the Earth moves.

And it’s exactly the same thing with models based on cellular automata: the cells and rules abstractly represent certain features of a system. And again, what abstract representation—what type of model—is best, depends on what one’s interested in. For snowflakes there are traditional differential equations that could be used. But they’re complicated and hard to solve. And if what one’s actually interested in is the basic question of why snowflakes have complicated shapes, the cellular automaton model is a much better way to get at that and to work out predictions about it.

OK, let’s take another example. Let’s talk about a wonderful generator of complex forms: fluid turbulence.

Whenever there’s an obstacle in a fast-moving fluid, the flow around it looks complicated and quite random. But where does that randomness come from?

Well, I think there are really three ways randomness can arise in a system.

The most traditional is that randomness comes from external perturbations. Like, say, a boat that moves randomly because it’s being randomly tossed around by the randomness of an ocean surface.

Another source of randomness is chaos theory. Where randomness comes from details of how a system is started.

Like with a coin toss. Where which way it’ll land depends sensitively on its initial speed. And if it was started by hand, there’ll always be a little randomness in that—so the outcome will seem random.

Well, there are more elaborate versions of this, where one’s effectively taking numbers that represent initial conditions, and successively excavating higher- and higher-order digits in them. And from the perspective of ordinary continuous mathematics, it’s a bit tricky to account for where the randomness comes from here. But in terms of programs it becomes very clear that any randomness is just randomness one put in, in the detailed pattern of digits in the initial conditions. So again one ends up saying that “randomness comes from outside the system one’s looking at.”
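
A tiny illustration of that digit excavation, using the binary shift map as an example:

x = N[1/Pi, 50];                        (* all the "randomness" is in these starting digits *)
orbit = NestList[FractionalPart[2 #] &, x, 20];
Boole[# >= 1/2] & /@ orbit              (* each step exposes the next binary digit of x *)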

OK, well. So is there any other possibility? Well, it turns out there is. Just look at rule 30.

Here one doesn’t have any randomness initially. One just has a single black cell. And one doesn’t have any subsequent input from the outside. But what happens is that the evolution of the system just intrinsically generates apparent randomness.

Well, that’s where I think most of the randomness in fluid turbulence—and a lot of other places—comes from. One can get evidence by making detailed models—that can be pretty important for practical applications.

But there’s also a general prediction. See, if the randomness comes from the environment or from details of initial conditions, it’ll inevitably be different in different runs of an experiment. But if it’s like rule 30, then it’ll always be the same every time one runs the experiment. So this says that in a carefully enough controlled experiment, the turbulence should be exactly repeatable. Apparently random. But repeatably so.

And that’s an important type of phenomenon in other systems too. Like in financial markets.

Well, OK, what about biology? That’s probably our richest everyday example of complex forms and complex behavior. But where does it all come from?

Ever since Darwin, people have figured it’s somehow got to do with adaptation and natural selection. But in reality it’s never been clear why natural selection should actually lead to much complexity at all. And that’s probably why—at least outside mainstream science—people often think there must be something else going on. But the question is: what is it?

Well, I think that actually it’s just the abstract fact—that we discovered with rule 30 and so on—that among simple programs it’s actually very easy to get complexity. Of course, the complete genetic program for a typical organism is pretty long and complicated—for us humans about the same length as the source code for Mathematica. But it’s increasingly clear that lots of the most obvious aspects of forms and patterns in biology are actually governed by rather small programs. And looking at some of the kinds of regularities that one sees in biological systems, that doesn’t seem too surprising.

But when one sees more complicated stuff, traditional intuition tends to suggest that somehow it must have been difficult to get. And with the natural selection picture, there’s sort of the idea that it must be the result of a long and difficult process of optimization—or of trying to fit into some complicated ecological niche.

Well, I actually think that that’s not where many of the most obvious examples of complexity in biology come from. Natural selection seems to be quite good at operating on small numbers of smooth parameters—lengthening one bone and shortening another, and so on. But when there’s more complexity involved, it’s very hard for natural selection to operate well. And instead, what I think one ends up seeing is much more just the outcome of typical randomly-chosen simple genetic programs. So that what we see in biology is in a sense a direct reflection of what’s out there in the computational universe.

Let me give you an example. Here are some mollusc shells with pretty complicated pigmentation patterns on them.

Well, in the past one might have assumed these must be difficult to get—the result of some sophisticated biological optimization process. But if you look at the patterns, they look incredibly similar to patterns we get from cellular automata like rule 30.

Well, in the actual shell, the pattern is laid down by a line of pigment-producing cells on the growing edge of the shell. And actually it seems that what happens can be captured rather well by a cellular automaton rule. But why one rule and not another?

If one looks at different species, one sees all sorts of different patterns.

But there are definite classes. And here’s the remarkable thing: those classes are the same classes of behavior that one sees if one looks at all possible simplest relevant cellular automata.

Well, that’s rather remarkable. It’s very much as if the molluscs of the Earth are little computers—sampling the space of possible simple programs, and then displaying the results on their shells.

You know, with all the emphasis on natural selection, one’s gotten used to the idea that there can’t be much of a fundamental theory in biology—and that practically everything we see must just reflect detailed accidents in the history of biological evolution. But what the mollusc shell example suggests is that that may not be so. And that somehow one can think of organisms as uniformly sampling a space of possible programs. So that just knowing abstractly about the space of programs will tell one about biology.

Let me give one more example. Shapes of leaves.

One might think they’re too diverse to explain in any uniform way. But actually it turns out that there’s a simple type of program that seems to capture almost all of them. It just involves successive repeated branchings.

And what’s remarkable is that the limiting shapes one gets look just like actual leaves, sometimes smooth, sometimes jagged, and so on.

Here’s a simple case where one can actually lay out all of the possible shapes that one gets.

And you can see that like with so many other simple programs, you can get all sorts of different forms just by changing the program a bit.

To be a little more sophisticated, one can actually summarize features of all possible leaves in a parameter-space set—which turns out to be a rather wonderful simpler, linear analog of the Mandelbrot set. And from the properties of this set one can deduce all sorts of features of leaves and their likely evolution.

Well, I’ve spent a long time figuring out how all sorts of biological forms get made—how they get grown. And there’s a lot to say about biology—not only at the level of macroscopic form, but also at molecular levels.

You know, if one looks at the history of biology, there’s an interesting analogy. About fifty years ago, there was all this data on genetics and heredity, and it seemed really complicated. But then along came the idea of digital information in DNA, and suddenly one could see a mechanism.

Well, there are now a bunch of people who think that my science might lead us to the same kind of thing for big issues in biology today, like aging and cancer. And that somehow by thinking in terms of simple programs, one may be able to see the right primitives, and tell what’s going on. Which of course would be very exciting.

Well, OK, let me turn for a few minutes back to physics. And in particular fundamental physics.

Now this is supposed to be an architecture talk, so I’m not going to go too deep into this. For the physicists though: read Chapter 9 of the book.

Well, so, what does the computational universe tell us about how our actual physical universe might work?

Traditional mathematical approaches have definitely done pretty well in physics. But still they haven’t been able to give us a truly fundamental theory of physics. And I think the reason is that one needs more primitives than just the ones from traditional mathematics. And now that we’ve seen some of what else can happen in the computational universe, one can’t help wondering whether perhaps all the things we see in our universe couldn’t just be the result of some particular simple program.

That’d be pretty exciting. To have a little program that’s a precise ultimate model of our universe. So that if one just runs that program long enough it’ll reproduce every single thing that happens in our universe. But, OK, what might such a program be like? Well, one thing that’s kind of inevitable is that very few familiar features of our universe will immediately be visible in the program. There just isn’t room. I mean, if the program is small, there’s just no way to fit in separate identifiable pieces that represent electrons, or gravity, or even space or time. And in fact I think that if the program’s going to be really small, it sort of has to have the very least possible structure already built in.

And for example I think a cellular automaton already has far too much structure built in. For example, it’s already got a notion of space. It’s got a whole rigid array of cells laid out in space.

And it separates the notion of space from the notion of states of cells.

And I don’t think one needs that. I mean, in ordinary physics, space is a kind of background—on top of which matter and everything exists. But I think that in an ultimate model, one in effect only needs space—one doesn’t need any other basic concepts.

Well, OK, so given that, what might space be? We normally think of space as just being something that is—not something that has any kind of underlying structure. But I think it’s a little like what happens with fluids. Our everyday experience is that something like water is a continuous fluid. But we actually know that underneath, it’s made up of lots of little discrete molecules. And I think it’s something similar with space. That at a small enough scale, space is just a huge collection of discrete points. Actually, really, a giant network.

Where all that’s specified is how each node—each point—is connected to others. I’m not going to go far into this here. There’s a lot more in the book.

It’s a subtle business. With some fascinating modern math. But let me just say that when there are enough points, there’s effectively an emergent geometry. And one can get flat or curved space, in a definite number of dimensions, like three.

Well, what about time? In a cellular automaton there’s a global clock. But here it’s much more subtle. The network might evolve by updating pieces with rules like these.

So perhaps in some simple case you get behavior like this.

But it turns out to be really hard to tell what order the updates should be done in. And one might just think that somehow every order happens, so that there's a whole tree of possible histories for the universe.

But it turns out that there’s another possibility. Certain rules have a rather unexpected property that I call causal invariance.

Which implies in effect that whatever order the updates get done in, one always gets the same causal relations between actual events in the universe.

And here’s the exciting thing: not only does this make there be a definite thread of time in the universe; it also immediately implies special relativity.

Normally in physics one just puts relativity in as an assumption—but here it comes out.

And that’s not all. It looks as if in a certain class of these network systems general relativity and Einstein’s equations for gravity come out. Which is really quite amazing.

It’s subtle and sophisticated stuff. But there are also a lot of hints that from these networks, with their little replacement rules, one can get many fundamental features of quantum mechanics and quantum field theory.

But so how is one going to find the program for the universe? Well, if it was a complicated one we would have no choice but to do what we’ve normally done in physics—or in science in general—which is to sort of reverse-engineer the program from data we see. But if the program is simple enough, there’s another possibility: we can just search for it.

Just look out in the computational universe, trying rule after rule and seeing if any of them is our universe.

At first, that seems completely crazy. But if the rule really is simple, it’s not crazy at all. Of course it’s not easy. It takes a lot of technology. Algorithms. New methods of automation. And you can watch for those things as they spin off into future versions of Mathematica.

It’s going to be fascinating—and perhaps humbling—to see just where our universe is. The hundredth rule? Or the millionth? Or the quintillionth? But I’m increasingly optimistic that this is all really going to work. And that eventually out there in the computational universe we’ll find our universe. With all of our physics. And that will certainly be an exciting moment for science.

Well, OK. I want to come back now to the original discovery that really launched everything I’ve been talking about: the discovery that out in the computational universe even simple programs—like rule 30—can produce immensely complex behavior.

So why does that happen? What’s the fundamental reason? To answer that one needs to set up a somewhat new conceptual framework.

And the basis of that is to think about all processes as computations. The initial conditions for a system are the input. And the behavior that’s generated is the output. Well, sometimes the computations are ones that we—kind of—immediately know the point of. Like here’s a cellular automaton that computes the square of any number.

You give it a block of n cells, and it generates a block of n^2 cells. And here’s a cellular automaton that generates the primes.

But actually any cellular automaton can be thought of as doing a computation. It just isn’t necessarily a computation that we know the point of beforehand.

OK, so we have all sorts of systems and they do all sorts of computations. But how do all these computations compare? Well, we might have thought that every different system would always do a completely different kind of computation. But the remarkable idea that's now about 70 years old is that no, that's not necessary. Instead, it's possible to make a universal machine that can do any computation if it's just fed the right input. And of course that's been a pretty important idea. Because it's the idea that makes software possible, and really it's the idea that launched the whole computer revolution. Though in the past it never seemed too relevant to things like natural science.

But OK, so what about all these various cellular automata like rule 30?

Or all the systems we see in nature? How sophisticated are the computations that they’re doing?

Well, I spent a long time thinking about this and accumulating all sorts of evidence. And what I ended up concluding is something that at first seems pretty surprising. I call it the Principle of Computational Equivalence. It’s a very general principle, and in its roughest form what it says is this: that essentially any time the behavior of a system looks to us complex, it will end up corresponding to a computation of exactly equivalent sophistication.

If we see behavior that’s repetitive or nested then it’s pretty obvious that it corresponds to a simple computation.

But what the Principle of Computational Equivalence says is that when we don’t see those kinds of regularities, we’re almost always dealing with a process that’s in a sense maximally computationally sophisticated.

Now at first that’s pretty surprising. Because we might have thought that the sophistication of the computations that get done would depend on the sophistication of the rules that got put in. But the Principle of Computational Equivalence says it doesn’t. And that immediately gives us a prediction. It says that even though their rules are extremely simple, systems like rule 30 should be computation universal.

Well, normally we’d imagine that to achieve something as sophisticated as computation universality we’d somehow need sophisticated underlying rules. And certainly the computers we use that are universal have CPU chips with millions of gates, and so on.

But the Principle of Computational Equivalence says you don't need all of that. It says that even cellular automata with very simple rules should be universal. Well, here's one of them. This is rule 110—the hundred-and-tenth of the elementary cellular automata I showed earlier.

It’s got that really simple rule at the bottom. But as you can see, it does some fairly complicated things. It’s got all these structures running around that seem like they might be doing logic or something.

But can one really assemble them to get something one can see is universal? Well it turns out that one can. And that means that rule 110 is indeed universal!

Well, that’s just what the Principle of Computational Equivalence said should be true. But it’s still a remarkable thing. Because it means that this little rule can in effect produce behavior that’s as complex as any system.

One doesn’t need anything like a whole computer CPU to do universal computation. One just needs this little rule. And that has some very important consequences. Particularly when it comes to thinking about nature. Because we wouldn’t expect to find whole computer CPUs just lying around in nature.

But we definitely can expect to find things with rules like rule 110. And that means for example that lots of everyday systems in nature are likely to be universal. Which is pretty important for both science and technology.

By the way, for computer science folk: for 40 years this had been the simplest known universal Turing machine.

But now we know that this one is actually universal.

And in fact I suspect that this little thing is the very simplest Turing machine that’s universal.

Well, there’s a lot to say about what the Principle of Computational Equivalence is, and what it means. One thing it does it to make Church’s thesis definite by saying that there really is a hard upper limit on the computations that can be done in our universe.

But the place where the principle really starts to get teeth is when it says that not only is there an upper limit—but that limit is actually reached most of the time. With incredibly simple rules one’ll often get just simple behavior that’s, say, repetitive or nested. But the point is that if one makes the rules even a tiny bit more complicated, then the Principle of Computational Equivalence says that one immediately crosses a threshold—and ends up with a system that’s maximally computationally sophisticated. And it also says that this should happen for lots of initial conditions—not just special ones.

OK, so what does all this mean? Well, first of all it gives us a way to answer the original question of how something like rule 30 manages to show behavior that seems so complex.

The point is that there’s always a competition between an observer and a system they’re observing.

And if the observer is somehow computationally more sophisticated than the system, then they can in a sense decode what the system is doing—so it’ll look simple to them. But what the Principle of Computational Equivalence says is that in most cases the observer will be exactly computationally equivalent to the system they’re observing. And that’s then why the behavior of the system will inevitably seem to them complex.

Well, a related consequence of the Principle of Computational Equivalence is a very important phenomenon that I call computational irreducibility. Let’s say you know the rules and the initial condition for a system. Well, then you can certainly work out what the system will do by explicitly running it. But the question is whether you can somehow shortcut that process. Can you for example just work out a formula for what will happen in the system, without ever explicitly having to trace each step? If you can, then what it means is that you can figure out what the system will do with a lot less computational effort than it takes the system itself.

And that kind of computational reducibility is at the core of most traditional theoretical science. If you want to work out where an idealized Earth will be a million years from now, you don’t have to trace all its million orbits; you just have to plug a number into a formula and get out a result. But the problem is: what happens when the behavior is more complex? If a system is repetitive—or even nested—it’s easy to shortcut things. But what about a case like this? Or lots of the other things we saw in our experiment.

There’s certainly no obvious way to shortcut this. And in fact I think it’s computationally irreducible: there’s essentially no way to work out what the system will do by any procedure that takes less computational effort than just running the system and seeing what happens.

In traditional theoretical science, there’s sort of been an idealization made that the observer is infinitely computationally powerful relative to the system they’re observing. But the point is that when there’s complex behavior, the Principle of Computational Equivalence says that instead the system is just as computationally sophisticated as the observer. And that’s what leads to computational irreducibility. And that’s in a sense why traditional theoretical science hasn’t been able to make more progress when one sees complexity. There are always pockets of reducibility where one can make progress, but there’s always a core of computational irreducibility.

Well, I think computational irreducibility is a pretty important phenomenon. With a lot of consequences—both practical and conceptual.

At a practical level, it shows us that computer simulation isn’t just convenient, but fundamentally necessary. Which puts more pressure on finding the computationally simplest underlying models.

At a more conceptual level, it gives us a new way to understand the validity—and limitations—of the Second Law of Thermodynamics.

And at a philosophical level, it finally gives us a concrete mechanism for the emergence of apparent free will from deterministic underlying laws.

Computational irreducibility is also behind the phenomenon of undecidability—originally discovered in the 1930s. Look at this cellular automaton.

With the first initial condition, we can quickly tell that it dies out. With this one, it takes 1017 steps. But what’s going to happen in the end in these cases? If there’s computational irreducibility it may take us an infinite time to find out—so it’s formally undecidable.

Well, undecidability has been known about in mathematics and in computer science for quite a long time. But with the Principle of Computational Equivalence one realizes now that it’s also relevant to natural science. If one asks questions about infinite time or infinite size limits, the answers can be undecidable. Like whether a body will ever escape in a gravitational three-body problem.

Or whether some idealized biological cell line will grow forever or will eventually die out. Or whether there’s a way to arrange some complicated molecule into a crystal below a certain temperature. Or, for that matter, whether particular idealized urban-planning rules will lead to infinite urban sprawl.

Well, there’s another big place where I think computational irreducibility is very important, and that’s in the foundations of mathematics.

It may sound kind of obvious, but it's really a deep observation about mathematics that it's often hard to do—even though it's based on pretty simple axioms. In fact, right here are the ones for essentially all of current mathematics.

But even though these axioms are simple, proofs of things like Fermat’s Last Theorem are really long. And it turns out that one can think of that as just another case of the phenomenon of computational irreducibility.

There’s a lot to say about this. And for people who are interested, it’s discussed in Chapter 12.

But let me just say a few things here. Here’s a visual representation of a simple proof in mathematics.

You start at the top. Then at each step use one of the axioms. And eventually prove the theorem that the expression at the bottom is equivalent to the one at the top.

Well, OK, so as a minimal idealization of math, imagine that the axioms just define transformations between strings. So with the axioms at the bottom here, here are proofs of a few theorems.

But how long do the proofs need to be? Well, here’s a picture for three axiom systems showing the network of all possible transformations.

And the way this works, every possible path through each network corresponds to the proof of some theorem. Well, the point is that the shortest path from one particular string to another may be really long. And that means that the theorem that the strings are equivalent has only a really long proof.
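
Here's a toy version of such a network in Mathematica—a multiway string system with made-up axioms, where every path through the growing set of strings corresponds to a proof:

axioms = {"A" -> "AB", "BA" -> "B"};   (* hypothetical string axioms *)
next[strs_] := Union[Flatten[StringReplaceList[#, axioms] & /@ strs]];
NestList[next, {"ABA"}, 4]             (* the strings reachable after successive rewrites *)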

Well, when people were thinking about formalizing mathematics a century ago, they kind of assumed that in any given axiom system it'd always eventually be possible to give a proof of whether a particular statement was true or false. So it was a big shock in 1931 when Gödel's theorem showed that that wasn't true for Peano arithmetic—the standard formal axiom system for ordinary integer arithmetic.

What Gödel actually did, though, was to look at the funky self-referential statement "this statement is unprovable." Well, just by what it says, this statement fairly obviously can't be proved true or false. But as such it doesn't seem like a statement in arithmetic. And Gödel's real achievement was essentially to show that arithmetic is universal, so that in particular it can encode his funky statement.

Well, for the foundations of math Gödel's theorem was a pretty big deal. But somehow in all these years it's never seemed too relevant to most of the things working mathematicians deal with. And if you needed something like Gödel's funky statement to get undecidability, that wouldn't be surprising. But here's the thing: the Principle of Computational Equivalence should be general enough to apply to systems in mathematics. And it then says that computational irreducibility—and undecidability—should actually not be rare at all.

So where are all these undecidable statements in mathematics? Well, it's been known for a while that there are integer equations—so-called Diophantine equations—about which there are undecidable statements.

Here’s an actual example—a Diophantine equation explicitly set up to emulate rule 110.

Well, this is obviously pretty complicated, and not something that would show up everyday. But what about simpler Diophantine equations? Here are a bunch.

Well, linear Diophantine equations were cracked in antiquity. Quadratic ones around 1800. And so far another kind seems to be cracked roughly every fifty or a hundred years. But I'm guessing that that's not going to go on. And that actually many of the currently unsolved problems in number theory will turn out to be undecidable.

OK, but why has so much math successfully been done without running into undecidability? I think it's kind of like theoretical physics: it's tended to stick to places where there's computational reducibility, and where its methods can make progress.

But at least in recent times mathematics has prided itself on somehow being very general. So then why haven't rule 30 and rule 110 and all the phenomena I've found in simple programs shown up? Well, I think part of the reason is that mathematics isn't really quite as general as advertised. To see what it could be, one can imagine just enumerating possible axiom systems.

And for example this shows what theorems are true for a sequence of different axiom systems. It's like an ultimately desiccated form of mathematics: the axioms go down the left, the theorems go across the top—and every time there's a theorem that's true for a particular axiom system there's a black dot.

So is there something special about the actual axiom systems that get used in mathematics? Perhaps something that makes undecidability less rampant? Well, if one looks at axiom systems from textbooks, they're usually pretty complicated. Here's logic, for instance.

Well, it’s been known for a hundred years that one doesn’t have to have all those operators in there. The single Nand operator—or Sheffer stroke—is enough. But the obvious axiom system with that is still pretty complicated.

Well, from all the intuition I’d built up about simple programs, I suspected that there should actually be a really simple axiom system for logic—probably with just one axiom. And so I just did a search for it, and lo and behold, I found it.

And I can tell you that this is the very simplest possible axiom system for logic. Here’s the proof, by the way—needless to say generated by computer.

Well, knowing this I can say that if one just enumerates axiom systems, logic will be about the 50,000th one that one reaches.

But what about all the other ones? Well, most of them are perfectly reasonable axiom systems too. They just don’t happen to be known fields of mathematics. And actually I think mathematics as it’s developed has been in a sense tremendously constrained: at some level it’s really still pretty much just various direct generalizations of the arithmetic and geometry that got studied in ancient Babylon. Well, if one just looks out into the computational universe, one immediately sees a much wider world—a huge generalization of what mathematics has been so far.

Well, there’s a lot to learn from the computational universe. And I want to mention one thing the Principle of Computational Equivalence says about sort of a big question: our place in the universe.

It’s always been natural for us to think that we as humans are very special. But the history of science keeps on showing us ways in which we’re not. For example, four hundred years ago we found out that our Earth isn’t at a special place in the universe. And a century and a half ago we found out that there wasn’t anything special about the origin of our species.

Well, every time we lose something in specialness, science gets more general—because it can drop another footnote that says “except in the case of humans.” But right now we still often think we’re special in our level of complexity—or our computational ability.

But one of the big statements that the Principle of Computational Equivalence makes is that that isn’t right. It says that there are lots of simple abstract systems—and systems in nature—that are exactly equivalent in terms of their computational sophistication.

Well, one sometimes has that impression anyway—for example when one says something like “the weather has a mind of its own.” But what the Principle of Computational Equivalence now says is that, yes, fluid turbulence in the atmosphere will correspond to as sophisticated a computation as anything we do. So we’re not special that way.

And that has many consequences. Like in the search for extraterrestrial intelligence. We might once have thought that if we saw a computationally sophisticated signal, it would have taken a whole extraterrestrial civilization to produce it. But now we realize that simple natural processes could suffice.

It also gives us a concrete way to discuss the concept of purpose—and when that’s the correct basis for explaining something.

You know, it’s sort of interesting to see what this all means about the ultimate limits of technology. Well, I don’t have any doubt that there’ll be a time—potentially quite soon—when it’ll be possible to capture all the important features of human thinking in pieces of solid-state electronics. And no doubt things will get more and more efficient until everything is on an atomic scale—so that our processes of human thinking are just implemented by individual electrons whizzing around in lumps of something.

Well, of course there are electrons whizzing in all sorts of complicated patterns in a piece of rock too. And what the Principle of Computational Equivalence tells us is that we can’t expect the patterns made by the ones that represent human thinking to be ultimately any more sophisticated than those that occur naturally in a rock.

There isn’t any sort of abstract essence of human intelligence that one can identify. But what of course is still special about us is all our details and all our history. And in a sense it’s the Principle of Computational Equivalence that shows us that that history can really add up to something. Because if everything was computationally reducible, then in a sense nothing could be achieved by history: we’d always be able to get to the same end point without all that effort.

It’s interesting what the Principle of Computational Equivalence ends up saying. It kind of encapsulates both the great strength and the great weakness of science. Because on one hand it says that all the wonders of our universe can be captured by simple rules. Yet it also says that there’s ultimately no way to know the consequences of these rules—except in effect just to watch and see how they unfold.

Well, it’s remarkable to me what’s grown from those little computer experiments I did in the early 1980s. It’s been very exciting. You know, when I started writing my big book in 1991 I thought it wouldn’t take terribly long. But I just kept on finding all this wonderful stuff. And even though I continued to be the CEO of a very active company—I ended up working for ten and a half years on the book.

I finally published it two and a half years ago. And of course, at the beginning, in true paradigm-shift fashion, there was a certain amount of wild and woolly behavior. But a couple of years out, it’s incredibly encouraging how things are developing. Lots of people from lots of different areas getting involved. A growing excitement in a great many areas—from linguistics to biology to urban planning to image processing. Lots of good things getting done.

Actually, it’s quite overwhelming. Every day there’s a new paper or two being published. Spread across almost every conceivable discipline. It’s certainly far, far too much for me to keep up with.

But, I think one can see three big threads emerging, in what’s now often getting called—after the title of the book—“NKS.”

First, a new area of basic science. That one day I think will be like a physics, or a chemistry, or a mathematics. But concerned with understanding what’s out there in the computational world.

That’s a kind of “pure NKS.”

And from that pure NKS, one can start to do applications. Take NKS results, and methods, and use them in science, in technology and elsewhere. Because now one has lots of new raw material for making models of things—including perhaps our whole universe. But also lots of new directions for technology. Because now we have all these new mechanisms—not just gears and wheels, but things like rule 30 too—that give us new ways to approach constructing algorithms, or doing artificial biology, or nanotechnology, or whatever.

There’s also kind of a conceptual direction. Understanding more about the fundamental character of science—and mathematics—and about the place we have in our universe. And in a sense giving a new framework for thinking about things in general—whether for philosophy or business. And giving for example a new foundation for a basic thread in education—sort of an alternative to mathematics.

It’s really an exciting time. Sort of like a whole intellectual industry starting up.

You can see some of it on the website.

All sorts of material on the web. Mathematica programs to download. A summer school we’re doing each year at Brown.

To which I encourage people to apply. A Forum. Soon probably regular web-conferenced live experiments. And a whole bunch of other things.

One particularly big thing is a systematic atlas of the computational universe—an Atlas of Simple Programs. It’s all built with Mathematica. And it’ll be a terabyte or so of data about simple programs, and what they do.

It’s a bit like a database of astronomy, or organic chemistry, or the human genome. A combination of a map of what’s out there, and what seems interesting.

And there are lots of ways to use it. One can try to classify, and systematize, and do basic research.

But one can also do applications.

Let’s say one has some phenomenon. And one wants to find a model for it. Well, in general that’s very hard. But here’s the remarkable thing: if there’s a model that’s a simple enough program—then one can just search for it. One can look in the Atlas—look through a trillion possibilities or something—and just find a good model then and there.

Nothing like that would be conceivable with traditional kinds of models—they’re much too complicated. But with simple programs it’s a different story. Maybe, as I said, it’ll even work for finding a model for our whole universe.

But let’s say one just has some sequence of integers, or a particular pattern of squares. Well, then the Atlas could tell you the simplest program that can quickly make that.
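To sketch the idea concretely in Mathematica (a toy version that searches just the 256 elementary cellular automaton rules rather than a trillion-entry Atlas; centerColumn and target are illustrative names of mine):

    (* toy model search: find every elementary CA rule whose center
       column, grown from a single black cell, matches a target sequence *)
    centerColumn[rule_, t_] :=
      Flatten[CellularAutomaton[rule, {{1}, 0}, {t, {{0}}}]];
    target = centerColumn[30, 20];   (* stand-in for observed data *)
    Select[Range[0, 255], centerColumn[#, 20] === target &]
    (* recovers rule 30, along with any rules that happen to agree,
       such as its left-right mirror image, rule 86 *)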

But the Atlas isn’t just for science. It can also be mined for technology. Finding easy ways to build complicated structures, for nanotechnology or bioengineering or, for that matter, building buildings. Or finding algorithms: for cryptography, or compression, or distributed control, or pattern matching, or whatever.

See, most of the algorithms we have today were made by explicit engineering. And at some level they’re based on pretty simple behavior. Like iteration or recursion, corresponding to repetition or nesting. And it’s the same with typical structures in, say, circuit design or mechanical engineering.

But if you just systematically search the first trillion Turing machines, you find lots of other things going on—and potentially lots of useful algorithms and structures. Just out there in the computational universe. Ready to be mined. As soon as one has the tools, and understands the science and the methodology.

Well, there’s a lot to talk about. But before I finish here I wanted to take a few minutes to talk about how all of this might relate more specifically to architecture and design.

You know, right now we use nature as an inspiration for a lot of the more complex patterns and structures we make. And I think that’s not just because it’s familiar. But also because it’s been the only general place we’ve been able to go for examples of things that are rich and complex, yet still somehow have rules that seem to make them “make sense.”

But what we now know is that there’s a vast computational universe out there, full of other examples. In a sense, artificial forms of nature. That share the essence of actual nature. But have their own rules. With their own detailed features.

And if we want to find some design or structure that meets certain criteria, we can potentially just search for it, scanning over different programs out there in the computational universe. And in a sense each program is like a different design idea. So as we search the Atlas of Simple Programs, for example, we’re kind of automating the process of getting new design ideas.

Of course, it’s a human issue what the constraints are—that define what designs we consider useful or pleasing. We can’t expect to automate everything—just to let Mathematica run and make a final design. But we can use Mathematica as a crucial tool in generating and analyzing and searching for designs. And the computational universe is a fantastically rich source of raw material.

In the past, architecture has borrowed from science—or perhaps science borrowed from architecture—a lot of traditional mathematical notions. About proportion. About ways to make polygons and polyhedra. And tessellations. And algebraic curves. And so on. But just as we’ve seen that there’s more one needs in natural science than traditional mathematics, so I suspect it will be in architecture. That those other things out there in the computational universe are important.

You know, in architecture—and in ornament—repetition is a very common theme. And in plans of buildings—and sometimes in ornament—there’s also nesting. But normally the only way more-complex forms enter is through some kind of imitation of nature. But now, from the computational universe, we have a much broader source of such forms. And already, since the book came out, there’s been a lot of interest in them: in the visual arts, in music, in architecture.

Here’s a page of cellular automata from the book displayed on a wall at the Cooper-Hewitt museum.

And there are all sorts of renderings of cellular automata that people have sent us. From simple floor tilings.

To hanging fabric art.

To abstract movies.

To interactive wall displays.

To tapestries.

To stick sculptures.

To cake decorations.

To knitted tea cozies.

To peculiar pieces like this.

And of course one can have all sorts of renderings: one can sort of map the cells in a cellular automaton onto any vocabulary. Like here’s rule 30 with the cells rendered as idealized stones.

Or abstract icons.

Those both happen to be from the Graphica books of Mathematica graphics that we published.

Or one can render the cells as elements of a building.

Or not as cells at all, but say segments of lines.

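Here, for instance, is a minimal Mathematica sketch of that kind of re-rendering (the disks are an arbitrary stand-in; any graphic, a stone, an icon, a building element, could be substituted):

    (* re-render rule 30: place a disk at every black cell of the
       evolution; Position gives the {row, column} of each live cell *)
    cells = CellularAutomaton[30, {{1}, 0}, 30];
    Graphics[Disk[{#[[2]], -#[[1]]}, 0.42] & /@ Position[cells, 1]]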

There are a lot of ways to use even just the elementary cellular automata. Let alone all the other kinds of programs out there in the computational universe.

Sometimes it’s really useful just to be able to have a simple rule that can be known and applied locally—and that people can build locally, or experience locally. Yet from which a rich and complex pattern can emerge globally.

Sometimes you’d never imagine that an aesthetically pleasing form could exist that would satisfy certain constraints. But searching in the computational universe you can find it.
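Here’s a hedged little sketch of what such a search might look like in Mathematica. The two constraints, left-right symmetry and a minimum number of distinct rows, are just stand-ins for real design criteria:

    (* toy constraint search over the 256 elementary rules: keep the
       patterns that are mirror-symmetric yet not simply repetitive *)
    pattern[rule_] := CellularAutomaton[rule, {{1}, 0}, 40];
    Select[Range[0, 255],
      With[{p = pattern[#]},
        p === Map[Reverse, p] && Length[Union[p]] > 10] &]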

And here’s something else. In the computational universe it’s easy to create original and different forms. So the economics of creating unique custom forms change completely.

In the past, if you wanted a custom pattern or form created, it had to be expensive. But once one has good tools to explore and search the computational universe, it becomes cheap.

So that means that one can in a sense “mass customize” designed objects. Creating unique patterns for displays or fabrics or wallpaper or tilings or ornament, or whatever. Even having unique original patterns be created in real time, say for videogame backgrounds, or dynamic tilings, or real-time composition.

And one can do this not only in the visual domain, but also in other ones. Particularly the auditory one. Algorithmically generating unique musical forms. Not just in a random way. But in a way that’s completely governed by rules. Very simple rules.
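As a rough sketch, using Mathematica’s built-in SoundNote (the eight-cell window and the major-scale mapping are arbitrary choices of mine), one can read successive steps of rule 30 as chords:

    (* rule-governed composition: map the live cells near the center of
       each rule 30 step onto major-scale tones, one chord per step *)
    steps = CellularAutomaton[30, {{1}, 0}, 16];   (* 17 rows of width 33 *)
    scale = {0, 2, 4, 5, 7, 9, 11, 12};            (* semitones above middle C *)
    chords = DeleteCases[
      Table[Pick[scale, row[[13 ;; 20]], 1], {row, steps}], {}];
    Sound[SoundNote[#, 0.3, "Piano"] & /@ chords]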

And by the way, in the auditory domain, there are immediate needs for unique forms. Most notably in what one can call personal signaling—for example making unique cellphone ringtones.

Well, OK. So what might the future of this kind of design look like? It’s sort of what happens when you don’t use pencils and French curves—or fixed CAD templates—but Mathematica and the Atlas of Simple Programs. Let me show a few examples. All of which I should say are kind of rough. Most of them in fact I just made.

CA Tiles

Leaf Shapes

CA Buildings


So… there are a few examples of what an NKS and Mathematica future might be like. These things are really just toys. But what’s nice is that with the technology of Mathematica you can go from here to lots of practical things very easily. Making more realistic renderings. Exporting to DXF or STL or whatever. Doing structural analysis. Modeling how people flow through a building. Or whatever.
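For instance, here’s a rough sketch of that pipeline (the extrusion scheme and the file name are placeholders of mine): extrude the live cells of rule 30 into unit blocks, and export the result as STL geometry:

    (* from pattern to geometry: one unit cube per live cell of rule 30,
       exported in STL form for downstream CAD or structural tools *)
    layers = CellularAutomaton[30, {{1}, 0}, 12];
    building = Graphics3D[
      Cuboid[{#[[2]], 0, -#[[1]]}] & /@ Position[layers, 1]];
    Export["ca-building.stl", building]   (* hypothetical file name *)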

It’s nice to see in the last few years that CAD has really taken hold in architecture. Mathematica is kind of the next logical step. Not just drawing, but computing.

And of course you can use it to do wonderful traditional things. Staying within the kind of vocabulary that traditional science and mathematics suggests.

But once one’s got the technology platform—one can go much further. Not just using the ideas and the forms of traditional science and mathematics. But doing something much more like nature seems to—and exploring the whole computational universe that’s out there.

Well, I think I should wrap up here.

I wish I’d been able to tell you more here today. I’ve really only been able to scratch the surface. But I hope I’ve been able to communicate at least a little of what’s in that big book of mine. And where it might lead. And what I’ve been so excited about all these years.

Thank you all very much.
