Biomedical Implications of A New Kind of Science

Given as the 2006 Harvey Preisler Memorial Lecture at the UMass Medical Center.

Well, I should explain that I’m not a biomedical person. I’m just a scientist and a CEO. But it’s fun to be here today, because it gives me the opportunity to reflect a bit on how the things I do might relate to biomedicine.

For about the past 25 years, I’ve been building an ambitious new kind of science. And people often tell me that the things I’m doing have a lot of implications for biomedicine.

I’m not sure I’ve understood everything they’ve said. But what I wanted to do here today is to talk a bit about my science. And then to say a little about how, as really an outsider to biomedicine, I think it might relate to things you all do.

I started out—when I was a teenager actually—as a physicist. Doing particle physics, and quantum field theory, and cosmology, and things.

And particularly from the work I did in cosmology, I got interested in a very basic question in science. Just how is it that complicated stuff gets made in the universe? Whether it’s galaxies, or snowflakes, or animals, or brains.

I mean, it seems really easy for nature to make all this complicated stuff. But how does it do it? What’s the basic mechanism?

Well, you know, for about the past three hundred years, there’s been a pretty definite idea about the way to answer fundamental science questions like this: you turn to equations and math and things. And certainly that’s a great approach for many things. In fact, I make my living creating the software that a large fraction of people in the world who do serious things with equations and math and so on use.

But what I started to realize 25 years ago is that equations and math and so on aren’t enough.

Try as one might, they just don’t explain all this complicated stuff one sees all the time in nature.

So what might? Well, here’s the big idea I had, about 25 years ago now. I realized that there’s a very basic assumption that was being made in exact science. That somehow the rules—the mechanism—for how things fundamentally work can be expressed in terms of constructs in math, like numbers and exponentials and calculus and so on, that we’ve invented. And what I realized is that we don’t have to restrict ourselves that way. That actually there are lots of rules—mechanisms—that we can think about, but that don’t happen to be easily expressible with the math we have.

Computers are what show us the way. After all, computer programs in a sense just define rules. But the point is that they can define any kind of rule, not just the kinds of rules we’ve been used to in math.

Now of course most programs we deal with every day are really complicated: thousands or more likely millions of lines of code, specifically set up for particular tasks we’re interested in. Not really things we can immediately do basic science on.

But here’s the question I first asked 25 years ago: what if we just look at very simple programs?

What if we start exploring the computational universe of possible programs, starting with the simplest ones?

Well, let me show you what happens. Here’s an example of a really simple kind of program—it’s called a cellular automaton.

[Slide 001]

It’s convenient because it’s really visual.

It’s made up of a line of cells—just theoretical cells, I’m afraid, not biological ones. Each cell is colored black or white.

And the way it works is that at each step going down the page each cell updates its color according to a definite rule. Let’s take the following rule: each cell looks at itself and its two neighbors. If any of those cells is black, then the cell becomes black.

OK, so let’s see what that does. Starting from one black cell at the top, it just makes a simple triangle pattern.

[Slide 002]

We can summarize the rule by that icon at the bottom there. That shows what will happen with each arrangement of the colors of neighboring cells.

[Slide 003]
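And just to make the mechanics concrete, here’s a minimal sketch of that update rule in Python. It’s only an illustration; the row width and number of steps are arbitrary choices.

```python
def step(cells):
    # a cell becomes black (1) if it or either of its two neighbors is black
    n = len(cells)
    return [1 if cells[(i - 1) % n] or cells[i] or cells[(i + 1) % n] else 0
            for i in range(n)]

# start from a single black cell in the middle of a row of white cells
row = [0] * 15
row[7] = 1
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Run it and the same little expanding triangle of black cells comes out.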

But now let’s ask what happens if we change the rule a bit. Let’s just change one case. See what happens.

[Slide 004]

OK. Now we get a checkerboard. Well, so far none of this is terribly surprising. We’re using a really simple rule. And we’re getting simple patterns out.

It’s kind of what we might expect would happen. But let’s go on exploring a bit in the computational universe.

[Slide 005]

OK, here’s another rule. Same setup as before, just different details about when a cell is made black and so on.

Well, the pattern we get is a bit more complicated now. It doesn’t seem just to repeat. But if we keep going a bit longer, we can see it gets very intricate.

[Slide 006]

But it’s still very regular—a nested fractal pattern.

Well, OK. But one of the great things about these simple rules is this: they’re simple enough that we can just look at all of them. Kind of exhaustively explore every part of this particular corner of the computational universe.

There are 256 of these particular simple cellular automata. Here are the first 64 of them.

[Slide 007]

All of these are the same kind of rule. But we can see in this picture that there’s a remarkable amount of diversity in what the rules do.

Many of them do pretty trivial things. You start with one black cell, and all you ever get is one black cell. Or you just get an expanding triangle of black.

But let’s take a more careful look here. Look at rule 30, for example.

Here’s a picture of it.

[Slide 008]

What’s going on? Let’s see what happens if we go further.

[Slide 009]

It’s really a remarkable thing. We just started from one little black cell at the top. And we’re just using that rule at the bottom there.

And just from that, we make this whole pattern that has a little regularity, but looks in many ways just completely random.
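If you want to reproduce this yourself, here’s a minimal sketch of rule 30 in Python. It reads each cell’s new color off the bits of the number 30, which is where the rule gets its name; the width and number of steps here are just illustrative.

```python
def ca_step(cells, rule=30):
    # the three-cell neighborhood, read as a binary number from 0 to 7,
    # picks out one bit of the rule number as the cell's new color
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

width, steps = 63, 31
row = [0] * width
row[width // 2] = 1          # one black cell at the top
for _ in range(steps):
    print("".join(".#"[c] for c in row))
    row = ca_step(row)
```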

It seems like it can’t be right. I mean, our usual intuition, from doing engineering and things, is that to make something complicated, you kind of have to put all sorts of complicated things in. But here we’re putting this tiny rule in, and we’re getting all this complicated stuff out.

Well, I think this is something pretty important. It’s kind of like our analog of Galileo’s moons of Jupiter. I mean, when Galileo first turned a telescope to the sky four hundred years ago, he saw something completely unexpected: moons orbiting Jupiter.

And from that intuition-breaking observation about mechanics grew a lot of what’s become modern mathematical science.

Well, in our time we’ve been able to explore a new frontier: the computational universe. And I must say it was pretty exciting when I first turned my sort of computational telescope to the computational universe, and found rule 30.

And in a sense it’s from what I saw with rule 30 that I’ve ended up building my whole new kind of science.

And you know, once you start looking at the computational universe, it’s teeming with things like rule 30.

[Slide 010]

There’s nothing special about the particular setup of our rule 30 cellular automaton. It doesn’t matter that all the cells operate in parallel, rather than one at a time, or that they’re all in a line.

Here are some patterns you get in two dimensions, for instance.

[Slide 011]

[Slide 012]

It doesn’t even matter that there are discrete cells. In fact, this whole phenomenon that we’ve seen in rule 30 actually happens even in traditional mathematical systems. It was just that nobody ever imagined it’d be there, so it never got looked for. And when there were little hints of it, they just didn’t seem “interesting”, because there wasn’t really a paradigm to fit them into.

And of course that’s a big thing to build, but that’s what I’ve tried to do.

Kind of the first step is to create a new kind of basic science. Kind of like a physics, or a chemistry. But concerned with what’s out there in the computational universe.

I suppose the Babylonians could in principle have done it.

[Slide 013]

Making rule 30s out of mosaic tiles. And then probably the history of science would have been very different.

But really what makes it possible to explore the computational universe is our modern computational tools. When I built our Mathematica software system, I tried to do it in the most general way possible.

[Slide 014]

And it’s been really exciting to be able to point Mathematica at the computational universe, and be able to see all this stuff out there. Which at first we can just study sort of as collectors, like doing natural history, but then start to classify. And slowly begin to find some general principles.

Well, I think we’ve now got some pretty powerful ones. Here’s perhaps the most important: I call it the Principle of Computational Equivalence.

Here’s how it works. When we look at all these systems, doing all the things they do, they have one thing in common: they’re all doing computations.

Now, sometimes we know what computations they’re doing. Like here’s a cellular automaton set up to compute the square of any number.

[Slide 015]

But any system—even rule 30—is also doing a computation.

[Slide 016]

It just doesn’t happen to be a computation that we kind of know the point of beforehand.

Well, the question is: how do these computations compare?

Now, we might think that the more complicated the rules we set up, the more complicated the computations they’d do would be. But here’s the very surprising thing I’ve learned from exploring the computational universe: that’s just not true.

Yes, if we look at all the different rules, quite a lot of them do fairly trivial things.

[Slide 017]

But once we’re looking at something like rule 30, nothing fundamentally more ever happens. Even with the incredibly simple rules of rule 30, we’ve already got enough to reach sort of the highest possible level of computational sophistication.

[Slide 018]

And this is kind of the core of the Principle of Computational Equivalence. That as soon as a system isn’t showing obviously simple behavior, it’s showing behavior that corresponds to as sophisticated a computation as any other system.

So what does this mean? Well, we know something from working with practical computers.

One might have thought that every time one wanted to do a different kind of task, one would have to get a different computer. But the critical thing that was realized about 70 years ago is that, no, that’s not true. It’s possible to have a single universal computer where, just by changing the software that’s fed in, it can be made to do any computation. And of course that idea has been pretty important in the world. Because among other things, it’s what made software possible, and what launched the computer revolution.

And now, what the Principle of Computational Equivalence tells us is that actually this idea is important in things like natural science too.

Because it predicts that even among things like rule 30 we’ll see for example universal computation. Well, we don’t yet know for sure about rule 30. But here’s another rule with the exact same setup as rule 30. Rule 110.

[Slide 019]

And after painstaking work, we know that yes, those structures that we can see running around in it can be assembled, like logic circuits, to make a universal computer.

And that’s a remarkable thing. It’s what the Principle of Computational Equivalence told us should be true. But it shows us that, yes, that little rule down there can in a sense generate as sophisticated behavior as anything.

We don’t need all that sophisticated engineering, and all those millions of gates in the CPU chips of a practical computer. To get universal computation, we just need that little rule there.

Which means we can expect to see sophisticated computation going on all around us in nature.

Here’s one consequence of that, and of the Principle of Computational Equivalence. When we look at something like rule 30, we are in a sense in a competition. It’s following its rules to make the behavior it makes.

And we’re trying to use our brains—or our statistics, or mathematics, or whatever—to try to decode what it’s doing.

And in the past we might have assumed that we’d always be vastly more sophisticated than it, so it’d be easy for us to decode it. But what the Principle of Computational Equivalence tells us is that that’s not true. Really, that little rule 30 is computationally just as sophisticated as our brains, and every kind of analysis we can do. So we can’t ever expect just to systematically decode what it’s doing. And that’s in a sense why it looks so complex to us.

Well, there’s an important consequence of this for the way we do science, and in general the way we think about things. It’s all tied up with a phenomenon I call computational irreducibility.

In traditional exact science, the big thing is computational reducibility: being able with our science to do computations that let us predict what systems will do, without doing all the computation they do. Like an idealized Earth orbiting the Sun, where we don’t have to trace a million orbits to find where the Earth will be a million years from now, we just have to plug a number into a formula, and immediately we can get the answer.

Well, out in the computational universe there certainly is computational reducibility. Like here, we can very easily predict what will happen in the end.

[Slide 021]

And even here, it’s not too hard to do that.

[Slide 022]

But what about a case like this?

[Slide 023]

How can we see what will happen? Well, I think it’s pretty much computationally irreducible. The only way to find out what will happen is essentially just to run the rule, and trace each step.

The kind of reducibility that traditional exact science relies on just isn’t possible here. We can’t find formulas for what will happen. We need to build our science in a different way.

In practice, it means we have to do simulation. We’ve always known that that can be convenient; this shows it’s fundamentally necessary. And tells us we have to be sure to have the best underlying models so it’s as efficient as possible.
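To make the contrast concrete, here’s a toy sketch: the position of an idealized orbiting planet comes out of a one-line formula, but for something like rule 30 the only general way anyone knows to get the state at step n is to grind through all n steps. It’s only a toy comparison, but it captures the distinction.

```python
import math

# Computational reducibility: jump straight to the answer with a formula.
def orbit_angle(years, period_years=1.0):
    # where an idealized planet is after any number of years, in one step
    return (2 * math.pi * years / period_years) % (2 * math.pi)

# Computational irreducibility (as far as anyone knows): no shortcut.
def rule30_cell(steps):
    # the color of the starting cell after `steps` steps of rule 30,
    # computed the only general way we know: by running every step
    width = 2 * steps + 1
    cells = [0] * width
    cells[steps] = 1
    for _ in range(steps):
        cells = [(30 >> (4 * cells[(i - 1) % width] + 2 * cells[i] + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return cells[steps]

print(orbit_angle(1_000_000))   # instant
print(rule30_cell(200))         # has to trace all 200 steps
```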

Well, OK, so how do we apply what we learn from exploring the computational universe? How do we build on the basic science of the computational universe, and apply it?

Well, one important way is by using what we see in the computational universe to understand what we see in the physical universe, in nature. And it’s really remarkable, once one starts using simple programs instead of traditional equations, how easy it becomes to make models of lots of systems in nature.

I mean, when we look at the natural world and see all that complexity in it, it seems like nature has some special secret that lets it make all that complexity. But what we’ve discovered with rule 30, and with our explorations of the computational universe, is that, no, actually in the computational universe, complexity is quite easy to make. It’s just that our traditional science models, by being set up to use all our usual mathematical and so on methods, have been set up to avoid it.

But now that we understand the core mechanism, we can start using it to make models of all sorts of things that have seemed very mysterious before.

Like here’s a simple example from physics: snowflake growth.

[Slide 024]

Well, a crystal grows by successively adding pieces of solid. And we can capture that by a simple two-dimensional cellular automaton. Say just adding a black cell, for solid, next to every black cell that’s already there. Well, if we do that, say on a hexagonal grid, here’s what we get.

[Slide 041]

It’s like a simple faceted crystal. But in snowflakes there’s another phenomenon going on. There’s a growth inhibition. Every time a piece of ice forms on the edge of the snowflake, that releases a little latent heat. Which stops ice forming right there on the next step. Well, if we just put that into our rule, here’s what we get.

[Slide 025]

It’s awfully like a real snowflake.

[Slide 026]
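Here’s a minimal sketch of that kind of inhibited growth on a hexagonal grid. I’m assuming one simple way to encode the inhibition: a cell solidifies only when exactly one of its six neighbors is already solid, so freshly grown ice blocks growth right next to it on the next step. Treat it as an illustration rather than the exact rule behind the pictures.

```python
def grow_snowflake(steps):
    # axial coordinates for a hexagonal grid: each cell has six neighbors
    neighbors = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    solid = {(0, 0)}                              # start from a single seed cell
    for _ in range(steps):
        counts = {}
        for (q, r) in solid:
            for (dq, dr) in neighbors:
                cell = (q + dq, r + dr)
                if cell not in solid:
                    counts[cell] = counts.get(cell, 0) + 1
        # growth with inhibition: a cell solidifies only if exactly one of its
        # neighbors is solid; two or more stands in for the latent-heat effect
        solid |= {cell for cell, c in counts.items() if c == 1}
    return solid

print(len(grow_snowflake(20)))   # number of solid cells after 20 steps
```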

It really seems like with our very simple model we’ve captured the essential mechanism that gives snowflakes the complicated shapes they have. And the model has all sorts of predictions, like that big snowflakes have little holes in them. Which indeed they do.

But, of course, like any model, it’s an idealization. It captures some aspects of snowflakes really well. But others it idealizes away.

So if one wanted to know precisely how fast each arm grows at a certain temperature one should use a traditional differential equation. But if one wants to know something more complicated—say, what the distribution of shapes will be in a population of snowflakes—well, then our little model is what we should use.

And you see the same kind of thing all over the place. If you’ve got a phenomenon that’s in a sense simple, and can, for example, be characterized just by giving a few numbers, well, then the traditional math approaches will probably work pretty well. But if it’s more complex, well, then all that other stuff that’s out there in the computational universe is what’s really relevant to what’s going on.

Well, in the effort to do modeling, probably the biggest prize is the universe itself. Can we find a simple model that describes our whole universe?

With our traditional intuition, we probably wouldn’t think so. I mean, how could all this complex stuff we see ever all come from just a simple rule? And in the history of physics, it’s always seemed like when we go to another level of smallness and accuracy in describing the universe, the theories get more complicated. So how could it ever end?

Well, exploring the computational universe really changes our intuition. Because all over it we see really simple rules that turn out to do arbitrarily complex things.

Now in a sense we already know that the rules for our actual universe aren’t arbitrarily complex. Because if, for example, there was a different rule for every particle in the universe, there’d be no order at all. No possibility of doing science.

But just how simple might the rules be? Are they like a computer operating system: a million lines of code? Or a thousand lines? Or three lines?

If they’re simple enough, a remarkable thing becomes possible: we can imagine just searching for them. Searching through the computational universe to see where out there is our actual universe.

Well, OK. That’s easy to say. The technicalities of actually doing it are something else. One has to go, in a sense, below space and below time. I think a good way to represent things is in terms of a network that in a sense just says how what emerges as space is connected together.

[Slide 027]

With particles, and electrons and things, being little knot-like things in this network. And with time being related to rewriting the network according to certain rules.

So here’s how it looks at the front lines of the universe hunt.

[Slide 028]

Each line shows what happens with a different rule. Sometimes it’s pretty trivial, and we can tell what we’ve got isn’t our universe. Because, say, there’s no notion of space. Or time ends almost immediately.

But only hundreds of rules in, we’re already seeing all sorts of complicated stuff going on.

So can we tell if we’ve seen our universe yet? Well, computational irreducibility really bites us. Because to find out what these little artificial universes do, we might have to trace them for as many steps as our actual universe has had.

And to do better than that, we sort of have to automate the whole history of physical science. We have to have Mathematica just pick up one of these rules, and automatically deduce the kinds of overall laws it’d give for the universe. Automatically prove all the theorems about what the rules will do, and so on.

Well, we’ve been making good progress at this. In fact, we already know some exciting things, about the way relativity theory, and gravitation, and some pieces of quantum mechanics, can come out of incredibly simple rules. And I think we’ve got good evidence that somewhere out in the computational universe is our universe.

But how close is it? How simple are its rules? We just don’t know. We don’t know if Occam’s Razor really operates. But if it does, then I think it’s pretty likely that in a limited number of years we’ll actually be able to, sort of, hold in our hands the ultimate precise rules for our universe. Which’ll be pretty exciting.

Well, OK. I said I’d talk about biomedicine. So what does the science that comes from studying the computational universe have to say about biomedicine?

Biological systems are often held up as the highest example of complexity we know. And we’ve gotten the idea that somehow they got all that complexity through a long and arduous process of evolution and natural selection. In a sense that’s seemed right, because it seems like getting their complexity must have taken lots of effort.

But from what we’ve seen in the computational universe, we now know it doesn’t need to be that way. It can actually be very easy to get complexity.

So maybe that’s how biology does it.

Well, a long time ago I started looking at different examples of what one might call morphological complexity in biology—places where the forms and patterns of biological organisms are complex—and asking how they were made.

And sure enough, the essential rules involved actually often end up being incredibly simple.

Let me show you perhaps my all-time favorite example. It’s the pigmentation patterns on mollusc shells.

[Slide 029]

If you just saw this pattern without knowing anything else, you’d probably assume that it must come from a complex process. Perhaps carefully developed and tuned for the fitness of the mollusc.

But that pattern looks awfully like something like rule 30.

And actually, the way the mollusc grows is just like a one-dimensional cellular automaton. It spirals outward, and there’s some soft tissue that goes over the lip of the shell, with a line of cells in it that lay down pigment.

Well, now you might ask what rules those cells use to determine whether to secrete pigment or not. How might we use our science to work it out?

Well, here’s a first cut. Let’s just ask what all the possible rules are. Here’s the simplest set.

[Slide 030]

You see a fair amount of diversity, though there are definite classes. But which ones are actually used by molluscs?

Well, here’s the remarkable fact. If one looks at the molluscs of the Earth, it seems like all these classes of behavior are sampled.

[Slide 031]

There are simple patterns, like stripes and spots, which lots of simple models might have predicted. But then there are also these completely unexpected complicated patterns, that might seem to sort of come out of nowhere.

But actually, we know that they’re just patterns from the computational universe, that one would quickly come across if one just sampled what’s out there.

So here’s the remarkable picture that emerges. It seems that far from these patterns being somehow carefully and elaborately developed, they’re just found almost at random from the computational universe. It’s as if the molluscs of the Earth just sample the rules in the computational universe at random, then run them and print the patterns they produce on their shells.
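Here’s a toy sketch of that idea: pick one of the 256 simple rules at random, run it along a line of pigment cells, and print whatever comes out. The random initial row and the sizes are arbitrary; it’s an illustration, not a biological model.

```python
import random

def random_shell_pattern(width=60, steps=40, seed=None):
    rng = random.Random(seed)
    rule = rng.randrange(256)                          # "sample" the computational universe
    cells = [rng.randrange(2) for _ in range(width)]   # pigment state along the growing edge
    rows = [cells]
    for _ in range(steps):
        cells = [(rule >> (4 * cells[(i - 1) % width] + 2 * cells[i] + cells[(i + 1) % width])) & 1
                 for i in range(width)]
        rows.append(cells)
    return rule, rows

rule, rows = random_shell_pattern(seed=1)
print("sampled rule", rule)
print("\n".join("".join(".#"[c] for c in row) for row in rows))
```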

Well, if that’s right, it means we can in a sense do predictive biology. We’re not stuck having to say that current molluscs are the result of all sorts of historical accidents that we’ll never have a theory for.

Just by knowing abstractly about what’s out there in the computational universe, we can instead immediately make statements about what should exist in the biological space of molluscs.

Well, I’ve found other examples where one can do this too. A simple one is shapes of mollusc shells, where differential growth can lead to all sorts of different final shapes.

[Slide 032]

But where it seems that, among the molluscs of the Earth, essentially all reasonable choices of underlying parameters get sampled.

A more ambitious example is leaf shapes.

[Slide 033]

There’s obviously lots of diversity in these. Sometimes they’re smooth, sometimes pointy. Sometimes simple, sometimes complex. But once again, once one starts thinking in terms of simple programs, one sees that there can just be one basic mechanism—one basic rule—behind all this.

It just involves successive repeated branching. And as one changes the details of the rule, one gets all these different forms.

[Slide 034]

Which seem to do really well at covering the actual forms one sees in leaves.

And given the basic model, one can start to predict what all the possible forms of leaves should be. Here’s one slice through the space.

[Slide 035]
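Here’s one very simple way to sketch that kind of repeated branching in code. The two parameters, a branch angle and a length ratio, are just an illustrative choice of “details of the rule”; varying them sweeps out a little space of leaf-like forms.

```python
import cmath

def leaf_segments(depth, angle_deg=30.0, ratio=0.7):
    # repeated branching: each segment spawns two shorter segments,
    # rotated left and right by angle_deg and scaled by ratio
    segments = []                      # (start, end) pairs as complex numbers
    turn = cmath.exp(1j * cmath.pi * angle_deg / 180.0)

    def grow(start, direction, level):
        end = start + direction
        segments.append((start, end))
        if level > 0:
            grow(end, direction * ratio * turn, level - 1)
            grow(end, direction * ratio / turn, level - 1)

    grow(0j, 1j, depth)                # start with a vertical stem
    return segments

# sweeping angle_deg and ratio traces out a whole family of forms
print(len(leaf_segments(6)))           # 127 segments in a depth-6 form
```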

And we’ve gradually been gearing up to the project of systematically collecting images of all the leaves of the Earth, and seeing how they fit in here.

I don’t know for sure, but I’ll be surprised if we don’t see the same thing as with the molluscs. That somehow the leaves of the Earth just sample all the possible cases of a particular kind of simple program.

And this kind of thing has important implications for the way one thinks about doing biology. First, it lets one imagine a predictive biology. A biology where one doesn’t just say that things are the way they are because of the details of history. But where one can say that things are the way they are because that’s the way that the abstract formal computational universe is. And where just by studying that formal abstract universe one can say things about what one should see in biology.

I have a project I’m hoping to do, about skeletons. We’ve studied them a bit. And it seems that again there are definite simple models for the forms that can grow in a skeleton.

Well, but if one can get the model straightened out then one can start asking what the possible forms of skeletons are. Well, some of them will be sort of expected—kind of the thing one sees in natural selection all the time. With one bone longer, and another one shorter, and so on. But—just like in the case of mollusc patterns and leaves—I expect that there’ll be some surprises. Cases one wouldn’t expect. Weird things. Like the stegosaurus.

We’ll see when the project gets done. But it’ll be fun if it can happen ahead of the paleontology. So one can actually predict the weird forms of dinosaur that one might find. Not just looking for the traditional “missing links” of evolutionary biology, that obviously smoothly interpolate between the forms one’s found. But instead things that correspond to links in the computational universe: programs that are near what one’s found, but whose behavior may be very different.

And, you know, from the point of view of experimental biology and biomedicine I think there’s an obvious implication here. Often, the old textbooks would say that there’s one or two variants of some particular type of cell.

But now, with better techniques and analysis, it’s pretty common for more and more new variants to be discovered. Often a bewildering collection of them, with all sorts of different specific features.

So the question is: what’s going on with them? Well, looking at the examples we’ve just seen, there’s an obvious hypothesis: perhaps all these different variants just correspond to different programs sampled from some small corner of the computational universe. With traditional intuition, one might assume that all the different forms one sees would have to have been created in complicated, qualitatively different, ways. But what we’ve learned from studying the computational universe is that that’s not the case.

Instead, all these forms can just be simple programs that differ in systematic simple ways.

Well, OK. But that raises an important basic question. Are the programs that are relevant in biology actually simple?

There are 3 billion base pairs in the human genome—sort of around the same amount of source code as is in Mathematica, or in an operating system.

But if we just look even at the macroscopic forms that biological systems generate, there’s often great regularity in them.

[Slide 037]

Many of these really do seem like they should be governed by simple programs, even though they involve billions or trillions of cells and everything.

But what about the more complicated forms? With traditional scientific intuition, one might think that these must have some complicated origin. But from what we’ve seen in the computational universe, we know this doesn’t have to be true.

And although we don’t know all the details in all cases, my guess is that in the vast majority of cases that’s what’s going on: that the basic rules, the basic mechanism, that’s at work is very simple.

One way to tell that is to make models. But how does one do that?

Well, making models is always a difficult thing. And traditionally what one’s typically had to try to do in these kinds of biomedical cases is to come up with a model that’s really close, and then try to tweak its parameters by using statistics. And if the behavior or form one’s studying is simple enough, that can sometimes work. But as soon as it’s complex, the sort of calculus idea of steady simple incremental improvement isn’t likely to work.

So what can one do then? Well, if one’s getting the model from the computational universe, and the model is simple enough, there’s that other outrageous possibility: one can just start enumerating all possible models, systematically searching for one that fits. Now in practice one has to start with kind of the right raw material. And our methodology for doing these kinds of model searches is gradually getting better.

But it’s remarkable how often just doing such a search gets one to really good models. And when what one’s modeling is complex, it tends to actually be rather easy to tell if a model is good. One’s not just trying to fit one or two numbers, which might just sort of happen to come out right.

One’s trying to fit a whole structure, and one can usually readily see if it’s right or not.
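As a toy version of that kind of search, here’s a sketch that enumerates all 256 of the simple cellular automaton rules and keeps whichever ones reproduce some observed rows of cells. The “observed” data here are made up, generated with rule 90 and then rediscovered, and of course more than one simple rule can fit short data.

```python
def matching_rules(observed):
    # observed: a list of rows of 0/1 cells; the first row is the initial condition
    width = len(observed[0])

    def step(cells, rule):
        return [(rule >> (4 * cells[(i - 1) % width] + 2 * cells[i] + cells[(i + 1) % width])) & 1
                for i in range(width)]

    for rule in range(256):                  # enumerate every candidate model
        cells = list(observed[0])
        fits = True
        for target in observed[1:]:
            cells = step(cells, rule)
            if cells != target:
                fits = False
                break
        if fits:
            yield rule                       # several rules may fit short, simple data

# made-up "observed" data: generated with rule 90, then rediscovered by search
width = 21
data = [[0] * 10 + [1] + [0] * 10]
for _ in range(8):
    r = data[-1]
    data.append([(90 >> (4 * r[(i - 1) % width] + 2 * r[i] + r[(i + 1) % width])) & 1
                 for i in range(width)])
print(list(matching_rules(data)))
```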

Now, it’s the nature of models that they’re just abstract idealizations of a system. Sometimes they actually “work inside” like the system itself. Have the same little levers and switches. And sometimes they’re just explanations and descriptions of what the system does, like the mathematical law of gravity.

And even if a model doesn’t work inside like the system it’s modeling—which is usually what happens in physics—the model can still be very useful. Because, for example, it can predict what the system will do: what will happen when it grows, how it will react to outside input, what kinds of forms it can have.

And perhaps it can show you a different way to achieve the same results as the system. A way to grow a structure that’ll be functionally like some piece of biological tissue. Or operate like some biological control system.

Of course, sometimes the model may actually work inside like the system it’s modeling. It may give one clues about what the actual levers and switches inside are.

And it’s been remarkable over the years to watch all those genetic regulatory mechanisms and signaling pathways of lots of biological systems get discovered. Because over and over again biological phenomena that seemed at least somewhat complicated have ended up getting explained by what amount to rather simple programs.

Mostly one hasn’t really built up to even the kind of morphological complexity that one sees in a snowflake. But it’s very encouraging that when one does find the mechanisms inside biological systems, they’re often simpler than one ever imagined. It kind of suggests that it makes some sense to look at simple programs when one’s thinking about biology.

And often in biology, when one finds the simple mechanism, it seems really clever, and tricky. Well, that’s the story of the computational universe too.

Innumerable times I’ve said to myself: there just can’t be a program of such-and-such a kind that does such-and-such a thing. But then I go and explore. And lo and behold, I find something. Bizarrely simple. And tricky. And achieving what it does in a really clever and unexpected way.

[Slide 038]

Well, OK, so what does this mean in general for thinking about biology? It’s interesting to look a bit at the history of how people have thought about science in general, and in particular about mechanisms for things. There was a time when in physics, for example, people assumed that there had to be an explicit mechanical explanation for everything. That nothing would move unless something pushed something which pushed something, and so on.

Well, then along came Newton and calculus and all that math. And there started to be explanations—mechanisms—that just involved saying that there was a mathematical inverse square law, and that’s just the way it is. Without anything explicitly pushing anything. Well, in biology the history has often been that there isn’t much thought about the mechanism at all. There’s sort of an assumption that there can’t be much of a theory. And all we can do is to look at what happens, and then say what the statistics of different outcomes are and so on.

But with the advent of molecular biology, a lot of that has changed. And now we’re finding some level of mechanism all the time. If we want to know how something happens, we look to see if there’s a particular molecule that’s involved. Or perhaps a particular direct interaction: binding of one molecule to another, and so on.

And of course there’ve been wonderful things figured out that way. And every year one gets to draw bigger and bigger systems biology networks, showing in a chemical sense how this thing pushes this thing which pushes this thing.

But that’s the kind of explanation that one’s normally dealing with. Perhaps there’ll be a feedback loop or two. But mostly it’s in a sense very direct stuff. The direct analog for chemical concentrations of the fact that if you push down one end of a lever, the other end will go up.

And no doubt quite a lot of important things in biology work that way. And it’s really important to find out those things. Particularly since those are things that it’s probably quite easy for us to modify with present-day engineering-style methodologies: because we can know that by putting in some other molecule—some drug—we can push the lever that was down, up, and so on.

But what about all those systems in biology that just seem really complicated right now? How should we think about those?

We can draw the detailed wiring diagrams. But are those really the point? I mean, if we don’t know what the overall architecture is, it’s really difficult to know just what aspects of the wiring we have to get right. Does it matter whether the molecules are spatially uniform in the cell? Does it matter what the time sequence of processes is, or only what the final minimum energy state is?

And of course, in the history of biomedicine there’ve been lots of times when there’s been a crucial realization about mechanism, that really changed how one thought about things.

Like when the circulation of the blood was discovered, or when the idea of feedback control systems was recognized. And of course, another example from fifty years ago was genetics. Where there was all sorts of data, and it seemed like there were lots of different complicated mechanisms at work. But where eventually along came the idea of digital information, and the genetic code in DNA. And then suddenly all those different complicated mechanisms fitted into one overall architecture, one framework from which to understand what’s going on. Which of course has all sorts of detailed modifications, but gives us a fundamental basis that has now come to dominate biomedicine.

Well, OK, as we look at what’s known about all those systems biology networks, or about the phenomena of cellular differentiation, or aging, or cancer, or neuroscience, or immunology, we might wonder: is there going to be some overarching new framework that we can fit the complexity we see into?

Well, I don’t know for sure. But my biomedical friends are always encouraging me to think that my science just might provide that framework. And since my book came out four years ago, there’s been all sorts of interesting work done by all sorts of biomedical researchers. Particularly on finding simple programs for all sorts of growth processes: of capillaries, or bone, or embryos. But also on looking at the process of folding for proteins or RNA, and the kinds of forms that can be built from them.

What will come from all of this? Well, first, I’m guessing that we’ll understand a lot more about the diversity of cellular processes.

Maybe we’ll get an atlas of cellular physiology where we’re mapping the computational universe onto the diverse actual forms of cells. When we sequence the human genome, we’re kind of finding the raw bits of the machine code for the system. But perhaps we can find a map for the algorithms. Some kind of “algomap” that shows us the different overall rules for how cells and so on are built, and operate.

In systems biology, we’re finding out lots of little subroutines for how parts of cells and so on operate. But most of our methods for finding those flowcharts for how the subroutines operate rely heavily on the fact that they can be represented as simple networks or simple block diagrams. When we look for simple correlations in microarray data, or whatever, the only programs we can discover are ones that operate in rather simple ways. In the “push one side of a lever up, and the other side goes down” kind of way.

And if biology is using other kinds of programs, well, then we need other methods. But from understanding what’s out there in the computational universe, we can get some ideas about what those methods should be. About searching the space of possible programs to find which ones fit, and so on. The details are quite different, but it sort of qualitatively reminds one of shotgun sequencing. It’s kind of outrageous, and it needs lots of computer time. But once it gets an answer, the answer is really useful.

You know, when we look at mechanisms in biology, there’s an interesting issue: about whether the states of biological systems are somehow encoded in their static structure, or are somehow dynamic.

If a biological entity is in such-and-such a state, is that because the level of some particular molecule is higher—because some particular gene is switched on, or something? Or is it something much more dynamic?

In the current molecular-biology-dominated world of biomedicine, one’s always looking for the crucial molecule, or the crucial gene. But what if what’s actually going on is that there are lots of molecules, and lots of genes, and their interactions correspond to some type of program that somehow dynamically can behave in different ways?

Let’s look at an example. Here’s one of those simple cellular automata again, rule 110.

[Slide 019]

Now, most of the time this cellular automaton sort of maintains itself in a simple dynamic equilibrium. It just makes that quite simple repetitive background pattern. But you can see that there are actually little structures running around too.

Now remember that the underlying rules here for every cell are exactly the same. There’s no special feature—no special molecule—that doesn’t exist in the background, but does exist in each structure. It’s all the same stuff. But somehow there’s something dynamically going on that lets those structures be produced.

We won’t discover it or understand it if we just grind the cellular automaton up, and feed its rules to a microarray, so to speak. We have to look at its dynamics, actually operating, to see what’s going on.

Now, you might say, let’s hope that’s not how important processes in biomedicine work. Because if it is, we’ll never be able to figure it out. Well, I don’t think one should be so pessimistic. Two good things are happening. The first is that we have lots of techniques coming online that allow us not just to measure a few numbers, but to get huge amounts of continuous data, and for example to get complete imaging of what’s going on. And the second is that we’re beginning to have a theoretical framework, a conceptual framework, for thinking about the essence of dynamic processes. With our simple programs, we have a chance of capturing the essential mechanisms, and being able to work things out, without having to trace every detail. We’re going to get an idea of what we should actually be measuring to know what’s going on, and what’s just sort of an inevitable consequence of the core mechanisms.

So that instead of saying that there’s such-and-such a gene switched on, at such-and-such a place on the genome, we’ll be talking about how there’s such-and-such an algorithm operating, at such-and-such a place in the human algomap, or whatever. And from knowing about the features of the computational universe we’ll be able to say, for example, yes, that’s a stable algorithm, or no, that algorithm has a chance to do lots of very different things.

We’ve known about some biological algorithms for a while. Like in the endocrine system, where we have differential equations for perhaps three or four or more interacting levels. Or in cardiology, where we have increasing understanding of the differential equations for heart tissue.

But these are all very traditional physics-and-math kinds of algorithms. They correspond to very specific calculus-like programs. They’re not typical of what’s out there in the computational universe. And they’re probably not typical of what’s out there in biology either. They’re just things that we’ve been able to understand with our existing methods.

Well, then there are areas like neuroscience. Where computational neuroscience has brought in neural net algorithms that are a lot more like things like cellular automata. And indeed a lot of the phenomena I’ve seen in simple programs are perfectly easy to see in model neural networks. In fact, there had even been computer experiments back in the 1950s that had seen various kinds of complexity. But nobody had paid attention. They’d just said: well, yes, there’s some kind of noise in the system. But that’s not what we’re interested in. And that’s not what we can analyze.

And indeed the vast majority of what’s been done in computational neuroscience, even today, is about the idea of taking certain input, and getting certain output. One doesn’t look at the dynamic behavior, which is where that “noise” is most obvious.

Well, I suspect that for many purposes that “noise” is the signal. It’s in the dynamics—the operation of the algorithm—that a lot of the important things are going on.

And thinking about simple programs and things points one in different directions in terms of what one might try to study, and measure. Like take vision, for example. We know that there are particular cells in the primary visual cortex that respond to particular kinds of stimuli. And each kind of cell is responding to a different kind of stimulus: center-surround spots, lines in different directions, and so on.

But if we think in terms of simple programs, what we might expect is that actually there are cells running all sorts of different programs sampled from the space of possibilities. And some of those programs will be sensitive to simple stimuli, like dots and lines. But others, well, others might be like the stegosaurus. They might be sensitive to some elaborate stimulus that we’d never think to try just on the basis of traditional “add up the Gaussians” math.

And if we go in and try to actually look for the appropriate cells, well, we’ll just think that there’s lots of diversity and complexity. And unless we understand sort of the scope of the computational universe, we won’t be able to see that there’s an overarching organization to them all.

You know, there are lots of areas where sort of the architecture of the model is an issue. And there are two themes that seem to come up a lot. They’re actually not really logically connected. But they’re both things that come from thinking about the computational universe. One is the idea of great diversity—say in the structure and operation of cells—coming from sampling similar programs in the computational universe. And the other is the idea of complicated dynamical processes going on in the execution—the operation—of those programs.

One place that I’ve looked at a little bit where both of these seem to play a role is in the immune system. There are big questions there, like where immune memory really resides, and what leads to autoimmunity, and why some kinds of autoimmunity are more common than others. And the instinct is always to look in a sense for simple static explanations. Simple static mechanisms. To say that the immune memory resides in particular cells that have some particular effect, and so on.

It’s an old idea that perhaps instead there’s some kind of dynamic network of antibodies, and anti-antibodies, and so on, and that that’s an important part of what’s happening. That idea is definitely out of favor. And instead there are supposed to be particular types of cells, all with their particular roles, and so on. But now there are all these different kinds of T cells and so on known. And it’s all becoming quite bewildering. Well, I kind of think it’s likely that really there’s an overarching architecture, where all these different kinds of cells are coming out from different programs, being built by different programs, and operating according to different programs. The immunoglobulins and T cell receptors give us different particular shapes. But there’s also diversity in the programs that the T cells are running.

And perhaps there’s something in the abstract behavior of these populations of possible programs that’s an essential mechanism for the operation of our immune systems. Not just clonal selection, but the complete control system.

I’m not sure. But I think there are some interesting possibilities. And perhaps we’ll know before too long. Because when sequencing gets cheap enough we’ll be able to go in and just start sequencing the T cell populations and so on in a person. So we can find out just how many of the trillions of possible T cells actually exist in a given individual. So we can begin to build up sort of an architectural picture of how the system operates.

But back to themes. I mentioned complex dynamic behavior, and diversity in behavior and structure.

Another big theme has to do with randomness. If one looks at current biology, there are occasional uses of calculus. There’s quite a bit of use of the idea of digital information. But there’s a lot of use of statistics.

There are a lot of times in biology where one says “there’s a certain probability for this or that happening. We’re going to make our theory based on those probabilities.”

Now, in a sense, whenever you put a probability into your model you’re admitting that your model is somehow incomplete. You’re saying: I can explain these features of my system, but these parts—well they come from somewhere else, and I’m just going to say there’s a certain probability for them to be this way or that. One just models that part of the system by saying it’s “random”, and one doesn’t know what it’s going to do.

Well, so we can ask everywhere, in biology, and in physics, where the randomness really comes from in the things we think of as random. And there are really three basic mechanisms. The first is the one that happens with, say, a boat bobbing on an ocean, or with Brownian motion. There’s no randomness in the thing one’s actually looking at: the boat or the pollen grain. The randomness is coming from the environment, from all those details of a storm that happened on the ocean a thousand miles away, and so on. So that’s the first mechanism: that the randomness comes because the system one’s looking at is continually being kicked by some kind of randomness from the outside.

Well, there’s another mechanism, that’s become famous through chaos theory. It’s the idea that instead of there being randomness continually injected into a system, there’s just randomness at the beginning. And all the randomness that one sees is just a consequence of details of the initial conditions for the system.

Like in tossing a coin.

[Slide 377]

Where once the coin is tossed there’s no randomness in which way it’ll end up. But which way it ends up depends in detail on the precise speed it had at the beginning. So if it’s started, say, by hand, one won’t be able to control that precise speed, and the final outcome will seem random.
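Here’s a little idealized toy model of that, with made-up numbers: the outcome is completely determined by the spin rate and the toss speed, but a tiny change in the spin rate flips it.

```python
def coin_outcome(spin_rate_rev_per_s, toss_speed_m_per_s=4.0, g=9.81):
    # the coin leaves the hand heads-up, spins freely, and is caught
    # at the height it was released -- completely deterministic
    flight_time = 2 * toss_speed_m_per_s / g          # time up and back down
    half_turns = int(2 * spin_rate_rev_per_s * flight_time)
    return "heads" if half_turns % 2 == 0 else "tails"

# a change of 0.3 revolutions per second is enough to flip the result
print(coin_outcome(20.0), coin_outcome(20.3))
```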

There’s randomness because there’s sort of an instability. A small perturbation in the initial conditions can lead to continuing long-term consequences for the outcome. And that phenomenon is quite common. Like here it is even in the rule 30 cellular automaton.

[Slide 215b]

But it can never be a complete explanation for randomness in a system. Because really what it’s doing is just saying that the randomness that comes out is some kind of transcription of randomness that went in, in the details of the initial conditions. But, so, can there be any other explanation for randomness? Well, yes there can be. Just look at our friend rule 30.

[Slide 016]

Here there’s no randomness going in. There’s just that one black cell. Yet the behavior that comes out looks in many respects random. In fact, the center column of the pattern, say, is really high-quality randomness: the best pseudorandom generator, even good for cryptography.

Yet none of that randomness came from outside the system. It was intrinsically generated inside the system itself. And this is a new and different phenomenon that I think is actually at the core of a lot of systems that seem random.
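Here’s a minimal sketch that just reads off that center column; it doesn’t demonstrate anything about the quality of the randomness, it only shows where the bits come from.

```python
def rule30_center_bits(n):
    # grow rule 30 from one black cell and record the color of that
    # starting cell at each step: the "center column" of the pattern
    width = 2 * n + 1
    cells = [0] * width
    cells[n] = 1
    bits = []
    for _ in range(n):
        bits.append(cells[n])
        cells = [(30 >> (4 * cells[(i - 1) % width] + 2 * cells[i] + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return bits

print("".join(str(b) for b in rule30_center_bits(64)))
```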

Like in physics, for example. Say in turbulent fluid flow.

[Slide 305]

Now, it could be that all that randomness in a fast-moving fluid comes from randomness in the environment, or from details of the molecules underneath and how they were arranged at the beginning. But I don’t think so. I think a lot of the randomness is intrinsic randomness. Like in rule 30.

Well, that has an important consequence. Because if the randomness comes from the environment, then we can expect that every time we do the experiment, it’ll look different. We’ll get a different result. But if the randomness is intrinsically generated, like by something like rule 30, then it’ll be repeatable—it’ll be the same every time one runs the experiment.

[Slide 016]

Well, OK, so what about in biology? Where do all those probabilities we’re using come from? Are they reflecting the effects of details of the environment affecting the systems we’re looking at? Or are they reflecting the fact that we just haven’t yet understood the internal mechanisms well enough?

Well, in a sense a lot of the practice of medicine assumes that the environment isn’t that important. Because if it were, there’d never be any predictability in the history of a disease.

But the question is: when things seem random in the progress of a disease, is that randomness somehow fundamentally unknowable and unpredictable, or is it just that we’re seeing the operation of a program whose behavior is complex?

Now, a lot of different things could happen, which might mean that we’d be choosing among lots of different programs. And it might be hard to figure out which program we’re on. But perhaps if we knew that—if we knew where in the algomap we were—then we’d know the outcome. No doubt there could be perturbations, but basically it’d be deterministic. But complicated.

Well, knowing about this would obviously be important for overall medicine. But it’s also important at a more microscopic level.

I mean, when we make medical devices, or even drugs, we’re normally operating with, in a sense, very simple processes. We’re trying to move the level of something this way or that. Block certain binding sites. Whatever.

But we know that it’s pretty common in biology, whether in heartbeats or in the immune system, to use randomness to make things more stable. Because when things are precise, they can be brittle. But when there’s randomness, the overall behavior can be more robust. And our intrinsic deterministic randomness works just fine for that too.

But particularly if there’s that kind of thing going on, then our interventions have to take account of that. We actually have to have our drug molecules—or our nanobots, or whatever—run algorithms that link into this.

Right now, say in nanotechnology, we tend to try to engineer things by taking versions of large-scale machines and devices and shrinking them down very small. And often it’s a big challenge to be able to do that successfully. But from exploring the computational universe we get a different idea.

We get the idea that perhaps we should stick with whatever molecular or other components we can easily get our hands on. But then we should search the computational universe to find programs that can work with those components to do the things we need.

And the remarkable point is that even when the components are simple, we’ll often be able to find them. Like in rule 30, or rule 110, but now made of molecules.

And that gives a new way to imagine making medical objects. Kind of using the same trick as nature. Just mining the computational universe, and recruiting for our purposes things out there that turn out to be useful.

Well, one of the things that happens when we’re kind of sampling the computational universe is that it’s inevitable that we’ll run into computational irreducibility. We may know the deterministic rules for this or that system, but we may not easily be able to predict what the system will do. In fact, we won’t be able to figure that out by any procedure much simpler than just running the system—or a model of the system—and seeing what happens. Now of course, in practice, if we have an efficient model, we can outrun the actual system by running a simulation on a computer.

But we can’t expect to be able to foresee ourselves what will happen. We’ll have to do an irreducible amount of computational work to do that.

Well, so what does this mean for biomedicine? It’s a little like what it means for practical computing. Given a program, it’s hard to foresee what it will do. And that’s the fundamental reason it’s hard to see if programs have bugs.

People don’t have the intuition that programs should do complicated things. So they don’t imagine that there’ll be bugs, but in fact there are. Because that’s just what happens out in the computational universe.

Well, in practice, if we think about the analog of practical computer systems, medical systems are probably quite similar in many ways. There are probably core algorithms, that I suspect are sampled from the computational universe. Probably a little bit harder to understand than the core algorithms of our typical computer systems, because they were built by search, not engineering.

But then these core algorithms are hooked together in a giant piece of spaghetti code; all sorts of interlocking pieces cobbled together—in big computer systems, and in humans.

So, well then, what’s involved in debugging all of this? It’s kind of interesting to see the analogies between medical diagnosis and computer system diagnosis. You see certain symptoms. And if you’re experienced, you know that there are certain probabilities that those symptoms come from this or that.

Well, in computer systems we’re a little ahead of medical systems in some ways, a little behind in others. There are many fewer “clinical trials” — much less statistical outcomes data. But it’s much easier to capture all the raw data about what’s going on in a system, and to run tests on subsystems and so on. And increasingly, computer systems are starting to “learn”; to keep a complex memory of their previous experiences. And of course, increasingly we can’t ever expect just to turn off the system, and try isolating parts and so on. It’s not as difficult to reboot as it is in a medical system, where probably the only way to do that is to wait for another generation to be born. But it’s getting close.

You know, I think computational irreducibility is something that has consequences both in computer systems diagnosis, and medical diagnosis. The issue is: given some symptoms that one’s seeing, will they be important or not? Will the pattern that’s produced just peter out, or end up being something critical?

Well, computational irreducibility makes it hard to answer that in both cases.

You know, as we think about both kinds of diagnosis, one of the issues is: how many different things can go wrong? In medical diagnosis, we have a fairly well-developed calculus of diagnosis codes and so on. We don’t really have that for computer systems diagnosis, though at some level we probably could.

But we know in computers that there are just a huge diversity of possible bugs. And probably that’s true in medical systems too.

If one changes some of the programs one’s using, there’s a huge diversity of different things that can happen. That’s what we’ve seen when we look out in the computational universe.

Now, of course, one of the big issues is what counts as something being wrong? How do we know what’s a bug, and what’s a feature? It’s hard to know in computers. I think it’s sometimes hard to know in medicine too.

And even when one thing is wrong, can we in a sense prove a theorem that says it won’t affect other things, or will?

Well, another issue is: OK, we’ve found something we think is wrong. Now, what can we do about it? How can we fix it?

Well, in the past we might have assumed that to interact with the big complicated system that is human physiology we’d inevitably have to build very, very complicated systems. Full of all sorts of special cases, with millions of lines of code, and so on. But one of the great lessons of studying the computational universe is that it doesn’t need to be that way.

By searching the computational universe, we can start to find programs that interact with big complicated systems, and do good and useful things.

We’ve seen this now in several areas of technology, and software. And I think it’s going to be true in biomedicine too.

And that’s important in practice. Because if it took a million lines of code to figure out what to do in a particular disease, we could never expect a little drug molecule to be able to figure it out “in the field”.

But if there’s a simple program that will work, well, then perhaps we can implement it in a molecule, or at least a very small nanoobject, and have the object itself do the logic of figuring out how to respond to a situation. Rather than us having to use a big blunt instrument that bashes the whole system.

It’s interesting to imagine how this is all going to change what happens in biomedicine. It’s all going to have to become more dynamic, more computational.

When there’s universal computation and computational irreducibility, it’s not going to be possible to consistently “name” discrete diseases. Each “disease” in each individual will correspond to the execution of some program. And there’s in a sense infinite diversity in what can happen, which one often won’t be able to characterize well enough by talking about discrete symptoms. Instead, one’ll have to start thinking about the building blocks of the program itself.

And to combat the disease, one’ll have to do computations—in effect trying to outrun the computations that are going on in the disease itself.

So it’ll not just be a question of finding the right drug, that binds to the right thing in the right way. One’ll have to treat things in a dynamic, computationally active way. Actually having computations going on in the devices, or nanomachines, or molecules, that are interacting with our physiological systems.

Well, there’s a lot more to say, I think, about how the computational universe will interact with the biomedical universe. And of course I certainly haven’t figured it all out. I think we’re going to be hearing a lot more about it in the years to come. But I thank you today for giving me the opportunity here to think a little more about it, and move at least my thinking forward a bit.
