The Past and Future of Scientific Computing (2003)

Videoconference talk at the International Symposium on Modern Computing: in Celebration of John Vincent Atanasoff’s 100th Birthday
Iowa State University, October 31, 2003

Thank you for inviting me to this event. I’m pleased to be with you this evening.

Well, I’ve been asked to say a few words about the history and future of scientific computing. And it’s fitting to be talking about scientific computing here. Because, as I’m sure you know, the original reason John Atanasoff started building a computer was to be able to do scientific computing.

I was just looking at his paper from 1940 this afternoon. It’s entitled “Computing Machine for the Solution of Large Systems of Linear Algebraic Equations”. He wanted to make a computer system that’d solve what are called linear equations fast. Well, as I was looking at his paper I realized a really weird thing. Here’s an ad that we just made this month about the latest version of Mathematica.

And look what it’s talking about: solving linear equations! That we’ve been able to make Mathematica solve linear equations really fast!

Well, in 1940 Atanasoff estimated that with hand calculation it would take at least 125 hours to solve a system of 20 linear equations. And he thought his computer would be able to cut that time down to about 12 hours. Which was pretty impressive.

But now sixty-three years have passed. So what’s happened? Well, here’s the answer: now Mathematica can solve more than a thousand linear equations in a small fraction of a second. So there’s been some progress. And it’s kind of interesting to see the thread of linear equations go so directly through the whole of the last sixty-three years of scientific computing.
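Just to make that concrete, here’s a sketch in present-day Mathematica of the kind of computation involved. The system here is just a random one, and the size and timing are purely illustrative:

  (* set up a random 1000-equation linear system and time its solution *)
  n = 1000;
  m = RandomReal[{-1, 1}, {n, n}];   (* coefficient matrix *)
  b = RandomReal[{-1, 1}, n];        (* right-hand side *)
  {time, x} = AbsoluteTiming[LinearSolve[m, b]];
  time   (* typically a small fraction of a second on current hardware *)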

But I want to talk here more about the general picture of scientific computing. About where it’s been, and where it can go now.

It started pretty slowly. I mean, nearly 400 years ago the very first digital mechanical computer we know about was built to do science—to help Kepler with his computations on the orbit of the moon. And by the end of the 1800s people were pretty routinely using mechanical calculators in doing scientific work. But they weren’t really doing anything they wouldn’t otherwise have done by hand, just a little more slowly.

But by the late 1940s—notably after Atanasoff’s work—there were starting to be digital electronic computers. And not only were they fast. They also didn’t just do individual pieces of arithmetic. They let one set up whole sequences of calculations.

Well, in science the place this got used most was in physics. And the main idea was to take calculations that had been done by hand before, and to automate them with computers. Well, there were quite a few places where this worked well—notably in ballistics and simple structural and fluid mechanics. But often what really was going on—just as Atanasoff had said—was that one was solving linear equations.

Well, it had been known since the late 1800s that many important equations in physics were nonlinear. But those equations had been pretty much impossible to solve by traditional hand methods. But by the late 1940s quite a few famous names in physics and mathematics were beginning to think that perhaps computers might be able to do it—and to crack lots of fundamental problems in physics.

Well, starting in the 1950s various groups tried to do it. And here’s the bad thing: it basically just didn’t work. Or at least it didn’t seem to work. People would run their programs on nonlinear problems. And their programs would do all sorts of things. But some of those things were definitely not right. And pretty quickly everything got mired in all sorts of details of numerical analysis and so on.

Over in the world of linear problems, things were going quite well. And there were lots of applications to engineering emerging. But to crack fundamental science problems something had to be done on the nonlinear side. And that just wasn’t working. And actually that failure—back in the 1950s and early 1960s—cast a long shadow over scientific computing. It made people think that, yes, scientific computing could be useful for complicated, messy, systems like the ones in engineering. But that when it came to truly fundamental questions in science, one just couldn’t expect to make reliable progress with computers.

And when I first got exposed to scientific computing—as a teenage theoretical physicist in the mid-1970s—that was almost the universal view. People always told me to stay away from scientific computing: that that wasn’t something a respectable scientist should be doing.

Well, the first thing that made me not take their advice was that I realized one could use computers not just to do calculations with numbers—but to do algebra. People had experimented with doing algebra by computer since the 1950s. But mostly programs only ever got used by the people who’d written them. Well, I decided to try using the programs just as tools—for doing my theoretical physics. And it worked great. Because, you see, algebra is exact—so you never have to worry about the details of numerical analysis to know if you’ve got the right answer.

Well, then at the end of the 1970s, another thing happened: computer time started getting cheap. When most computers were mainframes, they were getting fed programs to run every moment of every day. But by the beginning of the 1980s, there were lots of minicomputers—that were sitting idle a lot of the time.

Well, if computer time is expensive you pretty much have to stick to using it for things you know are going to work. But if it’s cheap, you can start innovating, and trying different things. And for me that let me start using computers for science in a whole different way.

Well, in the late 1970s my physics had led me into various questions about cosmology. And in particular to the question of how structures emerge in our universe—from galaxies on down. I quickly realized that that was actually an instance of a much more general question: how does anything complicated get produced in nature? There are lots of everyday examples—snowflakes, turbulent fluid flows, forms of plants and animals … and lots of others.

And my first assumption was that with all the sophisticated math I knew from physics and so on, I’d easily be able to figure out what was going on with these ordinary everyday systems. But when I actually tried to do it—and tried to use traditional scientific computing and so on—it just didn’t seem to work. And gradually I started thinking that perhaps there might be a fundamental problem with the whole approach.

Well, if one looks at history, the idea of using math and mathematical equations to understand nature has been sort of a defining feature of the exact sciences for perhaps three hundred years. And it certainly worked out extremely well for Newton and friends in figuring out orbits of comets, and for lots and lots of things since then. And it was the foundation for the traditional approach to scientific computing—to studying linear equations and so on.

But somehow when the behavior one’s looking at is more complicated, none of it seems to work too well. And gradually what I began to think was that that was actually why there had never really been a good theory for complicated processes in nature—in physics, and particularly in biology and so on. And I got to wondering about whether there might somehow be a way to go beyond the usual idea of using mathematical equations as the basis for thinking about nature.

Well, that was around 1981. And it so happened that at that time I had just spent time developing a big software system called SMP that was in some ways a forerunner of Mathematica. And at the core of SMP was a computer language. And what I’d done to design that language was somehow to try to think about all the computations that people might want to do—and then to try to identify primitives that could be strung together to build up those computations.

Well, that had worked out fairly well—and in fact in a much-evolved form I think it’s now worked out spectacularly well in Mathematica. But back in 1981 I had the idea that perhaps—just as I’d been able to find primitives for computations people want to do—I might somehow also be able to find primitives for what nature does.

And the crucial thing I realized is that those primitives don’t have to be based only on traditional mathematical constructs. If one’s going to be able to do theoretical science, one has to assume that nature follows some kind of definite rules. But why should those rules involve only the kinds of constructs that have been invented in human mathematics—numbers, exponentials, calculus and so on? Can’t the rules somehow be more general?

Well, in the past there wouldn’t really have been any systematic way to think about such things. But now—thanks to John Atanasoff and many others—we have computers, whose programs in effect implement arbitrarily general rules. And the idea I had was that perhaps the kinds of rules that can be embodied in programs might actually be what nature is using.

But what sorts of programs might be relevant? From knowing about traditional mathematics and scientific computing, I assumed they’d have to be at least somewhat complicated. But still, I figured I should at least start by looking at really simple programs.

In practical computing we’re used to fairly long and complicated programs that are specifically set up for particular tasks. Just as Atanasoff for example put together thousands of elementary logical operations to do linear algebra. But what I wanted to know about were really simple, short, programs—say just made from a few logic operations.

What do programs like that do?

It’s sort of the simplest possible computer experiment: run the simplest programs and see how they behave. Well, back in 1981 I came up with particular kinds of programs to try—that are known as cellular automata, and that have quite a history of their own.

Here’s how a simple one-dimensional cellular automaton works. One starts with a row of cells, each either black or white. [See A New Kind of Science, page 24.]

Then the cellular automaton evolves down the page, with the color of each cell on each step being determined by a definite rule from the color of the cell and its neighbors on the step before. [See A New Kind of Science, page 24.]


Well, in this particular case the rule is really simple. It just says that a cell will be black whenever it or its neighbors were black on the row before. And what happens is that we just get a simple uniform black pattern. [See A New Kind of Science, page 24.]
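If one wants to try this, here’s a minimal sketch in Mathematica. The rule just described, black whenever the cell or either of its neighbors was black, is rule 254 in the standard numbering:

  (* run rule 254 from a single black cell and plot the evolution *)
  ArrayPlot[CellularAutomaton[254, {{1}, 0}, 20]]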

Well we can use that icon at the bottom to represent the rule—the program—we were using. So what happens if we change the rule a bit? [See A New Kind of Science, page 25.]

Well now, instead of getting just a uniform black pattern, we get a checkerboard. But so far none of this is terribly surprising. We’re using very simple rules, and we’re getting simple patterns out. Well, let’s try another rule. [See A New Kind of Science, page 25.]

It’s the same setup as before. But what’s going on here? We don’t seem to be getting any kind of simple repeating pattern. Let’s run it a bit longer. Well, it gets pretty intricate. [See A New Kind of Science, page 26.]

But we can see it’s very regular: it’s a self-similar or fractal pattern—just formed from nested pieces. And one might think that if one has a simple rule and starts from just a single black cell then one would always have to get a pattern that’s somehow very regular—like this.

At least, back in 1982 that’s what I assumed was true. But one day I decided to try a very systematic experiment, and just run every single one of the 256 possible simplest cellular automaton rules. And when I got to rule number 30 this is what I saw. [See A New Kind of Science, page 27.]

What’s going on here? Let’s run it some more. [See A New Kind of Science, page 30.]

There’s a bit of regularity over on the left. But otherwise, this looks really complicated—actually kind of random.

Well, when I first saw this, I thought it might just be a problem with our visual system: that really there were regularities, but we just couldn’t see them. So I ran all sorts of elaborate mathematical and statistical tests. And what I found was that no, so far as I could tell, something like the center column of cells really was perfectly random.
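For anyone who wants to repeat a simple version of this experiment, here’s a sketch. It extracts the column of cells above the initial cell and does one crude statistical check; passing it is of course only a sanity test, not a proof of randomness:

  (* evolve rule 30 and pull out the center column *)
  steps = 2000;
  ca = CellularAutomaton[30, {{1}, 0}, steps];
  center = ca[[All, steps + 1]];   (* the column above the initial cell *)
  N[Mean[center]]                  (* close to 0.5 for a random bit stream *)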

Well, this is rather amazing. We have a very simple rule, and we’re starting off from a single black cell. But what we’re getting out is an incredibly complicated pattern—that seems in many ways random. It just doesn’t seem right. We’ve put so little in, yet we’re getting so much out. It’s not what our ordinary intuition says should happen.

I mean, in our everyday experience and, say, in doing engineering what we’re used to is that to make something complicated, we somehow have to start off with complicated plans or use complicated rules. But what we’re seeing here is that actually even extremely simple rules can produce incredibly complicated behavior.

Well, it took me years to come to terms with this phenomenon. And in fact, it’s gradually overturned almost everything I thought I knew about the foundations of science. And it’s led me to spend the past 17 years building a new intellectual structure—really a new kind of science. Which among other things gives us a new foundation for what we think of as scientific computing.

Well, OK, so having seen rule 30, what more can happen? Well, here’s another cellular automaton. [See A New Kind of Science, page 67.]

And it’s really very common for them to show complex behavior.

Well, in the mid-1980s I’d studied a bunch of cellular automata, and I’d seen this basic phenomenon. But I wasn’t sure whether it was somehow specific to cellular automata, or more general. And I had the practical problem that there wasn’t any easy way to set up all the very different types of computer experiments that I’d have to do to find out.

You see, I needed in a sense to be able to do all possible forms of scientific computing in a uniform way. To investigate not just what one could do with a specialized numerics package, or a specialized graphics system. But to be able to study everything together.

Well, in fact, that was a large part of why I started building Mathematica. I wanted once and for all to make a single system that I could use to do all the computations I’d ever want. And to do that I had to come up with a framework for unifying all sorts of very different kinds of computations.

It’s a funny thing: even though the ways we use computers have changed incredibly since the 1940s, most of the languages that people use to program have actually kept very much the same form.

Well, one of the things I had to do to make Mathematica was to figure out how to generalize computer languages. I ended up going back to things that had been studied in mathematical logic in the early 1900s and in the theory of computation in the 1930s. And what I was led to is what I call “symbolic programming”.

It’s a much more general, flexible, way of programming. Where one operates not only on the content of whatever data one’s dealing with. But also its structure.

Well, like anything intellectually serious, there are things to learn to be able to do symbolic programming well. But it’s very powerful. And it’s amazing how productive people manage to be with it in Mathematica.
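Here’s a tiny illustration of the idea. A single pattern-based rule gets applied to the structure of an expression, wherever that structure occurs; the symbol f is just a placeholder:

  (* rewrite every sum in the expression tree, whatever it contains *)
  expr = (a + b)^2/(c + d);
  expr /. x_ + y_ :> f[x, y]
  (* -> f[a, b]^2/f[c, d] *)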

Well, some of what gets done with it is general computation, and IT kinds of things. But a lot is technical and scientific computation. And for me what’s been really exciting is to be able to use the generality of symbolic programming to explore computation in its full generality—and to address foundational questions about for example scientific computing.

You know, when I first started using Mathematica seriously for these kinds of things in the early 1990s, it was tremendously exciting. I guess it was a little like I imagine it must have felt when telescopes or microscopes were first invented. One could just point them somewhere, and suddenly see a whole world of new phenomena—like mountains on the moon, or tiny creatures swimming around in drops of pondwater.

Well, in the computational world, one now just pointed Mathematica somewhere, and suddenly one could see all this amazing stuff. So the first question was: just how special are cellular automata? Well, I looked at lots of cellular automata, and often saw very complicated behavior. [See A New Kind of Science, page 63.]


But what about programs with different setups? Here’s a Turing machine for example. [See A New Kind of Science, page 78.]

It’s got a row of cells, like a cellular automaton. But unlike a cellular automaton, only one cell gets updated at each step, and that cell moves around. Well, the first quite a few Turing machines one looks at only do these kinds of simple things. [See A New Kind of Science, page 79.]

But if one keeps going—still with very simple rules—then suddenly one sees this. [See A New Kind of Science, page 81.]

The same kind of complexity that we saw in cellular automata.
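To make the Turing machine setup concrete, here’s a hand-rolled sketch in Mathematica. The rule table is an arbitrary illustrative one, not the specific machine from the book:

  (* each rule maps {state, cell color} to {new state, new color, head offset} *)
  rules = <|{1, 0} -> {2, 1, 1}, {1, 1} -> {1, 1, -1},
            {2, 0} -> {1, 1, -1}, {2, 1} -> {2, 0, 1}|>;
  step[{s_, p_, tape_}] := Module[{s2, c2, d},
    {s2, c2, d} = rules[{s, tape[[p]]}];
    {s2, p + d, ReplacePart[tape, p -> c2]}];
  history = NestList[step, {1, 51, ConstantArray[0, 101]}, 40];
  ArrayPlot[history[[All, 3]]]   (* how the tape evolves over time *)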

So what about other kinds of programs? Well, I looked at many many different kinds. Register machines—kind of like minimal idealizations of machine language, or actually like pieces of Atanasoff’s machine. [See A New Kind of Science, page 100.]

Symbolic systems—like generalizations of combinators or kind of minimal idealizations of Mathematica. [See A New Kind of Science, page 104.]

Or two-dimensional cellular automata. [See A New Kind of Science, page 174.]

Or three-dimensional ones. [See A New Kind of Science, page 183.]

Or tiles that just follow certain constraints. [See A New Kind of Science, page 221.]

And every time one sees the same basic thing: incredibly simple rules can give incredibly complicated behavior. It seems to be a very robust—and very general—phenomenon.

But how come something that fundamental hadn’t been known for ages? Well, partly it’s because one can only easily see it by doing lots of computer experiments—and it’s only with computers, and Mathematica, that those have become easy to do. But more important is that with our ordinary intuition there just didn’t seem to be any reason to try the experiments; it seemed so obvious they wouldn’t show anything interesting.

Well now that one’s made the clear discovery that simple programs—simple rules—can produce complicated behavior, one can go back and find all sorts of hints about it from really long ago. Nearly 2500 years ago the Greeks looked for example at prime numbers. And there’s a fairly simple rule—a fairly simple program—for generating all the primes. But the sequence of primes, once generated, looks remarkably irregular and complicated. [See A New Kind of Science, page 132.]

There’s also a fairly simple rule for generating the digits of pi. But once generated, that sequence looks to us completely random. [See A New Kind of Science, page 136.]
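Both of those are easy to play with. Here’s a sketch, with a direct sieve for the primes and the built-in way to get binary digits of pi:

  (* sieve of Eratosthenes: mark multiples, keep what survives *)
  sieve[n_] := Module[{flags = ConstantArray[True, n]},
    flags[[1]] = False;
    Do[If[flags[[i]], Do[flags[[j]] = False, {j, 2 i, n, i}]],
      {i, 2, Floor[Sqrt[n]]}];
    Flatten[Position[flags, True]]];
  sieve[50]                      (* -> {2, 3, 5, 7, 11, ...} *)
  First[RealDigits[Pi, 2, 50]]   (* the first 50 binary digits of pi *)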

And actually, there are lots of cases of this kind of thing with numbers. Even in those favorite things from traditional mathematical science: partial differential equations.

The ones people usually study show only rather simple behavior. [See A New Kind of Science, page 163.]

But I just did an automated search of possible equations. And pretty soon, I ended up finding these creatures. [See A New Kind of Science, page 165.]

They’re just simple nonlinear PDEs. But even with very simple initial conditions, they end up doing all sorts of complicated things.

Well, actually, it’s a little hard to tell exactly what they do. We’ve used these equations for a while to test the state of the art in PDE solving for Mathematica. But with continuous systems there’s always a problem. Because you end up having to discretize them, and without already more or less knowing the answer, it’s hard to tell whether what you’re seeing is something real. And that’s exactly why things got stuck in studying the foundations of nonlinear systems with scientific computing in the 1950s.
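For what it’s worth, here’s roughly how one pokes at such an equation numerically in Mathematica. The particular nonlinear term, parameters and initial condition below are assumptions for illustration, in the same general family as the equations just shown, and the discretization caveat applies to whatever one sees:

  (* numerically evolve a nonlinear wave-type PDE from a smooth bump;
     the equation form and ranges here may need tuning *)
  sol = NDSolve[{D[u[t, x], t, t] ==
        D[u[t, x], x, x] + (1 - u[t, x]^2) (1 + u[t, x]),
      u[0, x] == Exp[-x^2], Derivative[1, 0][u][0, x] == 0,
      u[t, -20] == u[t, 20]},
    u, {t, 0, 10}, {x, -20, 20}];
  Plot3D[Evaluate[u[t, x] /. First[sol]], {t, 0, 10}, {x, -20, 20}]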

But in a discrete system like rule 30 there are no issues like that. The bits are the bits, and one can tell one’s seeing a real phenomenon. [See A New Kind of Science, page 30.]

And actually, rule 30 is so simple to set up, there’s really no reason the Babylonians couldn’t have done it. And I sometimes wonder whether—a bit like these—an ancient mosaic of rule 30 might one day be unearthed. [See A New Kind of Science, page 43.]

But I rather think that if rule 30 had actually been known in antiquity, a lot of ideas about science and nature would have developed somewhat differently. Because as it is, it’s always seemed like a big mystery how nature could manage—apparently so effortlessly—to produce so much that seems to us so complex.

It’s like nature has some special secret that allows it to make things that are much more complex than we as humans normally build. And often it’s seemed that that must be evidence that there’s something somehow beyond human intelligence involved. But once one’s seen rule 30, it suggests a very different explanation. [See A New Kind of Science, page 30.]

It suggests that all it takes is for things in nature to follow the rules of typical simple programs. And then it’s almost inevitable that—like in the case of rule 30—their behavior can be highly complex.

In the way we as humans are used to doing engineering and building things, we tend to operate under the constraint that we have to foresee what the things we’re building are going to do. And that means that we’ve ended up being forced to use only a very special set of programs that happen always to have simple foreseeable behavior. But the point is that nature is presumably under no such constraint. So that means that there’s nothing wrong with it using something like rule 30—and that way inevitably producing all sorts of complexity.

You know, looking back, early scientific computing had quite a few places where essentially the rule 30 phenomenon showed up. But without a framework to understand it, it always got ignored. People thought it must just be some kind of “numerical noise” that wasn’t relevant to whatever it was they were analyzing.

But now that we know the phenomenon, what does it mean for scientific computing?

Well, first there’s a practical side: it suggests a whole world of new kinds of models, based not so much on mathematical equations, but instead on simple programs.

Whenever one makes a model of something, what one’s trying to do is to capture certain essential features, and then idealize everything else away. And now we’ve got lots of new raw material for making those models. Like let’s say one’s trying to understand snowflakes. [See A New Kind of Science, page 370.]

Well, crystals grow by starting from a seed, then successively adding pieces of solid. And one can try to capture that with a simple two-dimensional cellular automaton.

Imagine a grid, where every cell either has solid in it or not. Then start from a seed and have a rule that says solid will be added at any cell that’s adjacent to one that’s already solid. Here’s what one gets. [See A New Kind of Science, page 369.]


It’s an ordinary-looking faceted crystal, here on a square grid.
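In code, that growth rule is just a few lines. This sketch uses a square grid with four neighbors and periodic boundaries, so the grid only needs to be large enough that the pattern never reaches the edge:

  (* count the four orthogonal neighbors of every cell at once *)
  neighbors[g_] := RotateLeft[g, {1, 0}] + RotateRight[g, {1, 0}] +
    RotateLeft[g, {0, 1}] + RotateRight[g, {0, 1}];
  grow[g_] := Sign[g + neighbors[g]];   (* solid stays; any solid neighbor adds *)
  seed = ReplacePart[ConstantArray[0, {41, 41}], {21, 21} -> 1];
  ArrayPlot[Nest[grow, seed, 15]]       (* a diamond-shaped faceted crystal *)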

One can do the same thing on a hexagonal grid. [See A New Kind of Science, page 369.]


But for snowflakes there’s an important effect missing here. When a piece of ice solidifies from water vapor, there’s some latent heat released. And that inhibits more ice from solidifying nearby.

Well, what’s the simplest way to capture that effect? One can just change the cellular automaton to say that ice gets added only if the number of neighboring cells that are already ice is exactly one. OK, so what happens then? Well here’s the answer. [See A New Kind of Science, page 371.]
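In the sketch above, that’s a one-line change: a cell now becomes ice only when its neighbor count is exactly one. This still uses a square grid rather than the hexagonal one in the book’s pictures, so the shapes will differ in detail:

  (* 1 - Unitize[n - 1] is 1 exactly when the neighbor count n is 1 *)
  snow[g_] := g + (1 - g) (1 - Unitize[neighbors[g] - 1]);
  seed = ReplacePart[ConstantArray[0, {101, 101}], {51, 51} -> 1];
  ArrayPlot[Nest[snow, seed, 30]]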


These are all the stages that one sees. [See A New Kind of Science, page 371.]

And these look an awful lot like real snowflakes. It really seems like we’ve captured the basic mechanism that makes snowflakes have the shapes they do. And we get various predictions: like that big snowflakes will have little holes in them, from where arms collided. Which indeed they do.

OK, but even though the pictures we’ve got look a lot like real snowflakes, there are obviously details that are different. But one thing one has to understand is that that’s always going to happen with a model. Because the whole point of a model is to capture certain essential features of a system, and to idealize away everything else.

And depending on what one’s interested in, one may pick out different features to capture. And so the cellular automaton model is good, for example, if one’s interested in the basic question of why snowflakes have complicated shapes—or what the distribution of shapes in some snowflake population will be. But it’s not so useful if one’s trying to answer a question like specifically how fast each arm will grow at a certain temperature.

I might say that there’s actually a general confusion about models that often seems to surface when people first hear about cellular automata. They’ll say: OK, it’s all very nice that cellular automata can reproduce what snowflakes do, but of course real snowflakes aren’t actually made from cellular automaton cells.

Well, the whole point of a model is that it’s supposed to be an abstract way of reproducing what a system does; it’s not supposed to be the system itself. I mean, when we have differential equations that describe how the Earth moves around the Sun, we don’t imagine that inside the Earth there are all sorts of little Mathematicas solving differential equations. Instead, it’s just that the differential equations represent—abstractly—the way the Earth moves.

And it’s exactly the same thing with models based on cellular automata: the cells and rules abstractly represent certain features of a system. And again, what abstract representation—what type of model—is best, depends on what one’s interested in.

In the case of snowflakes there are certainly traditional differential equations that could be used. But they’re complicated and hard to solve. And if what one’s actually interested in is the basic question of why snowflakes have complicated shapes, the cellular automaton model is a much better way to get at that.

Well, there are lots of places where similar things happen. Often there’s some simple version of a phenomenon that one can study using traditional scientific computing. But then there’s a more complicated version where traditional scientific computing hasn’t had anything reliable to say. But now, by making models using simple programs, we can start to really understand what’s going on. And that’s true not only in physics, but also in areas like biology.

If you see a pattern like this on a mollusc shell, you might imagine that it was somehow difficult to get, and difficult to model. [See A New Kind of Science, page 423.]

That perhaps it was the result of some long and complicated optimization by natural selection.

Well, one of the things one realizes from rule 30 and so on is that it’s actually easy to make complicated stuff like this. It just takes a simple program to do it. And actually that’s how I think biology does it most of the time.

Biology often just seems to sample lots of simple programs. Like here are the patterns one gets from all possible cellular automata of the simplest kind. [See A New Kind of Science, page 424.]

And here are patterns from a bunch of different kinds of molluscs. [See A New Kind of Science, page 423.]

There’s an amazing similarity. And it really seems like the molluscs of the Earth are just sampling different possible cellular automata—and we’re getting to see the results output on their shells.

Usually we imagine that there can’t be much of a fundamental theory in biology, because what we see must depend on all the accidents in the history of biological evolution. But things like the mollusc shell example suggest something different: that by understanding abstractly what happens with simple programs, we might be able to begin to make a genuine predictive theory of biology.

There are some pretty important implications for molecular biology and computational biology.

One of the key problems there is to understand what are the appropriate primitives to represent processes in cells. Is it just ‘this goes up’, ‘that goes down’? Or is it a set of mathematical equations? Well, I think a much better description is that it’s arbitrary, simple, programs. [See A New Kind of Science, page 30.]

And I wouldn’t be surprised if in the next few years this’ll let us make some spectacular progress in understanding core mechanisms of processes like aging. Much as 50 years ago the idea of digital information let us understand the core mechanisms of genetics.

You know, the idea of thinking in terms of simple programs has implications all over science. Like in fundamental physics. Where I’m more and more hopeful that it’ll be possible to find a single, simple, program that’ll really reproduce all of physics, and all of our universe.

It’s kind of abstract. It involves discrete networks that are sort of underneath space and time. But it’s looking pretty promising. It’s looking like from an incredibly simple underlying setup one can derive relativity theory, and gravity, and maybe quantum mechanics.

Well, in a sense that’s the ultimate scientific computing. To have a program that can be run to reproduce everything we see in our universe. In a sense to reduce all of physics to pure computation.

But so from this kind of thinking, what do we learn about the foundations of scientific computing?

Well, here’s a very fundamental thing—that I call computational irreducibility. In traditional science, one of the core ideas is what I call computational reducibility. It’s the idea that even though it may take some system in nature a lot of effort to do what it does, we should be able to figure out what it will do with much less computational effort. To work out where an idealized Earth will be a million years from now, we don’t have to trace a million revolutions of its orbit. We just have to plug some number into a formula, and immediately we’ll get out the result. But what about a system like this? [See A New Kind of Science, page 740.]

It’s just a simple cellular automaton, with a simple underlying rule. But how can we work out what it’s going to do? There doesn’t seem to be any obvious way to shortcut the evolution. And in fact, I think it’s computationally irreducible: that however we set up our computations, they’ll always require essentially as much effort as the system itself had to go through to figure out what it’s going to do.

Really, this is a consequence of a very general principle that I call the Principle of Computational Equivalence.

You might think that every different system one sets up would do a different kind of computation. Like Atanasoff’s machine did linear algebra. And one might need a completely different machine to evaluate polynomials.

Well, the key idea that started in the 1930s, and really emerged in the 1940s, is that no, that’s not necessary. It’s possible to have a single, universal, machine that can be programmed to do any kind of computation. And of course that’s a pretty important idea. Because in a sense it’s really the basis of the whole computer revolution.

But somehow that idea has never seemed too relevant to areas like natural science. And certainly if we look at the kinds of systems that we normally use to do universal computation, that doesn’t seem terribly surprising. I mean, the CPU chips in our computers have millions of gates on them and so on. They’re not things we would seriously expect to find just lying around in nature.

And from our normal intuition, we’d assume that simple things in nature—or for that matter simple programs—would only be capable of simple computations. And we’d probably think that the complexity of the computation that would get done in a system would somehow be proportional to the complexity of its underlying structure.

But from the experiments I did on simple programs, I started to suspect something different. That in fact there was a very low threshold, and that above that threshold the kinds of computations that systems do are all essentially equivalent.

Normally in science we’ve made sort of an implicit assumption that observers of systems are somehow computationally arbitrarily more sophisticated than the systems themselves. But what the Principle of Computational Equivalence says is that no, in fact most systems that look complex to us are actually equivalent in their computational capabilities to us. And in fact that’s why they look complex, and that’s why they show computational irreducibility.

And from the point of view of scientific computing, that’s in a sense why it’s needed. One can’t expect to reduce what a system does to some simple formula that one can just evaluate. One actually has to do computation to figure out how the system will behave.

Now, from a practical point of view, computational irreducibility has an important implication for scientific computing. It says it’s really important to pick the simplest underlying models—because one’s always going to have to run them for many steps. So this idea of making models in terms of simple programs isn’t just convenient—it’s in a sense fundamentally necessary if one is going to get as far as possible in scientific computing.

Well, OK, so what does all this mean for the future of scientific computing?

For a start, it means we’ve got to get serious about using languages that really support generality. It’s not going to be enough to use Fortran, or some matrix language, to do scientific computing in the future.

One needs the paradigm of symbolic programming to represent the kinds of constructs that are relevant for making good models. And even if one sticks with existing models, one needs the generality of symbolic programming to make the best progress with them.

Finding algebraic formulas is sort of an embodiment of computational reducibility. And we know from computational irreducibility that this isn’t always going to be possible. But to get the most out of computationally reducible situations—like even linear algebra—one has to use all the methodologies one can.

And so for example, when I showed at the beginning our achievements in making the latest version of Mathematica so fast, what’s going on under the hood is in a sense very much based on the generality of symbolic programming. Because what’s happening is that, for example, inside a numerical computation one’s doing a little bit of algebra, or vice versa.

In Mathematica we’ve been steadily building up by far the largest knowledgebase of algorithms that’s ever been assembled. And the thing that’s now happening all the time is that through symbolic programming, all these algorithms are starting to build on each other. And I think we’re going to see quite an acceleration of what can be done in computationally reducible problems.

But what about when there’s a lot of computational irreducibility? Which is what seems to be happening in most of the examples of complexity that we see in natural and artificial systems. How can scientific computing help there? Well, once one has an underlying model, one can just run it. And of course doing that well can involve lots of issues about software and computer architectures and so on.

But how is one to find the model?

Usually that’s sort of a creative act. Or perhaps one has some well-defined class of models, and one can just use statistics to tweak their parameters. But what about models based on simple programs? Well, if a model is simple enough, there’s a bizarre possibility: one can just search through possible models to try to find it.

And in fact, we’re building a giant atlas of simple programs—that’ll soon be available on the web—that will let people do that. At first it seems kind of crazy that one might just be able to search through all possible models to find the one that one wants. But that’s the amazing thing about the computational world, and the Principle of Computational Equivalence. One doesn’t have to go far to find all sorts of complexity—just like what we see for example in nature.
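Here’s a toy version of that kind of search, over just the 256 simplest cellular automaton rules. Compressed size is used as a crude stand-in for behavioral complexity; it’s a rough heuristic, nothing more:

  (* rank rules by how badly the patterns they produce compress *)
  score[r_] := ByteCount[Compress[CellularAutomaton[r, {{1}, 0}, 200]]];
  TakeLargestBy[Range[0, 255], score, 10]
  (* rules like 30 should show up near the top *)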

I’m always really surprised at how well this idea of simple programs actually works.

Like for example in an abstract setting I wanted to find the simplest representation for Boolean algebra—for logic. And it turned out that just doing a search I managed to find it. And in fact I won’t be surprised if we can actually find the ultimate laws of physics—the fundamental rules for the universe—just by a search.
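The axiom that search turned up, as it’s usually quoted, is ((p . q) . r) . (p . ((p . r) . p)) = r, with . standing for Nand. Here’s a quick truth-table check that it at least holds; passing is necessary, though of course far short of the real result, which is that this single axiom generates all of Boolean algebra:

  (* verify the candidate NAND axiom on all 2^3 truth assignments *)
  nand[a_, b_] := 1 - a b;
  axiom[p_, q_, r_] :=
    nand[nand[nand[p, q], r], nand[p, nand[nand[p, r], p]]] == r;
  And @@ Flatten[Table[axiom[p, q, r], {p, 0, 1}, {q, 0, 1}, {r, 0, 1}]]
  (* -> True *)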

There’s quite a lot of technology involved in doing that search. And you can watch for some of it in future versions of Mathematica. But I think it’s really going to work. And be pretty exciting.

Well, so what does this all mean?

I think the key to the future of scientific computing has to be a broadening of its foundations. Going to models—to primitives—beyond the ones of traditional mathematics.

Atanasoff could have done that in the 1930s. He just would have had to hook up his triode switches in different ways. Not to make an arithmetic unit, but say to implement a cellular automaton. But without the context that we now have, there wouldn’t have been any reason to do that.

But now that we know about simple programs and what they do, we can start to build a new kind of science that’s based on them. And indeed that’s what I’ve tried to do over the past twenty years, and in the big book I published last year. [See A New Kind of Science.]

There are many applications of the new kind of science: to making new models of things, and to making new kinds of technology. But at the core is a new kind of basic science. That in many ways is like a physics, or a chemistry, or a mathematics. But concerned with the computational world.

It could perhaps have happened a little earlier in history, but right now is an exciting time. When we first get to explore what’s really out there in the computational world. To look at trillions of different simple programs and see what they do. See ones that capture the essential mechanisms of important phenomena in nature. Or ones that represent algorithms or structures important for new pieces of technology.

When Atanasoff designed his computer, he had a very straightforward algorithm for solving linear equations. Well, in Mathematica we have a much more developed algorithm. But it still in a sense has a simple, engineered, underlying structure. But somewhere out there, deep in the computational world, I’ve no doubt that there’s a much better algorithm. That wouldn’t be easy for us as humans to construct, but that we can certainly use once we’ve found it.

Our Atlas of Simple Programs is in a way a bit like a hugely generalized chemistry database. With compounds that turn out to be important for some process in nature—or for some industrial purpose.

There is much to mine in that world of simple programs—for science, for modelling, for art or for technology. And what we find will, I think, be what dominates the scientific computing of the future.

In what we have done so far in scientific computing, we have used only a tiny part of what the concept of computation makes possible. But now, from that idea of digital computation that began to take shape all those years ago, we begin to see the vast computational universe. And exploring that computational universe is, I think, the shared destiny of computing in science, and the science of computing.

Thank you very much.
