Computation and the Future of the Human Condition

This talk was given at the H+ Summit @ Harvard in Cambridge, Massachusetts on June 12, 2010.

Well, today I'm going to do something pretty unusual for me: I'm going to talk in public about the future.

You know, I've spent most of my life just quietly trying to make the future happen, mostly in science and technology.

But partly because it's useful to me, and partly just because it's interesting, I also do quite a bit of thinking about what might happen in the future, both near and far. And I thought here today it would be fun to share some of that with you.

The things I'm going to talk about are mostly things I've been thinking about for a long time. I think I've made some progress with them, but I definitely haven't come close to figuring them all out.

But to get into what I have figured out, I first have to tell you about some slightly elaborate and abstract things.

One day I'm sure all of these ideas will be commonplace. But right now they're still out on the front lines—things that have emerged from the science I've worked on.

Well, OK. So really from a conceptual point of view the core framework is the idea of computation.

And you know, when we look back, I think it's going to become clear that computation is by far the most important idea that's emerged in the past century.

But in the future I think it's going to become still more important, until it's really the dominant theme of our very nature and existence.

And that's what I want to get to here today—and talk about some of the implications.

 

Well, in my life so far, I've basically done three large projects.


And each of them in a different way informs my view of the future.

Mathematica in showing me what large-scale formalization can achieve.

Wolfram|Alpha in helping me understand the span of human knowledge and the automation of a certain kind of intelligence.

But for our purposes here today, the most important is A New Kind of Science (NKS). Because it provides the paradigm for what I'll be talking about.

And what NKS is really about is the core concept of computation.

You know, when we think of computation today, we typically think of all those sophisticated computers and programs that we've set up to do particular tasks.

But what NKS is about is the pure basic science of computation—the science of what's out there in the computational universe of all possible programs.

Here's an example of a really simple program. It's called a cellular automaton, and it's the first type of simple program that I studied nearly 30 years ago.


It's just got a line of cells, each black or white, and each updating its color down the page according to that simple rule at the bottom.
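To make that update concrete, here is a minimal Python sketch of an elementary cellular automaton step (my own illustration; the talk's pictures were made with Mathematica). The rule number's binary digits give the new color for each of the eight possible three-cell neighborhoods:

```python
def ca_step(cells, rule):
    """One step of an elementary cellular automaton (cyclic boundaries).

    cells: list of 0 (white) / 1 (black); rule: integer 0-255 whose
    binary digit at position (left*4 + center*2 + right) gives the
    new color of a cell with that neighborhood.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 254: any neighborhood containing a black cell turns black.
print(ca_step([0, 0, 0, 1, 0, 0, 0], 254))  # → [0, 0, 1, 1, 1, 0, 0]
```

Applying the step repeatedly, row after row, produces the pictures being described: each row is computed from the one above it.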

Well, OK, so this does something pretty trivial.

But what if we change the rule?


It's still pretty trivial.


Here it looks more complicated.


And if we go on for a while this gets pretty intricate. But it's still in a sense very regular.

So then we have to ask: out in the computational universe of all possible programs, do all the simple ones produce behavior that's somehow ultimately simple?

We can't figure that out by pure thought.

But it's easy to just do a little Mathematica experiment to find out. Just enumerate all possible simple programs of the type we're looking at, and see what they do.
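In Python rather than Mathematica, that experiment is a short loop over all 256 elementary rules (a sketch of the idea, not the original code):

```python
def ca_step(cells, rule):
    """One elementary-CA step; rule bits index the 8 neighborhoods."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def evolve(rule, width=31, steps=15):
    """Evolve a rule from a single black cell; return the list of rows."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = ca_step(row, rule)
        rows.append(row)
    return rows

# Enumerate all 256 elementary rules and, for instance, count how many
# die out (go all-white) after a single step from one black cell.
dead = sum(1 for r in range(256) if not any(evolve(r, steps=1)[1]))
```

Printing or plotting the rows for each rule reproduces the kind of gallery the talk shows: most rules do something simple, a few do not.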


Well, we can see that lots of different things happen.

Most of it is pretty simple though.

But what's happening here—with this program that I call "rule 30"?


Let's run it a while longer.


Well, this is a remarkable thing.

Actually, I think it's the single most remarkable thing I've ever discovered.

It's a very simple rule—a very simple program. And we're just starting off from one black cell at the top.

Yet we're generating all this amazing stuff.

It's got some regularity. But a lot of it is complicated enough that it basically just looks completely random.

Well, that really blows up our usual intuition.

I mean, from doing engineering and so on, our experience is that to make something complicated requires having complicated rules, going to lots of effort, and so on.

But here when we look out into the computational universe, we're seeing something completely different, and completely unexpected, going on.

You know, I view this as my little analog of Galileo's discovery of 400 years ago—looking out into the astronomical universe, and seeing moons orbiting Jupiter.

Well, from that observation of his we can trace the development of a lot of modern science and engineering, and actually a lot of our modern ways of thinking.

So... what's going to emerge from rule 30?

There's science. There's technology. And then, I think, there are going to be a lot of other things.

Of course, for me at least, it takes years, if not decades, to understand the implications.

But let me talk about some of what I've figured out so far, and what I suspect.

You know, when one looks at rule 30, or at other cellular automata with simple rules, probably the first thing that comes to mind is stuff one sees in nature.

Well, I don't think that's a coincidence. And in fact in things like rule 30 I think we're seeing what is in effect the key secret of nature.

When we build things with engineering today, we're in effect restricting ourselves to particular kinds of programs whose behavior is simple enough to foresee.

But nature is under no such constraint. So it can in effect just go out into the computational universe and pick any program.

And what we've discovered from NKS is that it's a basic scientific fact that many of those programs are like rule 30—and show incredible complexity.

In the past, we might have assumed that to get so much more complexity than we usually create with engineering must require something vastly beyond human sophistication—maybe some kind of deity.

But nothing superhuman is needed, it's just little tiny programs in the computational universe.

 

Well, OK. So in NKS the first step is just to go and see what's out there in the computational universe.

It's like any kind of exploration—of animals or chemicals or whatever.

But then to make a science one wants to organize and classify what one sees.

And if one's lucky one can then formulate some general principle.

Well, in NKS we have such a principle. It's called the Principle of Computational Equivalence.

And roughly what it says is this.

Think about all the programs out there in the computational universe as doing computations—say taking certain input at the beginning, and generating certain output as they run.

Well, there are some programs that behave in obviously simple ways—and in a sense just do quite trivial computations.

But what the principle says is that beyond those, all other programs do computations that are exactly equivalent in their sophistication.

It could have been that if one made one's programs progressively more complicated, they'd always be able to do progressively more sophisticated computations.

But the Principle of Computational Equivalence says, no, that's not how things work.

Instead, after one passes some very low threshold in behavior, every program does computations that are exactly equivalent in their sophistication.

A century ago, when one thought about computation, one imagined having adding machines, and separately having multiplying machines, and so on.

But in the 1930s the big discovery of Alan Turing and so on was that in fact one didn't need all those different kinds of machines.

Instead, one could have a single, universal machine, which could be programmed to do any computation one wanted.

Well, within a couple of decades that abstract mathematical idea had spawned the software industry, and had started the whole computer revolution.

And today we're still chasing the idea, building all these sophisticated pieces of technology that are universal computers.

But now the Principle of Computational Equivalence tells us something much more extreme.

It says we don't need all that sophisticated technology. Even systems with very simple rules—and simple inputs—can do in a sense arbitrarily sophisticated computations.

And there are lots and lots of implications of that.

For a start, the principle gives us a prediction.

It says that we won't have to go far in the computational universe before we start seeing systems that are computation universal.

Well, we can test that prediction.

And for several kinds of systems we now know that it's indeed true.

In cellular automata this little rule is computation universal. And this is the simplest Turing machine that's universal.


So we don't need all that elaborate stuff that the history of our technology development has produced in order to make a universal computer.

These little rules are enough.

And of course that's significant when it comes to thinking about making computers—say out of molecules.

But something else it means is this: that it doesn't take us and our civilization, with all its sophistication, to make something that's computationally sophisticated.

It takes only a very simple system—of a kind that we can expect shows up all over nature.

OK. Well, what does this mean?

It has pretty big implications for science.

The exact sciences that have grown up since Galileo's time have always prided themselves on being able to make predictions.

But how does that really work?

Well, to make a prediction, we have to be able to somehow out-compute the system that we're trying to predict.

Well, for systems like idealized planets orbiting a star, that's always been possible.

We don't have to trace every point in each orbit; we can just have a little computation that jumps immediately to the answer.

In effect, we can computationally reduce the behavior of the system.

But will that always be possible?

The Principle of Computational Equivalence implies that it won't.

And in fact it implies that even among very simple programs in the computational universe, it's common to find computational irreducibility.

The exact sciences have always avoided systems that work like this.

But they're all over the place.

We've always implicitly assumed for our science that we as observers or predictors of systems are much more computationally sophisticated than the systems we're observing or predicting.

But the Principle of Computational Equivalence says that this isn't true.

And that instead we are just equivalent to the systems.

So that we can never expect to outrun them.

And to find out what they do we have no choice but to simulate each step in their behavior, or in effect just to watch how the behavior unfolds.

Well, this phenomenon of computational irreducibility turns out to be pretty important in thinking about our future.

Of course, it also tells us that it can be fundamentally hard to predict the future—because there'll be computational irreducibility, where you just have to wait and see it unfold.

But still, it's an inevitable feature that when computational irreducibility is present, there will always be an endless collection of pockets of reducibility—that allow at least some kinds of predictions.

 

OK. Well. Let's start off by talking about technology.

What will the future of technology be?

You know, almost all the technology we have today is created in a very incremental way.

We take what has existed historically, and we extend it.

And at each step we try to understand what the consequences will be.

Of course, in a sense it's this historical aspect to technology that makes it so easy to recognize most artifacts.

Just like biological systems—which are also usually easy to recognize—they have lots of components that have arisen through a long process of evolution.

But the question is: is this kind of step-by-step engineering the only way to create technology?

In a sense, exploring the computational universe of possible programs gives us a way to explore all possible mechanisms.

Some of the simpler ones we can readily recognize as being widely used in today's technology.

Repetitive processes. Branching and nesting. And a little more.

But there's a vast amount out in the computational universe that isn't at all like today's technology.

It's just programs doing things. Often they're complicated and sophisticated things.

That look very different from today's technology. Much more complex. More like many systems in nature.

But the question is: are these complicated and sophisticated things useful?

Can they be used to achieve human purposes?

Because, after all, that's what technology is all about: setting up systems that achieve human purposes.

Well, let's look at rule 30 again.


Is what it does useful?

Well, some parts of it most definitely are.

In fact, we've used rule 30 to produce great random numbers in Mathematica for the past 23 years.

Of course, I didn't construct rule 30—as a traditional engineer might—specifically to produce random numbers.

I just found the system—and then later noticed a use for it.
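The random-number idea can be sketched as follows: run rule 30 from a single black cell and read off the center column of the pattern. This is a toy illustration of the principle only, not Mathematica's actual implementation:

```python
def rule30_bits(n, width=None):
    """Generate n pseudorandom bits from the center column of rule 30.

    width defaults to 2*n + 1, wide enough that the cyclic boundary
    can't influence the center column within n steps.
    """
    width = width or (2 * n + 1)
    cells = [0] * width
    cells[width // 2] = 1          # single black cell at the top
    bits = []
    for _ in range(n):
        bits.append(cells[width // 2])
        cells = [(30 >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                         + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return bits

print(rule30_bits(6))  # → [1, 1, 0, 1, 1, 1]
```

Despite coming from a fixed deterministic rule, the bit stream passes standard statistical tests of randomness remarkably well.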

And actually, that's not so different from how some of the foundations of traditional technology came to be.

Out in the world, materials were discovered that happened to be magnetic, or act like liquid crystals, or something.

And for a while these properties just seemed like curiosities. But eventually it turned out they could be harnessed for things that are incredibly useful for human purposes.

And it's the same story in the computational universe—there are lots and lots of systems out there that do interesting things.

But the issue is to match up those behaviors with human purposes.

In traditional engineering, one starts with some purpose in mind, then explicitly tries to construct a system that achieves that purpose.

And typically at each step one insists on foreseeing what the system will do.

With the result that the system must always be quite computationally reducible.

But in the computational universe there are lots of systems that aren't computationally reducible.

So can we use these systems for technology?

The answer is absolutely yes.

Sometimes we look at the systems and realize that there's some purpose for which they can be used.

But more often, we first identify a purpose, and then start searching the computational universe for systems that can achieve that purpose.

Things like this have been done a little in traditional engineering—even, say, with Edison searching for his light-bulb filaments.

But it's vastly more efficient and streamlined in the computational universe.

Just abstractly enumerating trillions of programs, and testing them for particular features.
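As a small-scale sketch of such a search (my own toy example, confined to the 256 elementary rules rather than trillions of programs), one can enumerate rules and keep those whose center column passes a crude randomness test:

```python
def center_column(rule, steps):
    """Center column of a rule's evolution from a single black cell."""
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    col = []
    for _ in range(steps):
        col.append(cells[width // 2])
        cells = [(rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return col

def looks_random(col):
    """A crude feature test: roughly balanced, and not short-period repetitive."""
    ones = sum(col)
    if not 0.25 * len(col) <= ones <= 0.75 * len(col):
        return False
    # col[p:] == col[:-p] would mean the column repeats with period p.
    return not any(col[p:] == col[:-p] for p in range(1, 17))

candidates = [r for r in range(256) if looks_random(center_column(r, 64))]
```

Rule 30 is among the survivors; the point is that the search is purely mechanical, with no engineering insight into how any surviving rule works.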

Well, we've actually used this technique a lot. Finding simple programs that serve as algorithms for useful things.

Within Mathematica, for example, there are more and more algorithms that weren't explicitly constructed by humans—but instead were just found by searching the computational universe.

Algorithms not only for things like random number generation, but also for function evaluation, data manipulation, image processing, and lots of other things.

And Wolfram|Alpha would not have been possible without this kind of approach. Searching for algorithms for linguistic analysis, or data presentation.

It's interesting. Sometimes the algorithms one finds in the computational universe work in some obviously clever way that one hadn't thought of. But more often their behavior just looks very complicated—and no more human-understandable than lots of systems in nature.

It's worth realizing that by purely searching the computational universe one has the opportunity to be really surprised: the things one finds may have no connection with anything one's seen before.

One can also emulate biological evolution and natural selection: starting from one system, and then randomly changing it to try to make it better.

And sometimes that approach will lead to something unexpected.

But inevitably it's in a sense less "original" than pure searching—where there's no reason to be close to anything one's seen before.
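For contrast with pure enumeration, here is a toy sketch (my own, with an arbitrarily chosen fitness function) of the mutation-based approach, hill-climbing over the 256 elementary rules by flipping single bits of the rule number:

```python
import random

def center_ones(rule, steps=32):
    """Fraction of black cells in the center column of a rule's evolution."""
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    count = 0
    for _ in range(steps):
        count += cells[width // 2]
        cells = [(rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2
                           + cells[(i + 1) % width])) & 1
                 for i in range(width)]
    return count / steps

def score(rule):
    # Arbitrary fitness: reward center columns roughly half black, half white.
    return -abs(center_ones(rule) - 0.5)

def mutate_search(start_rule, iterations=200, seed=0):
    """Hill-climb by flipping one random bit of the rule at a time."""
    rng = random.Random(seed)
    rule, best = start_rule, score(start_rule)
    for _ in range(iterations):
        trial = rule ^ (1 << rng.randrange(8))   # flip one of the 8 rule bits
        s = score(trial)
        if s >= best:                            # keep mutations that don't hurt
            rule, best = trial, s
    return rule, best
```

Each step stays one bit-flip away from the current rule, which is exactly the "closeness to what one has seen before" that makes mutation less original than open-ended enumeration.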

Well, we've been very successful in "mining" the computational universe to find useful algorithms.

And my guess is that this kind of approach will be a central theme in the future of technology.

That it will not be about incremental construction, but rather about searching for useful systems.

Now, with traditional intuition, such an approach would seem crazy. Because one would expect that to find systems that could do interesting things, one would have to enumerate far too many possibilities.

But one of the discoveries of NKS—and implications of the Principle of Computational Equivalence—is that you don't have to go far at all in the computational universe to find immensely rich behavior.

So it becomes sensible to search for it.

Well, OK, today most of our technology has all this obvious visible connection to its historical development. Whether it's wheels and levers in mechanical systems, or simple loops and divide-and-conquer algorithms in computational systems.

In the future, that will be decreasingly true.

More and more of our technology will just be mined from the computational universe.

Known to achieve particular purposes. But not in ways that we as humans can readily understand.

In some sense, one might view the current stage of much of our technology as somehow "Galilean". Operating in a sort of clockwork way, like the Galilean solar system.

With mass production lines, and mass transportation running at periodic times.

In recent years, though, we sometimes see something else going on.

Particularly with technological systems—like computer networks—that are composed of many parts, each independently running some particular algorithm.

Such systems begin to resemble a cellular automaton.

In which individual elements may operate according to simple rules, but the system as a whole shows all sorts of complex behavior.

Typically at present that complex behavior—say in a network—is viewed as being something perhaps intellectually interesting, but not directly relevant to the purpose of the system.

But as whole algorithms come to just be found in the computational universe, it will increasingly be the case that the complex behavior systems show is intimately related to their purposes.

Here's an immediate example, more from art than technology.

One important feature of finding things in the computational universe is that some kinds of invention and creativity are automatic, and free.

So here's a little project we did a few years ago that finds original music—in a sense plucking it from the computational universe.


It's very important that there are rules—there's a logic—to each one of these musical pieces.

But it's also important that even though there are underlying rules, the overall musical form produced is rich and complex enough to be very interesting.

Each little program in the computational universe—in this case cellular automata—just does its thing.

But given a musical style, we are capturing programs that follow it.

And thereby achieving a human purpose of creating musical forms—for unique cellphone ringtones or whatever.

So, OK, there's a lot of possible technology out there in the computational universe.

What will make us use it?

The most obvious thing is that it achieves purposes we already have today in more efficient ways than they could be achieved before.

One can see that quite explicitly. One can, say, enumerate all possible Turing machines that achieve the purpose of performing a particular computation.

Some of the Turing machines will perform the computation in an easy-to-understand, pedestrian way.

But some of them may do so in some elaborate, seemingly quite random way.

And often it is these Turing machines that do the computation most efficiently.

There might be a program that we could construct step by step.

But by searching the computational universe, we can find a much more efficient program—that will typically happen not to be readily understandable by humans.

And in fact, as we try to make technology more and more efficient, it is inevitable that it will show more and more computational irreducibility—and therefore be less and less understandable.

But, OK, so in the computational universe we can find all this potential technology.

And almost any purpose we come up with, we'll be able to find some program—some mechanism—out there that can achieve that purpose for us.

Well, I guess the first question we can ask is: what programs have purposes?

If we just pick, say, a random cellular automaton out there, does it operate with a purpose, or not?

 

How do we tell if something has a purpose, anyway?

Well, even in everyday situations, it may not be terribly easy.

When we see some complicated object, what is functional about it? What is ornamental? What is ceremonial?

Think about Stonehenge. Or think about early handprint cave paintings.

It seems like the only way to tell if these things have purposes is to have some kind of historical, cultural connection to them.

I mean, if you find a flint from the Stone Age, and it's been chipped away, was it a Stone Age tool built for a purpose, or just some stone with random chippings on it?

And what if you see a splash of paint on a canvas? Is it meaningful, purposeful modern art? Or just a random splash of paint?

When we extend these questions about purpose to more abstract settings, we end up asking a lot about meaning.

Let's take mathematics for example.

There's a certain set of axioms that essentially all the mathematics that's done today is based on.

But they're certainly not the only possible axioms.

Out in the space of all possible axiom systems, there are infinitely many other ones.

So why do we pick the particular axiom systems we have for mathematics?

Are they somehow more meaningful? Are they "about something", when the other ones are not?

Well, one can do all sorts of analysis, trying to find ways in which the axiom systems we actually use for mathematics are special.

Well, I haven't found any.

And it's not even, for example, that these axiom systems are somehow relevant to the physical universe.

They just describe particular corners of it. But given NKS we've found lots of other parts, too.

And what I've concluded is that actually the mathematics we have today is really just a historical accident: the direct generalization of the arithmetic and geometry that happened to be used in ancient Babylon.

So it's just history that makes the particular axiom systems we're using seem meaningful to us.

 

Well, so, is there any criterion we might use to decide abstractly if a thing has a purpose?

Well, surprisingly enough, maybe there is.

Think about some cellular automaton or something.

There are two ways we can describe it.

One is by its mechanism—just by saying how its rules make it do what it does.

But another is by saying that it's achieving some particular purpose.

Well, sometimes it's a lot easier to describe things in terms of mechanism.

But sometimes it may be difficult to describe something in terms of mechanism, but easy to say what purpose it achieves.

Things fall to the ground, minimizing their potential energy, even though the precise mechanisms of their motion may be complicated.

OK, so here's a way we might imagine figuring out if something was created for a purpose or not.

We figure out a purpose that can be ascribed to its behavior.

And then we ask whether the construction of the thing is somehow minimal for achieving that purpose. Or whether instead the thing has all sorts of extraneous stuff that's irrelevant to this purpose.

Well, I think that's a good definition in principle. Actually, it turns out to be useful, for example in identifying interesting theorems derived from axiom systems in mathematics.

But, the embarrassing thing is that most of our fine technology today fails this test.

The things we build are far from minimal in achieving their purposes.

They're instead full of historical baggage that comes from the incremental character of traditional engineering.

And the incremental character of evolution makes it the same story in biology, too.

 

Well, OK, so maybe that test is too stringent. Perhaps there's another.

Remember our objective: we know that there are all sorts of possible mechanisms and processes out there in the computational universe.

We want to know which of them could end up being useful for technology in the future.

Which ones can have purposes for which they can be harnessed?

 

Well, there are all sorts of practical cases where one tries to identify if something was done on purpose or not.

Like you might want to detect fraud in some collection of numbers, or you might want to know whether a piece of wreckage in an ocean was part of a man-made object or not.

Well, in practice people usually look for some simple kind of regularity—like, say, straight edges in the wreckage.

Here's another case: let's say we're interested in extraterrestrial intelligence.

We want to know if some signal we detect was intentionally created by an extraterrestrial intelligence or not.

Well, I must say that my view of extraterrestrial intelligence has really been turned upside down by NKS, and by the Principle of Computational Equivalence.

Let's start with some definitions. Before we even get to intelligence, let's talk about the definition of life.

Well, it's usually pretty easy to tell if something we encounter on Earth is living or not.

I mean, it has all that shared history, with RNA and cell membranes and everything.

But what about a more abstract definition of life?

Well, the Greeks used to think that anything that moved itself must be alive.

But then there were steam engines and things.

The Victorians thought it was something about thermodynamics and the Life Force.

Then people thought it might be something about self reproduction.

But actually all these abstract definitions really don't work.

We can say that there's a necessary condition for life: that the system exhibits sophisticated computation.

But beyond that, there really doesn't seem to be any kind of abstract definition one can give.

The practical definition for us is based on history—and based on the actual historical properties of life on Earth.

Well, what about intelligence?

It's pretty much the same story.

It's clear it's difficult to identify, because we even have trouble very close at home. Understanding when babies exhibit what level of intelligent behavior, or whether something like a whale song is really showing intelligence.

And actually I just don't think there's an abstract definition of intelligence either.

The only thing that characterizes intelligence is a necessary condition: that a system is capable of sophisticated computation.

You know, we have expressions like "the weather has a mind of its own".

And what the Principle of Computational Equivalence says is that that's not so silly after all.

The fluid dynamics of the weather has just the same computational sophistication—and in a sense mind-like behavior—as a brain.

Well, this is where things start getting very confused with extraterrestrial intelligence.

One might have thought that to do sophisticated computation—say, to be able to generate the primes or something—one would need a whole elaborate civilization that built computers and so on.

But what NKS and the Principle of Computational Equivalence tell us is that no, that's not the case.

It's possible to get sophisticated computation even in systems with very simple construction. Like cellular automata. Or like many kinds of systems that are in effect just "lying around" in nature and in the universe.

There's an interesting story.

A century ago, Marconi was in the middle of the Atlantic on his yacht. And since he was in the radio business, his yacht had a radio mast.

And he discovered that even in the middle of the Atlantic he was hearing all this funny whooshing and clicking through his radio.

And his first guess was that he was hearing radio signals from the Martians.

It seemed like complex, purposeful signals.

Well, as we now know, it wasn't the Martians. Actually, it was just the physics of the ionosphere.

But it was hard to tell that there wasn't a mind—an extraterrestrial mind—behind all this complex behavior.

Well, OK. Pretty much the same thing happened with pulsars. But now it was the regularity of the pulses that made people think they must be intentional.

Of course, one can flip things around, and ask how the extraterrestrials might figure out that there was intelligence here on Earth.

Again, it's not easy.

Looking at the Earth from space, for example, there's surprisingly little.

Astronauts have told me that the most obvious things are a 25-mile-long causeway in the Great Salt Lake in Utah that separates bodies of water with two different algae colors—and happens to be perfectly straight.

And a perfect circle around a volcano in New Zealand—that I suppose might come from the physics of the volcano—but actually is the border of a national park that doesn't allow sheep to graze inside.

Thin stuff, to be sure. And, yes, you can think of lights at night. Though compare those with lightning strike patterns.

Or maybe you can think of radio and cellphone transmissions. Those have historically had lots of regularity—but actually they're beginning to approach the technology optimization point I mentioned before—doing more compression and so on, and being spread spectrum, and starting just to look random.

Well, let's say we wanted to make a beacon of intelligence for ourselves.

Gauss suggested cutting down trees in the Siberian forests to represent the standard picture for proving Pythagoras's Theorem.

And you might think: let's send solutions to difficult computational problems. Or send a representation of things we've figured out about how the universe works.

Well, it's an interesting exercise. You can go through all these things. I've never found one that doesn't unravel.

Imagine you're a really advanced intelligence. And you know how to move stars around.

What pattern would you move them around in to make the best "intelligence ad" for yourself?

If you make them too geometrical—like a triangle—there are natural phenomena, like Lagrange points, that can do that too.

Maybe you should make them somehow artistic and beautiful—just like those pictures of nebulas!

Well, in the end I think one's just going to realize that there's no abstract notion of intelligence, extraterrestrial or otherwise.

And the thing we're really talking about when we talk about "intelligence" is human-like intelligence.

As for life, intelligence is not something absolute and abstractly definable. It's something in a sense historical—defined by its connection to a thread of history.

 

OK, so what does all this mean for the future of us humans?

Where will our technology go?

So far we've mostly been on a path of incremental change, much like biological evolution.

But when we start fully exploring the whole computational universe, there'll be a vast explosion of possibilities.

Much bigger, for example, than something like the Cambrian explosion in biology, which still had to handle survival of each intermediate form.

But even though there's a vast universe of possibilities out there for technology, the big issue for us, I think, is what parts of it we'll choose to pursue.

What will be the future of our historical path through the space of all possible technology?

In the past we were largely limited by what technology we could conceivably reach.

In the future, I think we'll be much more limited by what we choose to consider useful as technology.

What will limit us is not the possible evolution of technology, but the evolution of human purposes.

 

The end point could be quite bizarre.

Let's imagine that we can make our existence entirely digital—entirely computational.

And that we can implement all that computation at the level of individual atoms and electrons and so on.

Then all of our existence is defined by the elaborate motions of various atoms and electrons in some lump of material.

Well, here's the issue.

The Principle of Computational Equivalence tells us that at a fundamental level these processes are no more computationally sophisticated than lots of other processes that happen with atoms and electrons and so on.

There's no abstract computational way to distinguish the lump of material that has us encoded in its behavior from one that's just an ordinary lump of material, doing its ordinary physics.

It's sort of humbling. After all that development of our civilization, and of our technology, we end up being patterns of behavior that are not fundamentally different from patterns of behavior produced by physics all over the universe.

It feels like we just went in a cycle, winding up after all our achievements back as a simple part of nature.

Well, in some sense that's true. But in some sense it's not.

Because even if we can't make a sharp abstract distinction between us and nature, there is certainly a historical distinction.

The lump of material that is our future is tied by a thread of history to our present and past.

There may be nothing abstractly special about it. But its details are special—at least to us—because it's all about us.

Again we think about the extraterrestrials.

Imagine we find this lump of material that encapsulates some very advanced extraterrestrial civilization.

The Principle of Computational Equivalence says it'll just be doing computations that are pretty much like any other lump of material.

Nothing special in an abstract sense. Just special because of its particulars—its specific connection to the history of the extraterrestrial civilization.

I might say that even though the Principle of Computational Equivalence appears here as something of a downer—telling us we can't be abstractly special—it tells us something satisfying, too.

It tells us that there's computational irreducibility—and that's what in a sense makes history, including our history, meaningful.

I mean, suppose our history—the history of our civilization—was computationally reducible. Then one would always be able to jump ahead and see the results of history. There would be nothing necessary about all the effort we go through to actually live our history.

But computational irreducibility in a sense implies that there is a concrete achievement to that history—one needs to go through all that history to find its outcome.

 

There's something else going on here, too.

When we talk about intelligence, we really right now only have one example of it: human intelligence.

So the first question is: can we generalize at all away from that?

Can we create artificial intelligence, for example?

Well, that of course depends on one's definition.

We have to know what counts as intelligence.

Now in the early days of studying artificial intelligence—say 50 years ago—people kept on identifying particular features of our human intelligence, and saying, "When you can do that, we'll know you've achieved artificial intelligence."

Sometimes it was playing chess. Doing symbolic integrals. Recognizing objects. Navigating around.

Well, one by one each of these tests has been passed.

You know, Mathematica does symbolic integrals, for example, better than any human.

Of course, it doesn't use human methods. It just blasts through to the answer using fancy math.

And that means, for example, that when Wolfram|Alpha shows the steps for doing an integral it's really a fake: it's something that was reverse engineered from the answer, and specifically set up to be human understandable.

You know, it's interesting how Wolfram|Alpha relates to artificial intelligence.

It does pretty well on answering factual questions like the ones that were fed to computers and robots in old science fiction.

But it produces those answers in a very non-human way.

Like let's say you ask it to solve a physics problem.

Well, thinking in sort of classic AI style you might think: it should solve the problem by human-like reasoning, by figuring out how this object affects that object in this way, and so on.

But that's kind of medieval. It's kind of like old-fashioned natural philosophy.

But 300 years ago we had Newton and friends, who brought mathematics into natural philosophy.

And gave us ways to in a sense use math to blast through to the answer to physics problems, without any human-like reasoning.

It's funny. Inside Wolfram|Alpha it's doing all this sophisticated, intelligent-like computation.

Computation that can compute all kinds of things.

A bit like all those cellular automata out there in the computational universe.

But the question is: can that computation be tied to human purposes?

Well, in Wolfram|Alpha that turns into the practical issue: can we understand those strange utterances that humans type into the input field?

Can we map that approximation to natural language into particular instances of all the possible computations that we can do?

And actually, we can tell that we humans are really holding it back.

Our language expresses only a tiny subset of all possible computations.

We'd do much better if we were building up arbitrary programs in the Mathematica language.

But in ordinary human everyday language, we end up mainly describing quite computationally reducible processes.

And in effect, only being able to tell the engine inside Wolfram|Alpha to do a tiny fraction of all the computation it's capable of.

But from the point of view of human-like intelligence, it's in a sense already doing too much.

It'd surely fail a Turing test—it knows far too much, and can figure far too much out.

 

So in general when we talk about creating artificial intelligence, we're not talking about achieving some amazing abstract endpoint of computational ability.

We're talking about getting computations set up that are really human-like. That mimic our detailed human features.

We don't want to give our computers their heads; we need to keep them reined in if they're going to show human-like intelligence.

Well, of course, there are all these arguments about how our computers can't really be intelligent like us. Because they don't have self awareness, or emotions, or creativity.

Well, in the past, it might have been thought that they were somehow too simple, too logical, in their behavior. That they could somehow do "only what they were programmed to do".

Well, our everyday experience of computers makes it quite clear that that isn't so. Computers are continually doing rich and unexpected—if not always desirable—things.

We can't foresee their bugs or their behavior a lot of the time.

And in a sense this is just a sign of the general phenomenon of computational irreducibility.

Indeed, computational irreducibility is what allows one to go from underlying deterministic rules—even simple rules—to arbitrarily rich, complex, and unpredictable behavior.

And among other things, I think it's at the center of the phenomenon of free will.

That there can be a kind of irreducible distance between underlying deterministic laws, and overall observed behavior.

So that the overall behavior can appear free of the determinism of the underlying laws.

Let me not get sidetracked onto this now. But suffice it to say that I think there is perfectly good free will in lots of computational systems; it's certainly not a special human feature.
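One can see this in miniature with rule 30, the cellular automaton at the heart of NKS. Here's a minimal sketch in Python (my illustration, not anything from the talk itself) of how a trivially simple deterministic rule generates behavior with no apparent shortcut:

```python
# Rule 30: each cell's next state is left XOR (center OR right).
# A minimal illustration of computational irreducibility: the rule is
# trivially simple, yet the pattern it produces looks effectively random.

def step(cells):
    """Advance one row of 0/1 cells by one rule 30 step (the pattern widens by 2)."""
    padded = [0, 0] + cells + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

row = [1]  # start from a single black cell
for _ in range(10):
    print("".join("#" if c else "." for c in row).center(24))
    row = step(row)
```

Run it for a few hundred steps and the center column behaves like a random sequence, even though every row follows from the one above it by the same fixed rule. There is no known way to jump ahead to row a million; you just have to compute it.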

Well, OK. My guess is that all these "special human features" that people have identified will gradually melt away as practical technology advances, just as they have done for the past several decades.

Like creativity, for example. One might say, "A computer will never invent anything."

Well, from my point of view, that's really silly. By mining the computational universe we've had computers invent lots and lots of things that we use every day—and that, in Wolfram|Alpha for example, millions of other people use every day too.

And, you know, our WolframTones project is a pretty good example of computers being quite creative, in an artistic sense.

They're not composing long pieces of music with all sorts of cultural stories associated with them.

But when it comes to short pieces of music, they're doing remarkably well. At least sufficiently well that rather a lot of good human composers seem to come to the WolframTones site to get "creative inspiration"—from a computer.

And actually, I'm guessing that if one did a short-piece-of-music Turing test right now, WolframTones would do pretty well relative to the humans.

 

Well, OK, so I think it's really not hard to achieve the abstract attributes that one might associate with artificial intelligence.

But to create artificial intelligence that'll pass the Turing test—and take over customer service, and all that—the hard work that has to be done is on encoding all the human-like qualities.

In a sense carving out of the space of all possible computations those that we humans, in our current state, care about.

 

OK. So let's get back to talking about our future.

There's a vast space out there of possible things we can harness for technology, and that we can in a sense bring into our lives.

But what we'll choose to do with it is really a question of the evolution of our purposes.

I mean, if we look at our existence today, we do many, many things whose purpose would have seemed completely mysterious in the past—even the quite recent past.

Things like walking on a treadmill—or doing all sorts of social networking things.

So how does purpose evolve?

At least so far, there are plenty of areas where it really doesn't seem to evolve very much.

Areas particularly that are tied to our biological nature.

But what about areas that are tied more to our cultural or intellectual development?

There is a question of where there is really progress—perhaps inexorable progress—and where not.

In philosophy, for example, we are still discussing many of the same questions as in antiquity.

But in mathematics, for example, we have laid down layer after layer of results, inexorably building more and more—and progressively evolving what in the space of all possible mathematics seems to us interesting or purposeful.

What about art? Is there progress? It certainly does not seem inexorable, save perhaps for the fact that as time goes on, more and more corners of the space of possibilities are explored, and thereby develop a historical context.

But one area where we can probably say there has been inexorable progress is technology.

As our civilization has progressed, we have gradually built out in the space of all possible technological systems.

And while we may soon routinely mine the computational universe essentially at random to find our technological systems, the same probably cannot happen for the evolution of our purposes.

Because the whole point is that purpose is defined only with respect to a thread—and in effect a sequential thread—of history.

 

I must say that I find it very interesting to imagine our purposes in the future.

But it is difficult. Because we only tend to understand the next possible purpose when we get to live within the environment created by the previous one.

In my own life I have been lucky enough to create a few new scientific and technological paradigms.

And each time it takes me years, given a particular paradigm, to see what now becomes interesting—and purposeful—to do.

Without the paradigm of NKS, for example, it would have seemed pointless to study all these cellular automata with their complicated—and effectively incomprehensible—behavior.

But NKS gives us a context for this—and in effect defines new purposes.

 

It is interesting to see how the course of history has affected human purposes.

Often they have been dominated by religions. And often they have also in effect been dominated by fluctuations: by the emergence, for example, of a particular leader who has defined some purpose that is widely followed.

One can think about the space of all possible purposes, and about how different peoples' purposes and motivations are distributed across that space.

There is, as one knows from practical experience, plenty of diversity. There are those who want to be rock stars, those who want to live quiet considered lives, those who want the richest social connections, and those who just want to feel happy. People scattered all over a psychological map of motivations.

Perhaps natural selection has had something to do with shaping this diversity. It's like immunological diversity: a good thing in preventing something bad from affecting too large a fraction of the population. A good way to add robustness.

In our fairly recent history, the globalization of culture has perhaps acted in the direction of reducing the diversity of human purposes. But globalization and the resulting economies of scale have also served to lower the cost of diversity, tending to allow diversity to increase.

It's interesting to think about the limits to the diversity of purpose.

If we get too far afield from our historical purposes, we will not even recognize purpose in something.

One good test case that I suspect will soon be accessible is animals.

What purposes do animals have? Why do they do what they do?

Sometimes we can easily identify their purposes. Look for food. Find a mate. But when it comes to finer issues, it can be very difficult.

It does not help that their worldviews may be very different from ours.

Even their sensory experience of the world—which might be through smell not sight, for example—may be very different.

You know, ages ago I was once challenged to quickly think of an invention—any invention.

And the one that came out first was what one would now describe as "apps for pets".

What is it that a cat or a mouse would find interesting to do—or play with—on a computer?

You might say: it'll have to be something very directly connected to its "instincts"—like about finding food.

But we know there's more abstraction than that. Like with a cat with a ball of wool.

Well, I'm wondering what the interesting app for a cat on an iPad will be. Or for a whole cat community.

What kind of purposes will emerge for cats? What will their aesthetic be? If we're building the apps, they'll probably start close to human purposes.

But from cat usability testing or whatever, they'll perhaps evolve into something different. A set of purposes that come from different history and different biological constraints than our human ones.

You know, I've really been meaning to try this as an experiment. And I really should.

But—as my children point out with great amusement—I happen to be somewhat allergic to pet fur, so I'd be doing the very inadvisable thing of starting a business where I'm allergic to my customers...

 

OK, anyway. Back to the evolution of purpose for humans.

There is of course one common feature of humans that defines some aspects of purpose: our mortality—the finiteness of our lives.

So let's talk a bit about that in the context of the computational world view.

You know, I view our bodies as working a bit like operating systems.

There are all these different pieces—drivers for this and that, and so on.

And over the course of our lives, we have to react to all sorts of external stimuli.

Well, every so often something goes wrong. Either because of a program bug—a genetic bug.

Or because of something to do with a stimulus. That, say, causes a module to blow up, like a driver in an operating system.

Well, gradually all kinds of gunk builds up in the state of the operating system.

And eventually for one reason or another, it crashes—it dies.

But of course we can just reboot it, just as we could start the next generation of humans from the same DNA.

It's fun to think a bit about diseases of operating systems.

To imagine a medical-like classification system for them. Diseases of the display subsystem. Diseases of the memory manager.

There'll be some hierarchy to them. Some fairly distinct diseases.

But inevitably there'll just be cases where there's some irreducible sequence of computations that ends up with something bad.

It might even be formally undecidable whether something ever halts—or whether, say, something like a tumor grows forever, eventually overrunning the resources of the computer.
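A toy illustration of that kind of irreducible, halting-style question, using the well-known Collatz iteration rather than anything from the talk itself: a process simple enough to state in one line, whose long-run behavior nobody has been able to shortcut.

```python
# The Collatz iteration: halve even numbers, map odd n to 3n + 1.
# Whether every starting value eventually reaches 1 is a famous open
# question: a simple deterministic process whose long-term behavior
# we can only find out by running it.

def collatz_steps(n):
    """Count iterations until n reaches 1 (assuming it ever does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # adjacent starting values take wildly different times
```

Starting values that sit next to each other can take wildly different numbers of steps: 27, for instance, takes over a hundred, while 28 takes far fewer. And nobody has proved that every starting value reaches 1 at all.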

Well, at least from my point of view, it'd be really nice to be able to find ways to extend the lifespan before the system crashes.

And in lots of approaches to medicine we can see analogies in the operating system case.

But I'm afraid computational irreducibility is our big enemy. And ultimately the only way to overcome it will be to have systems that outcompute what is happening to us biologically.

It's like Maxwell's demon trying to reverse the increase of entropy in thermodynamics.

It takes a lot of computation. And in an idealized model, it's infeasibly much.

But I'm somewhat hopeful that there's in a sense enough computational reducibility in our biological operation that we'll be able to create systems that outrun it.

Perhaps it'll start with drugs that effectively perform algorithms. Perhaps with "offline" analysis of genomes.

But perhaps it'll take general molecular construction—being able to create a molecule with any structure and behavior. Which is, by the way, something I think NKS can take us in the direction of doing.

Well, again, it's a strange picture in the future. We have our biological selves. And we're supplementing them by this very particular set of computational processes that are specifically molded around our biological structure—and keeping it running.

You know, from a really practical point of view, I'm guessing the first dramatic thing that's going to happen is successful cryonics.

It's a bit like cloning. For years I remember asking people about cloning mammals, and people gave all kinds of arguments about why it couldn't be done.

But then Dolly the sheep arrived—the result of discovering a weird, weird process.

Well, I'm strongly guessing it'll be the same with cryonics. That there's some weird process that'll make it just work.

It's a shame more people don't take that field more seriously. There's a breakthrough out there to be had.

And when it happens, it'll certainly change our view of time—and probably have some bizarre effects on our view of human purposes.

 

Well, OK. So how will things evolve?

Let's come back to our friends the extraterrestrials again.

We've realized that intelligence isn't nearly as special as we thought; it's sort of out there all over the universe.

But what about something human-like? Isn't the universe big enough that somewhere out there human-like intelligence will have arisen?

I think it depends how far from human-like you're willing to allow. There's certainly all sorts of elaborate development of, say, the geology of a planet. It almost feels like the development of a civilization out there.

But, OK, it's not very human-like.

But we have this one funny data point. At least based on our current human characteristics, we'd expect that our technology would advance to the point where we can tool around the universe—and colonize it.

So why haven't we met any extraterrestrials? It seems like it'd take only one instance to eventually fill the whole universe.

Well, maybe there just aren't ones close enough to human-like. Or maybe we've got something wrong in our expectations for the future.

Perhaps exploration just isn't terribly popular. I mean, there are lots of parts of the Earth—like the bottoms of the oceans—where we could in principle go, but we usually don't bother.

But unless something kills off diversity in purposes, even if there wasn't a giant cultural push towards exploration, one might still expect one solitary extraterrestrial to decide to do it. And that'd be enough.

Of course, what really is the point? Let's assume we know the fundamental theory of physics; we know the program for our universe.

Well, there's computational irreducibility, so we can't make general predictions from it. But we can certainly use it to systematically search for possible technology that we can implement in our universe, and so on.

But in a sense we don't need physical exploration—starships and everything—to find it.

We just need to be running computations.

Well, you might think surely it'd be good to do a giant, bizarrely modified version of SETI@home all over the universe. But, you know, there are a lot more orders of magnitude that can be achieved by making things smaller than by going out and co-opting other planets to turn into computers and so on.

 

Well, OK, so we have a sort of strange view of the limiting future.

We're reduced to computations. But computations that in some absolute sense are nothing special; they're just as sophisticated or unsophisticated as lots of other computations happening around the universe.

But what's special about these computations is that they have evolved from us—with our various special features and purposes.

How will those purposes evolve? Perhaps they will in effect dissipate—and it will in a sense be the end of meaningful history.

But I have a slight—perhaps self-serving—guess.

That when our current constraints are all removed, our future selves will indeed have a difficult time knowing which of all possible purposes to pursue.

But that one of the most important guides will be to look at history. To look back at a time when there were constraints—like mortality and scarce resources—that pruned out possible purposes.

And perhaps there will be a desire to go back as far as possible—to understand the origins of purposes.

But one will need data—as much as possible—on what actually happened.

So here's the funny thing: our times, these years, are the first times in history when a decent fraction of everything that happens is recorded.

And that will only increase over the next few years.

So from the future, as one tries to analyze history and purposes, one will potentially land right on our times in these years.

So that it'll be our activities and purposes in these years that define the purposes for our whole future.

I don't know if that's actually how things will work. It's perhaps satisfying to think so. Though it's a big responsibility.

To think that our efforts at this time in history might not just be stepping stones to the future, but actually define all of it.

In effect, pulling from the computational universe that part which defines the future essence of the human condition.

 

Well, I think I should wrap up.

I hope you found this interesting, and that it didn't get too abstract.

I find all this fun. But I also like to think more seriously about how it relates to things I actually do.

And for example to my own life projects.

Well, obviously NKS is trying to tell us about everything that's out there, independent of our human condition—and giving us a paradigm to think about it all.

And Wolfram|Alpha is trying to capture the computable knowledge of our civilization—the stuff that in a sense defines what's special about the human condition.

And Mathematica is in a sense the bridge between these two—a language that makes raw, formal, precise computation accessible to us humans.

I have my next big project picked out: trying to find the fundamental theory of physics.

But if I get the chance to do more projects, it's this kind of thinking about the future that's going to determine what they are.

It's always fun at that moment when all the abstraction condenses into something very definite—and turns into something that helps us concretely define the future.

I hope I'll be back in a few decades to talk more about what happened.

Well, I should stop now. I'd be happy to discuss both abstract and concrete things.

Thanks very much.