(Dedicated to Gregory Chaitin on the occasion of his sixtieth birthday, these remarks attempt to capture some of the kinds of topics we have discussed over the course of many enjoyable hours and days during the past twenty-five years.)
The spectacular growth of human knowledge is perhaps the single greatest achievement in the history of civilization. But will we ever know everything?
Three centuries ago, Gottfried Leibniz had a plan. In the tradition of Aristotle, but informed by two more millennia of mathematical development, he wanted to collect and codify all human knowledge, then formalize it so everything one could ever want to know could be derived by essentially mathematical means.
He even imagined that by forming all possible combinations of statements, one could systematically generate all possible knowledge.
So what happened to this plan? And will it, or anything like it, ever be achieved?
Two major things went wrong with Leibniz's original idea.
The first was that human knowledge turned out to be a lot more difficult to formalize than he expected.
And the second was that it became clear that reducing things to mathematics wasn't enough.
Gödel's Theorem, in particular, got in the way, and showed that even if one could formulate something in terms of mathematics, there might still be no procedure for figuring out whether it was true.
Of course, some things have gone better than Leibniz might have imagined. A notable example is that it's become clear that all forms of information--not just words--can be encoded in a uniform digital way.
And--we think--all processes, either constructed or occurring in nature, can be encoded as computations.
Science has gone OK since Leibniz's time, but in some ways not great.
Descartes had thought that within a hundred years of his time there'd be a complete theory of our universe, from which everything we might want to know could be calculated.
And in some areas--especially the traditional physical sciences--there's been excellent progress. And we've been able to achieve immense amounts in engineering and technology on the basis of that progress.
But in other areas--notably the biological and social sciences--there is still rather little that we can calculate.
And even in physics, we of course don't have an ultimate theory of our universe.
What about mathematics?
In some ways it's hard to assess progress.
But I think we'd have to say that it's been mixed.
Some of the widely discussed mathematical problems of Leibniz's day have been firmly solved.
But plenty have not. Just like the Pythagoreans, we still don't know whether a perfect number can be odd, for example.
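The even case, at least, is well understood: by the Euclid-Euler theorem, the even perfect numbers are exactly the numbers 2^(p-1)(2^p - 1) in which 2^p - 1 is prime. A minimal Python sketch of that correspondence (illustrative only; not part of the original essay):

```python
def is_perfect(n):
    # A number is perfect when it equals the sum of its proper divisors.
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

def even_perfect(limit):
    # Generate candidates of the Euclid-Euler form 2**(p-1) * (2**p - 1);
    # the candidate is perfect exactly when 2**p - 1 is prime.
    found = []
    p = 2
    while True:
        candidate = 2 ** (p - 1) * (2 ** p - 1)
        if candidate > limit:
            return found
        if is_perfect(candidate):
            found.append(candidate)
        p += 1

print(even_perfect(10000))  # -> [6, 28, 496, 8128]
```

Whether any odd number could ever pass the `is_perfect` test is, after two and a half millennia, still unknown.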
So what happened with science and mathematics? Why did they turn out to be difficult?
Did we just not have enough clever ideas? Or put enough effort into them?
I don't think so. I think there's a fundamental problem--a fundamental barrier to knowledge.
One can think about it as all being related to what I call computational irreducibility.
And it depends critically on what is probably the greatest intellectual advance of the past century: the notion of universal computation.
Here's the point of computational irreducibility. Imagine a system that evolves in a certain way.
Now ask the question: can we work out what the system will do by spending less computational effort than the system itself expends?
Can we find a shortcut that will determine what will happen in the system without having to follow all the steps that the system itself goes through?
The great triumphs of the traditional exact sciences have essentially all been based on being able to do this.
To work out where an idealized Earth orbiting an idealized Sun will be a million years from now, we don't have to trace the Earth around a million orbits: we just have to plug a number into a formula and immediately get a result.
But the question is: will this kind of approach always work?
Look at the second picture below.
Is there a way to shortcut what is happening here, to find the outcome without explicitly following each step?
In the first picture above, it's obvious that there is.
But in the second picture, I don't think there is.
I think what is happening here is a fundamentally computationally irreducible process.
If one traces each step explicitly, there is no problem working out what will happen.
But the point is that there is no general shortcut: no way to find the outcome without doing essentially as much work as the system itself.
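The contrast can be made concrete with a small sketch (in Python here, purely for illustration). For a computationally reducible system, a closed-form shortcut gives the state after any number of steps in constant time; for the rule 30 cellular automaton--a standard example of apparently irreducible behavior--no such shortcut is known, and one simply runs every step:

```python
def reducible_state(x0, n):
    # A reducible system: each step adds 3 to the state.
    # The state after n steps has a closed form -- one multiplication,
    # however large n is, instead of n explicit steps.
    return x0 + 3 * n

def rule30_center_column(steps):
    # Rule 30 from a single black cell, on an infinite tape represented
    # as a dict of nonzero cells. For the center-column bits no shortcut
    # comparable to the formula above is known.
    cells = {0: 1}
    column = [1]
    for _ in range(steps):
        lo, hi = min(cells) - 1, max(cells) + 1
        new = {}
        for i in range(lo, hi + 1):
            l, c, r = cells.get(i - 1, 0), cells.get(i, 0), cells.get(i + 1, 0)
            # Rule 30: new cell = left XOR (center OR right)
            if l ^ (c | r):
                new[i] = 1
        cells = new
        column.append(cells.get(0, 0))
    return column

print(rule30_center_column(7))  # -> [1, 1, 0, 1, 1, 1, 0, 0]
```

The first function answers "where will it be after a million steps?" instantly; for the second, as far as anyone knows, there is nothing to do but run the million steps.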
How can this be?
We might have thought that as our methods of mathematics and science got better, we would always be able to do progressively better.
But in a sense what that would mean is that we, as computational systems, must always be able to become progressively more powerful.
And this is where universal computation comes in.
Because what it shows is that there is an upper limit to computational capability: once one is a universal computer, one can't go any further. Because as a universal computer, one can already emulate anything any other system can do.
So if the system one's looking at is a universal computer, it's inevitable that one can't find a shortcut for what it does.
But the question for science--and for knowledge in general--is how often the systems one's looking at are universal, and really behave in computationally sophisticated ways.
The traditional successes of the exact sciences are about cases where the systems one's looking at are computationally quite simple.
And that's precisely why traditional science has been able to do what it does with them.
They're computationally reducible--and so what the science has done is to find reductions. Find things like exact formulas that give the outcome without working through the steps.
But the reason I think science hasn't been able to make more progress is precisely because there are lots of systems that aren't computationally reducible.
There are lots of systems that can--and do--perform sophisticated computations. And that are universal. And that are just as computationally sophisticated as any of the methods we're able to use to analyze them.
So that they inevitably seem to us "complex"--and we can't work out what they will do except with an irreducible amount of computational work.
It's a very interesting question of basic science just how ubiquitous computational irreducibility really is.
It's a little confusing for us, because we're so used to concentrating on cases that happen to be computationally reducible.
Most of our existing engineering is built on systems that happen to behave in computationally reducible ways--so that we can readily work out what they'll do.
Biological evolution, as well, tends to have an easier time dealing with computationally reducible systems.
And we as humans inevitably tend to notice those aspects of systems that are computationally reducible--because that is what our powers of perception allow us to recognize.
But it is possible to do what amounts to a more unbiased study.
The basic idea is just to look at all possible simple computational systems--say all possible small programs of a certain form. In effect, to do a new kind of empirical science, and to look out into the computational universe, and see what's there.
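A toy version of such a survey can be sketched in Python (the 256 elementary cellular automaton rules stand in here for "all possible small programs"; counting distinct configurations is just a crude, assumed proxy for behavioral richness):

```python
def step(cells, rule, width):
    # One step of an elementary cellular automaton on a ring of cells.
    # Each neighborhood (l, c, r) indexes a bit of the 8-bit rule number.
    return tuple(
        (rule >> ((cells[(i - 1) % width] << 2) |
                  (cells[i] << 1) |
                  cells[(i + 1) % width])) & 1
        for i in range(width)
    )

def survey(steps=20, width=41):
    # Run every one of the 256 rules from a single black cell and tally
    # how many distinct configurations each produces.
    results = {}
    for rule in range(256):
        cells = tuple(1 if i == width // 2 else 0 for i in range(width))
        seen = {cells}
        for _ in range(steps):
            cells = step(cells, rule, width)
            seen.add(cells)
        results[rule] = len(seen)
    return results
```

Even this tiny census separates rules that immediately die out or lock into repetition (rule 0 produces just two distinct configurations) from ones like rule 30 that produce a fresh configuration at every single step.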
Well, this is something I have spent a great deal of time doing. And one of the big things I've concluded is that in fact computational sophistication--and computational irreducibility--is quite ubiquitous.
Indeed, I have formulated what I call the Principle of Computational Equivalence, which in effect says that almost any time one sees behavior that does not look obviously simple, it will turn out to be of equivalent computational sophistication.
And what this means is that in a sense almost everywhere outside the places where the exact sciences have already been able to make progress, there will be fundamental limits to progress.
Certainly progress is not impossible. In fact, as a matter of principle, there must always be an infinite hierarchy of pockets of reducibility--an endless frontier for traditional science.
But there will also be lots of computational irreducibility.
Still, computational irreducibility certainly does not prevent science from being done. It just says that the expectations for what can be achieved should be different.
It puts pressure on having the simplest possible underlying models. Because it says that one has no choice but in effect just to follow every step in their behavior.
As a practical matter, though, it's often perfectly possible to do that: to find by simulation what a system will do.
And that's something that generates a lot of very useful knowledge. Indeed, it's becoming an increasingly critical element in all sorts of technology and other applications of science.
So what about mathematics?
In the abstract, it's not even obvious why mathematics should be hard at all.
We've known for a hundred years that the axioms on which all our current mathematics is based are quite simple to state.
So we might have thought--as most mathematicians did in the early twentieth century--that it'd just be a question of setting up the appropriate machinery, and then we'd systematically be able to answer any question we might pose in mathematics.
But then along came Gödel's Theorem. Which showed that there exist at least some questions that can be formulated in mathematical terms, but can never be answered from its axioms.
But while Gödel's Theorem had a big effect on thinking about the foundations of mathematics, I think it's fair to say that it's had almost no effect on the actual practice of mathematics.
And in a sense this isn't surprising. Because the actual question that Gödel talked about in proving his theorem is a weird logic-type thing. It isn't a kind of question that ordinary mathematicians would ever naturally ask.
But the real heart of Gödel's Theorem is the proof that standard mathematical axiom systems are computation universal.
And a consequence of this is that mathematics can in effect show computational irreducibility.
Which is, at a fundamental level, why it can be hard to do.
And also why there can be questions in it that are formally undecidable from its axioms.
But the question is: how common is something like undecidability?
Practicing mathematicians always tend to assume it's rare.
But is that really true? Or is it just that the mathematics that gets done is mathematics that avoids undecidability?
I firmly believe it's the latter.
It's often imagined that mathematics somehow covers all arbitrary abstract systems.
But that's simply not true. And this becomes very obvious when one starts investigating the whole computational universe.
Just like one can enumerate possible programs, one can also enumerate "possible mathematicses": possible axiom systems that might be used to define mathematics.
And if one does that, one finds lots and lots of axiom systems that seem just as rich as anything in our standard mathematics.
But they're different. They're alternative mathematicses.
Now, in that space of "possible mathematicses" we can find our ordinary mathematics.
Logic--Boolean algebra--turns out for example to be about the 50,000th "possible mathematics" that we reach.
But this kind of "sighting" makes it very clear that what we call mathematics today is not some absolute thing.
It's just a particular formal system that arose historically from the arithmetic and geometry of ancient Babylon. And that happens to have grown into one of the great cultural artifacts of our civilization.
And even within our standard mathematics, there is something else that is going on: the questions that get asked in a sense always tend to keep to the region of computational reducibility.
Partly it has to do with the way generalization is done in mathematics.
The traditional methodology of mathematics puts theorems at the center of things. So when it comes to working out how to broaden mathematics, what tends to be done is to ask what broader class of things still satisfy some particular favorite theorem.
So that's how one goes from integers to real numbers, complex numbers, matrices, quaternions, and so on.
But inevitably it's the kind of generalization that still lets theorems be proved.
And it's not reaching anything like all the kinds of questions that could be asked--or that one would find just by systematically enumerating possible questions.
One knows that there are lots of famous unsolved problems in mathematics.
Particularly in areas like number theory, where it's a bit easier to formulate possible questions.
But somehow there's always been optimism that as the centuries go by, more and more of the unsolved problems will triumphantly be solved.
I doubt it. I actually suspect that we're fairly close to the edge of what's possible in mathematics. And that quite close at hand--and already in the current inventory of unsolved problems--are plenty of undecidable questions.
Mathematics has tended to be rather like engineering: one constructs things only when one can foresee how they will work.
But that doesn't mean that that's everything that's there. And from what I've seen in studying the computational universe, my intuition is that the limits to mathematical knowledge are close at hand--and can successfully be avoided only by carefully limiting the scope of mathematics.
In mathematics there has been a great emphasis on finding broad methods that in effect define whole swaths of computational reducibility.
But the point is that that computational reducibility is in many ways the exception, not the rule.
So instead, one must investigate mathematics by studying--in more specific terms--what particular systems do.
Sometimes it is argued that one can see the generality of mathematics by the way in which it successfully captures what is needed in natural science.
But the only reason for this, I believe, is that natural science has been limited too--in effect to just those kinds of phenomena that can successfully be captured by traditional mathematics!
Sometimes it is also said that, yes, there are many other questions that mathematics could study, but those questions would "not be interesting".
But really, what this is saying is just that those questions would not fit into the existing cultural framework of mathematics.
And indeed this is precisely why--to use the title of my book--one needs a new kind of science to provide the framework.
And to see how the questions relate to questions of undeniable practical interest in natural science and technology.
But OK, one can argue about what might or might not count as mathematics.
But in physics, it seems a bit more clear-cut.
Physics should be about how our universe works.
So the obvious question is: do we have a fundamental theory?
Do we have a theory that tells us exactly how our universe works?
Well, physics has progressed a long way. But we still don't have a fundamental theory.
Will we ever have one?
I think we will. And perhaps even soon.
For the last little while, it hasn't looked promising. In the nineteenth century, it looked like everything was getting wrapped up, just with mechanics, electromagnetism and gravity.
Then there were little cracks. They ended up showing us quantum mechanics.
Then quantum field theory. And so on.
In fact, at every stage when it looked like everything was wrapped up, there'd be some little problem that ended up not being so little, and inevitably making our theory of physics more complicated.
And that's made people tend to think that there just can't be a simple fundamental theory.
That somehow physics is a bottomless pit.
Well, again, from studying the computational universe, my intuition has ended up being rather different.
Because I've seen so many cases where simple rules end up generating immensely rich and complex behavior.
And that's made me think it's not nearly so implausible that our whole universe could come from a simple rule.
It's a big question, though, just how simple the rule might be.
Is it like one or two lines of Mathematica code? Or a hundred? Or a thousand?
We've got some reason to believe that it's not incredibly complicated--because in a sense then there wouldn't be any order in the universe: every particle would get to use a different part of the rule and do different things.
But is it simple enough that, say, we could search for it?
I don't know. And I haven't figured out any fundamental basis for knowing.
But it's certainly not obvious that our universe isn't quite easy to find in the computational universe of possible universes.
There are lots of technical issues. If there's a simple rule for the universe, it--in a sense--can't have anything familiar already built in.
There just "isn't room" in the rule to, say, have a parameter for the number of dimensions of space, or the mass of the electron.
Everything has to emerge. And that means the rule has to be about very abstract things. In a sense below space, below time, and so on.
But I've got--I think--some decent ideas about ways to represent those various abstract possible rules for universes.
And I've been able to do a little bit of "universe hunting".
But, well, one quickly runs into a fundamental issue.
Given a candidate universe, it's often very obvious what it's like. Perhaps it has no notion of time. Or some trivial exponential structure for all of space.
Stuff that makes it easy to reject as being not our universe.
But then--quite quickly--one runs into candidate universes that do very complicated things.
And where it's really hard to tell if they're our universe or not.
As a practical matter, what one has to do is in a sense to recapitulate the whole history of physics.
To take a universe, and by doing experiments and theory, work out the effective physical laws that govern it.
But one has to do this automatically, and quickly.
And there's a fundamental problem: computational irreducibility.
It's possible that in my inventory of candidate universes is our very own universe. But we haven't been able to tell.
Because going from that underlying rule to the final behavior requires an irreducible amount of computational work.
The only hope is that there are enough pieces of computational reducibility to be able to tell whether what we have actually is our universe.
It's a peculiar situation: we could in a sense already have ultimate knowledge about our universe, yet not know it.
One thing that often comes up in physics is the idea that somehow eventually one can't ever know anything with definiteness: there always have to be probabilities involved.
Well, usually when one introduces probabilities into a model, it's just a way to represent the fact that there's something missing in the model--something one doesn't know about, and is just going to assume is "random".
In quantum theory, probabilities get elevated to something more fundamental, and we're supposed to believe that there can never be definite predictions for what will happen.
Somehow that fits with some people's beliefs. But I don't think it scientifically has to be true.
There are all kinds of technical things--like Bell's inequality violations--that have convinced people that this probabilistic idea is real.
But actually there are technical loopholes--that I increasingly think are what's actually going on.
And in fact, I think it's likely that there really is just a single, definite, rule for our universe.
That in a sense deterministically specifies how everything in our universe happens.
It looks probabilistic because there is a lot of complicated stuff going on that we're not seeing--notably in the very structure and connectivity of space and time.
But really it's all completely deterministic.
So that in some theoretical sense we could have ultimate knowledge of what happens in the universe.
But there's a serious problem: computational irreducibility.
Even though we might know an underlying deterministic rule, we'd have to go through as much computational work as the universe to find out its consequences.
So if we're restricted--as we inevitably are--to doing that computational work within the universe, then we can't expect to "outrun" the universe, and derive knowledge any faster than just by watching what the universe actually does.
Of course, there are exceptions--patches of computational reducibility. And it's in those patches that essentially all of our current physics lies.
But how far can we expect the computational reducibility to go?
Could we for example answer questions like: "is warp drive possible?"
Some of them, probably yes.
But some of them, I expect, will be undecidable.
They'll end up--at least in their idealized form--boiling down to asking whether there exists some configuration of masses with such-and-such a property; and that will turn out to be an undecidable question.
Normally when we do natural science, we have to be content with making models that are approximations.
And where we have to argue about whether we've managed to capture all the features that are essential for some particular purpose, or not.
But when it comes to finding an ultimate model for the universe, we get to do more than that.
We get to find a precise, exact, representation of the universe, with no approximations.
So that, in a sense, we successfully reduce all of physics to mathematics--and thereby achieve Leibniz's objective of turning every question about the world into a question about mathematics.
And this would certainly be exciting.
But at some level it would be a hollow victory: for even knowing the ultimate rule, we are still confronted with computational irreducibility.
So even though in some sense we would have achieved ultimate knowledge, our ability to use it would be fundamentally limited.
Before one knew about computational irreducibility, one might have imagined that knowing the ultimate laws of the universe would somehow immediately give one deterministic knowledge of everything--not only in natural science, but also in human affairs.
That somehow knowing the laws of the universe would tell us how humans would act--and give us a way to compute and predict human behavior.
Of course, to many people this always seemed implausible--because we feel that we have some form of free will.
And now, with computational irreducibility, we can see how this can still be consistent with deterministic underlying laws.
That even if we know these laws, there's still an irreducible distance--an irreducible amount of computation--that separates our actual behavior from them.
At various times in the history of exact science, people have thought there might be some complete predictive theory of human behavior.
And what we can now see is that in a sense there's a fundamental reason why there can't be.
So the result is that at some level to know what will happen, we just have to watch and see history unfold.
Of course, as a practical matter, what we can "watch" is becoming more and more extensive by the year.
It used to be that very little of history was recorded. A whole civilization might leave only a few megabytes, if that, behind.
But digital electronics has changed all of that. And now we can sense, probe and record immense details of many things.
Whether it's detailed images of the Earth's surface, or the content of some network of human communications, or the electrical impulses inside a brain. We can store and retrieve them all.
Increasingly, we'll even be able to go back and reproduce the past.
A few trace molecules in some archaeological site, extrapolated DNA for distant ancestors and so on.
I expect we'll be able to read the past of almost any solid surface. Every time we touch something, we disturb a few atoms.
If we repeat it enough, we'll visibly wear the solid down. But one day, we'll be able to detect just that first touch by studying the whole pattern of atoms on the surface.
There's a lot one could imagine knowing about the world.
And I think it's going to become increasingly possible to find it out, once one asks for it.
Yet just how the sensors and the systems they are able to sense will relate is an interesting issue.
One of the consequences of my Principle of Computational Equivalence is that sophisticated computation can happen in a tremendous range of systems--not just brains and computers, but also all sorts of everyday systems in nature.
And no doubt our brains--and current computers--are not especially efficient vehicles for achieving computation.
And as our technology gets better, we'll be able to do computation much better in other media. Making computation happen, for example, at the level of individual molecules in materials.
So it'll be a peculiar picture: the computations we want to do happening down at an atomic scale.
With electrons whizzing around--pretty much just like they do anyway in any material.
But somehow in a pattern that is meaningful with respect to the computations we want to do.
Now perhaps a lot of the time we may want to do "pure computation"--in a sense just purely "think".
But sometimes we'll want to interact with the world--find out knowledge from the world.
And this is where our sensors come in.
But if they too are operating at an atomic scale, it'll be just as if some clump of atoms somewhere in a material is affecting some clump of atoms somewhere else--again pretty much just like they would anyway.
It's in a sense a disappointing picture. At the end of all of our technology development, we're operating just like the rest of the universe. And from the outside, there's nothing obvious we've achieved.
You'd have to know the history and the context to know that those electrons whizzing around, and those atoms moving, were the result of the whole rich history of human civilization, and its great technological achievements.
It's a peculiar situation. But in a sense I think it reflects a core issue about ultimate knowledge.
Right now, the web contains a few billion pages of knowledge that humans have collected with considerable effort.
And one might have thought that it'd be difficult to generate more knowledge.
But it isn't.
In a sense that's what Leibniz found exciting about mathematics.
It's possible to use it systematically to generate new knowledge.
Working out new formulas, or new results, in an essentially mechanical way.
But now we can take that idea much further.
We have the whole computational universe to explore--with all possible rules. Including, for example, I believe, the rules for our physical universe.
And out in the computational universe, it's easy to generate new knowledge. Just sampling the richness of what even very simple programs can do.
In fact, given the idea of computation universality, and especially the Principle of Computational Equivalence, there is a sense in which one can imagine systematically generating all knowledge, subject only to the limitations of computational irreducibility.
But what would we do with all of this?
Why would we care to have knowledge about all those different programs out there in the computational universe?
Well, in the past we might have said the same thing about different locations on the Earth. Or different materials. Or different chemicals.
But of course, what has happened in human history is that we have systematically found ways to harness these things for our various purposes.
And so, for example, over the course of time we have found ways to use a tremendous diversity, say, of possible materials that we can "mine" from the physical world.
To find uses for magnetite, or amber, or liquid crystals, or rare earths, or radioactive materials, or whatever.
Well, so it will be with the computational universe.
It's just starting now. Within Mathematica, for example, many algorithms we use were "mined" from the computational universe. Found by searching a large space of possible programs, and picking ones that happen to be useful for our particular purposes.
In a sense defining pieces of knowledge from the sea of possibilities that end up being relevant to us as humans.
It will be interesting to watch the development of technology--as well as art and civilization in general--and to see how it explores the computational universe of possible programs.
I'm sure it'll be not unlike the case of physical materials.
There'll be techniques for mining, refining, combining. There'll be "gold rushes" as particular rich veins of programs are found.
And gradually the domain of what's considered relevant for human purposes will expand to encompass more and more of the computational universe.
But, OK, so there is all sorts of possible knowledge out there in the computational universe.
And gradually our civilization will make use of it.
But what about particular knowledge that we would like to have, today?
What about Leibniz's goal of being able to answer all human questions by somehow systematizing knowledge?
Our best way of summarizing and communicating knowledge tends to be through language.
And when mathematics became formalized, it did so essentially by emulating the symbolic structure of traditional human natural language.
And so it's interesting to see what's happened in the systematization of mathematics.
In the early 1900s, it seemed like the key thing one wanted to do was to emulate the process of mathematical proof.
That one wanted in effect to find nuggets of truth in mathematics, represented by proofs.
But actually, this really turned out not to be the point.
Instead, what was really important about the systematization of mathematics was that it let one specify calculations.
And that it let one systematically "do" mathematics automatically, by computer, as we do in Mathematica.
Well, I think the same kind of thing is going to be true for ordinary language.
For centuries, people have had the idea of somehow formalizing language: nowadays, of making something like a computer language that can talk about everyday issues.
But the question is: what is supposed to be its purpose?
Is its purpose--like the formalizations of mathematical proof--to represent true facts in the world?
If so, then to derive useful things one has to have some kind of inferencing mechanism--something that lets one go from some facts, or some theorems, to others.
And as for proof-based mathematics, there is certainly something to be done here.
But I think the much more important direction is the analog of calculation-based mathematics.
Somehow to take a formalization of everyday discourse, and calculate with it.
What could this mean?
What is the analog of taking a mathematical expression like 2+2 and evaluating it?
In a sense it is to take a statement, and work out statements that are somehow the result of it.
Out in the computational universe, there are lots of systems and processes.
And while computational irreducibility may force us to use explicit simulation to work out their results, we have a definite procedure for doing what we can do.
The issue, however, is to connect this "vast ocean of truth" with actual everyday questions.
The problem is not so much how to answer questions, as how to ask them.
As our ability to set up more and more elaborate networks of sensors increases, there will be a new approach.
We will be able to take "images" of the world, and directly map them onto systems and processes in the computational universe.
And then find out their results not by watching the systems in nature, but by abstractly studying their analogs in the computational universe.
Perhaps one day the analog of human discourse will operate more at the level of such "images".
But for now traditional language is our primary means of communicating ideas and questions.
It is in a sense the "handle" that we must use to specify aspects of the computational universe that we want to talk about.
Of course, language evolves as different things become common to talk about.
In the past, we would have had no words for talking about nested patterns. But now we just describe them as "nested" or "fractal".
But if we just take language as it is, it defines a tiny slice of the computational universe.
It is in many ways an atypical slice. For it is highly weighted towards computational reducibility.
For we, as humans, tend to concentrate on things that make sense to us, and that we can readily summarize and predict.
So at least for now only a small part of our language tends to be devoted to things we consider "random", "complex", or otherwise hard for us to make sense of.
But if we restrict ourselves to those things that we can describe with ordinary language, how far can we go in our knowledge of them?
In most directions, computational irreducibility is not far away--providing in a sense a fundamental barrier to our knowledge.
In general, everyday language is a very imprecise way to specify questions or ideas, being full of ambiguities and incomplete descriptions.
But there is, I suspect, a curious phenomenon that may be of great practical importance.
If one chooses to restrict oneself to computationally reducible issues, then this provides a constraint that makes it much easier to find a precise interpretation of language.
In other words, a question asked in ordinary language may be hard to interpret in general.
But if one chooses to interpret it only in terms of what can be computed--what can be calculated--from it, it becomes possible to make a precise interpretation.
One is doing what I believe most of traditional science has done: choosing to look only at those parts of the world on which particular methods can make progress.
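One way to picture this restriction is as interpretation against templates. The toy sketch below (entirely invented for illustration; the templates are hypothetical) accepts a free-form question only if it matches one of a small set of computable readings, and computes the answer in that case; anything else is rejected rather than guessed at.

```python
# A toy sketch: interpret free-form questions only against a small
# set of computable templates. A match yields a precise, calculable
# meaning; a non-match is rejected rather than guessed at.
import re

# Hypothetical templates, each pairing a linguistic pattern with a
# computation to perform on its captured pieces.
TEMPLATES = [
    (re.compile(r"what is (\d+) plus (\d+)\??", re.I),
     lambda a, b: int(a) + int(b)),
    (re.compile(r"what is (\d+) times (\d+)\??", re.I),
     lambda a, b: int(a) * int(b)),
]

def interpret(question):
    """Return a computed answer, or None if no computable reading."""
    for pattern, compute in TEMPLATES:
        match = pattern.fullmatch(question.strip())
        if match:
            return compute(*match.groups())
    return None  # no computationally reducible interpretation found
```

The ambiguity of ordinary language is tamed here not by understanding everything, but by only admitting interpretations on which a computation can actually be carried out.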
But I believe we are fairly close to being able to build technology that will let us do some version of what Leibniz hoped for.
To take issues in human discourse, and when they are computable, compute them.
The web--and especially web search--has defined an important transition.
It used to be that static human knowledge--while in principle accessible through libraries and the like--was sufficiently difficult to access that a typical person usually sampled it only very sparingly.
But now it has become straightforward to find "known facts".
Using the "handle" of language, we just have to search the web for where those facts are described.
But what about facts that are not yet "known"? How can we access those?
We need to create the facts, by actual computation.
Some will be fraught with computational irreducibility, and in some fundamental sense be inaccessible.
But there will be others that we can access, at least if we can find a realistic way for us humans to refer to them.
Today there is a certain amount of computation about the world that we routinely do.
Most of it is somehow done automatically inside devices that use their own sensors to find out about the world--automatic cameras, GPS devices, and so on.
And there is a small subset of people--physicists, engineers, and the like--who fairly routinely actually do computations about the world.
But despite the celebrated history of exact science, few people have direct access to the computations that it implies can be done.
I think this will change. Part of the change is already made possible with Mathematica as it is today. But another part is coming, with new technology that we are working to build.
And the consequence of it will be something that I believe will be of quite fundamental importance.
That we will finally be able routinely to access what can be computed about our everyday world.
In a sense to have ultimate access to the knowledge which it is possible to get.
Extrapolating from Leibniz, we might have hoped that we would be able to get ultimate knowledge about everything.
Somehow with our sophistication to be able to work out everything about the world, and what can happen in it.
But we now know that this will never be possible. And indeed, from looking at the computational universe, it becomes clear that there is a lot in the world that we will never be able to "unravel" in this way.
In some ways it would have been disappointing if this had been so. For it would have meant that our world could somehow be simplified.
And that all the richness of what we actually see--and the actual processes that go on in nature--would be unnecessary.
And that with our ultimate knowledge we would be able to work out the true "results" of our universe, without going through everything that actually happens in the universe.
As it is, we know that this is not the case. But increasingly we can expect that whatever knowledge can in principle be obtained, we will actually be able to obtain.
To fully harness the concept of computation, and to integrate it into the future of our civilization.