Each step shifts every digit one position to the left, so that eventually even the most insignificant digit has a big effect.
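Here's a minimal sketch of that digit-shifting, assuming the binary shift map is what's being described; the particular digit sequences are just illustrative:

```python
# Sketch of the digit-shift map on binary digits: each step discards the
# leading digit and shifts the rest left, so digit k only starts to control
# the visible behavior after about k steps.

def shift_map_orbit(digits, steps):
    """Iterate the shift map: at step t, the remaining digit sequence."""
    return [digits[t:] for t in range(steps)]

# Two initial conditions that agree in their first 20 binary digits:
a = [0, 1] * 15                                   # 30 digits
b = a[:20] + [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]       # differs from digit 21 on

# The leading digits of the two orbits agree for 20 steps, then diverge:
orbit_a = [seq[0] for seq in shift_map_orbit(a, 25)]
orbit_b = [seq[0] for seq in shift_map_orbit(b, 25)]
first_difference = next(t for t in range(25) if orbit_a[t] != orbit_b[t])
```

So a difference buried 20 digits deep is invisible for 20 steps, and then dominates.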
Well, so why does this make randomness? It makes randomness only if the initial digit sequence is random. And with the usual view of real numbers, one imagines that a typical number picked as an initial condition would almost always have that kind of randomness.
But if one thinks in terms of information, this is really unsatisfactory. It's like saying that one gets randomness out only because one put randomness in--and that the randomness is just coming from outside the system we're looking at.
Well, is there any other possibility? Yes. Just look at rule 30. There's no randomness from the environment--it's a deterministic rule. And there's no randomness in the initial conditions--they're just a single black cell. But still, the actual evolution of the system intrinsically generates apparent randomness. [See A New Kind of Science, page 30.]
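A minimal sketch of that evolution, using the standard Boolean form of rule 30 (new cell = left XOR (center OR right)); the width padding is just a convenience so wraparound never reaches the center:

```python
# Rule 30 from a single black cell, tracking the center column.
# Deterministic rule, trivial initial condition -- yet the center
# column looks random.

def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def rule30_center_column(steps):
    """Evolve from a single 1; return the color of the center cell each step."""
    width = 2 * steps + 3        # wide enough that wraparound never matters
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

column = rule30_center_column(32)
# The column begins 1, 1, 0, 1, 1, 1, 0, 0, ... with no evident period.
```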
It's a bit tricky at first to analyse this. Not least because systems often have both sensitive dependence on initial conditions, and intrinsic randomness generation. Rule 30 has sensitive dependence; the slopes of these difference patterns give its Lyapunov exponents. [See A New Kind of Science, page 251.]
And iterated maps can show intrinsic randomness generation. [See A New Kind of Science, page 150.]
But that's not what dynamical systems theory picks up. Because it's talking about all possible trajectories. Not just trajectories that have "simple to make" initial conditions.
But if you really want to explain randomness, you need to get more precise.
Well, let's go back to fluid turbulence. [See A New Kind of Science, page 377.]
Where does its randomness come from? The Navier-Stokes equations haven't told us. It could even be that we need to add Brownian motion terms. We know there are some instabilities, but we don't really know their large-scale effects. And in numerical experiments, it's hard to distinguish numerical glitches from genuine randomness.
Well, a different approach to modelling helps with that. At the lowest level a fluid just consists of a bunch of molecules. And we know that the details aren't too important--we always get the same continuum behavior. So we can try just making a minimal model for the molecules. Put them on a discrete grid, with discrete velocities, and so on. [See A New Kind of Science, page 378.]
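Here's a minimal sketch in the spirit of such a model--an HPP-style lattice gas, with four discrete velocities per site and a deterministic collision rule. It's an illustrative toy, not the exact model in the book:

```python
# HPP-style lattice gas sketch: particles on a square grid with four unit
# velocities (E, W, N, S). Head-on pairs scatter at right angles; otherwise
# particles just stream. Everything is deterministic, and particle number
# is conserved.

E, W, N, S = range(4)

def step(grid):
    """grid[y][x] is a 4-element list of occupation bits, one per direction."""
    h, w = len(grid), len(grid[0])
    # Collision: an E+W pair alone becomes N+S, and vice versa.
    for row in grid:
        for cell in row:
            if cell[E] and cell[W] and not cell[N] and not cell[S]:
                cell[E] = cell[W] = 0; cell[N] = cell[S] = 1
            elif cell[N] and cell[S] and not cell[E] and not cell[W]:
                cell[N] = cell[S] = 0; cell[E] = cell[W] = 1
    # Streaming: move each particle one site in its direction (periodic grid).
    new = [[[0, 0, 0, 0] for _ in range(w)] for _ in range(h)]
    for y in range(h):
        for x in range(w):
            c = grid[y][x]
            new[y][(x + 1) % w][E] = c[E]
            new[y][(x - 1) % w][W] = c[W]
            new[(y - 1) % h][x][N] = c[N]
            new[(y + 1) % h][x][S] = c[S]
    return new

def particle_count(grid):
    return sum(sum(cell) for row in grid for cell in row)
```

The point of the design is that the microscopic details are absurdly simplified--yet, as the continuum argument suggests, that shouldn't matter at large scales.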
Well, in the appropriate limit, that gives Navier-Stokes, and one can actually do nice practical fluid dynamics. [See A New Kind of Science, page 380.]
But one can also investigate some basic questions. And one finds that one can indeed get large-scale apparent randomness, without any random input. Which means there's intrinsic randomness generation going on. [See A New Kind of Science, page 30.]
One can test that in physical experiments. Because if randomness comes from the environment, or from initial conditions, it'll always be different every time one runs an experiment. But with intrinsic randomness generation, sufficiently carefully controlled experiments will give repeatable randomness, which one can look for. You know, probabilistic models are very common. But I think a lot of them really aren't necessary. And that a lot of systems can just make their own randomness, without getting it from outside.
Like in an Ising model. Where one normally assumes that spins are kicked around by a heat bath. Well, there's a simple deterministic model that seems to have the same average behavior--and that's probably much closer to an actual spin system. [See A New Kind of Science, page 982.]
It seems like there are a lot of probabilistic models where there are deterministic systems that have the same behavior. Like here's a deterministic 2D cellular automaton on a square grid that almost--though not quite--generates enough randomness to grow a circular region, the way a genuinely random process like a random walk would. [See A New Kind of Science, page 178.]
By the way, I should say something about what I mean by randomness. In everyday speech, we usually call things random if we can't find a compressed description of them. Well, the ultimate is algorithmic randomness, where there just is no compressed description: there's no program shorter than a sequence that will reproduce it. It's an interesting definition. But it probably says that nothing in our universe can be considered random. Certainly not a rule 30 sequence. Because after all, it's generated by a very small program.
But the point is that we can't find that program. Statistical analysis, data compression, visual perception, cryptanalysis--none of them can crack rule 30. [See A New Kind of Science, page 561.]
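As one crude stand-in for such analysis, one can try a general-purpose compressor on rule 30's center column and compare it with an obviously regular sequence; zlib here is just a convenient example of a "simple program" looking for regularities:

```python
# Crude "analysis with simple programs": compress rule 30's center column
# with zlib and compare with a periodic sequence of the same length.
# The periodic one collapses; the rule 30 column barely compresses beyond
# the trivial 1-bit-per-cell encoding.

import zlib

def center_column(steps):
    width = 2 * steps + 3
    cells = [0] * width
    cells[width // 2] = 1
    out = []
    for _ in range(steps):
        out.append(cells[width // 2])
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]
    return bytes(out)

n = 1024
rule30_bits = center_column(n)
periodic_bits = bytes([0, 1] * (n // 2))

ratio_rule30 = len(zlib.compress(rule30_bits, 9)) / n
ratio_periodic = len(zlib.compress(periodic_bits, 9)) / n
# ratio_periodic comes out far smaller than ratio_rule30.
```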
So that's why its sequences seem random. And one can make that more precise by talking about analysis with all possible simple programs.
This turns out to be related to the Second Law of Thermodynamics: the law of entropy increase. It's easy to make a system like a cellular automaton microscopically reversible--so it's bijective, just like the laws of physics seem to be. [See A New Kind of Science, page 437.]
But the point is that intrinsic randomness generation in effect encrypts even simple initial conditions to the point where they seem typical of the ensemble--at least with respect to any reasonably simple computation. And from this one seems finally to get at least the outline of a real proof of the Second Law. As well as seeing potential exceptions. [See A New Kind of Science, page 440.]
OK, well that's a little about physics. What about biology? Where does all the complexity in biology really come from? [See A New Kind of Science, page 385.]
It's never been clear why natural selection should give it. And in fact, I think it's much more just a consequence of properties of simple programs. That if one picks a genetic program at random, it'll often show complex behavior.
Let me give you an example. Here's a mollusc shell. [See A New Kind of Science, page 423.]
Which we might have thought must be the result of some sophisticated optimization process. But actually its pattern is incredibly similar to what we get just from rule 30. [See A New Kind of Science, page 30.]
In the actual mollusc there's a line of cells that lay down pigment as the shell grows. And we can model that as a cellular automaton. [See A New Kind of Science, page 414.]
Well, let's say we just consider all possible rules of the simplest kind. Here are all the patterns they make. [See A New Kind of Science, page 424.]
And here's a remarkable fact. The classes of patterns we get seem to be just the same as in actual molluscs. [See A New Kind of Science, page 423.]
It's as if the mollusc species of the Earth just sample different possible programs, and then we see the results displayed on their shells.
You know, it usually seems hopeless to make a predictive theory of biology. Because everything seems to depend on historical accidents.
But if different species just uniformly sample simple programs, we get to make predictions just by knowing abstract properties of programs. Let me give you another example. Shapes of leaves. [See A New Kind of Science, page 403.]
They're pretty diverse. But it turns out that there's a simple program that seems to capture most of them. It's just based on successive repeated branchings. [See A New Kind of Science, page 400.]
The branchings fill in leaf material, and give shapes that look just like actual leaves--sometimes smooth, sometimes jagged, and so on. [See A New Kind of Science, page 402.]
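A minimal sketch of such a repeated-branching model, representing each branch as a complex number so that a fixed set of complex multipliers rotates and scales it; the particular multipliers below are just an illustrative choice, not the book's parameters:

```python
# Repeated-branching leaf model sketch: each branch tip with direction d
# spawns new branches with directions d * m, for a fixed set of complex
# multipliers m. The collected tip positions trace out a leaf-like shape.

def branch_points(multipliers, depth):
    """Return all branch-tip positions after `depth` rounds of branching."""
    tips = [(0 + 0j, 1 + 0j)]          # (position, direction) of each tip
    points = []
    for _ in range(depth):
        new_tips = []
        for pos, direction in tips:
            for m in multipliers:
                d = direction * m
                new_tips.append((pos + d, d))
        tips = new_tips
        points.extend(pos for pos, _ in tips)
    return points

# Three branches per step: left, shrinking straight-ahead, right.
multipliers = [0.5 + 0.4j, 0.6 + 0j, 0.5 - 0.4j]
points = branch_points(multipliers, 6)
```

Varying the multipliers sweeps through the space of shapes--which is exactly the parameter-space picture that comes next.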
Well, what happens with all possible such models? Here's a simple case where it's easy to lay out the possible shapes. [See A New Kind of Science, page 406.]
And one can summarize these in a parameter space set. [See A New Kind of Science, page 407.]
From whose properties one can deduce all sorts of things about leaves and their likely evolution.
It's actually a mathematically rather interesting set, with properties a bit like the Mandelbrot set. By the way, for people who know about such things: my set is formed by looking at whether each tree reaches a given point. If you look instead at connectedness, you get a different set that's much less interesting. Well, there are lots of applications in biology. Mixing traditional math--like differential geometry--with simple programs gives lots of results on how things grow. [See A New Kind of Science, page 418.]
And in molecular biology there's much to say about using primitives based on simple programs to describe biological processes.
But let me talk a little more about physics. And in particular fundamental physics.
Well, now that we've seen the things simple programs can do, we might ask the ultimate question: underneath all of physics could there just be a simple program? A little program that if run for long enough would reproduce in precise detail the whole history of our universe. In a sense the ultimate reduction of physics to an abstract system.
Well, so what might the program for the universe be like? One thing that's kind of inevitable is that very few familiar features of the universe will immediately be visible in it. There just isn't room. I mean, in a small program there's no way to fit in separate identifiable pieces that represent electrons, or gravity, or even space and time.
In fact, if the program's going to be really small, it sort of has to have the very least possible structure built in. And for example I think a cellular automaton already has far too much structure. It's got a whole rigid array of cells laid out in space. And it also separates the notion of space from the notion of states of cells. And I don't think one needs even that.
In physics, space is normally a kind of background--on top of which matter and everything else exists. But in an ultimate model I think space is the only thing one needs.
So what might space then be? I suspect it's ultimately just a bunch of points with a certain set of connections. There are various ways to formulate it. But one is just to say it's a giant trivalent network--a cubic graph. [See A New Kind of Science, page 1039.]
In which one just specifies the combinatorics of what point is connected to what.
So how does space as we know it emerge from that? The idea is that with enough nodes, it approximates a continuum. Like with molecules in a fluid. But more subtle.
Here are some networks that look like one-, two- and three-dimensional space. [See A New Kind of Science, page 477.]
But remember, all we've actually specified is how the nodes are connected. So how do we ultimately tell what dimension a network corresponds to? Just start from a node. Then look at how many distinct nodes you reach by following r connections. [See A New Kind of Science, page 477.]
That's essentially the volume of a ball in the network. And if the network is going to correspond to d-dimensional space, that volume must grow like r^d.
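Here's a minimal sketch of that measurement, using breadth-first search over a network given purely as an adjacency function; the 2D grid is just a test case where the exact ball volume is known:

```python
# Measuring dimension from growth rates of balls in a network.
# The network is specified only combinatorially: a function from a node
# to its neighbors.

from collections import deque

def ball_sizes(neighbors, start, rmax):
    """Number of nodes within graph distance r of `start`, for r = 0..rmax."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if dist[node] == rmax:
            continue                      # don't expand past the largest ball
        for nb in neighbors(node):
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    sizes = [0] * (rmax + 1)
    for d in dist.values():
        sizes[d] += 1
    for r in range(1, rmax + 1):          # accumulate shells into ball volumes
        sizes[r] += sizes[r - 1]
    return sizes

# A 2D square grid, given purely by its connections:
def grid2d(node):
    x, y = node
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

vols = ball_sizes(grid2d, (0, 0), 20)
# For this grid the ball volume is exactly 2r^2 + 2r + 1 -- it grows like r^2,
# so the network "is" two-dimensional.
```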
Here I've laid out nodes so that their x position is determined by their minimum distance from a starting point. [See A New Kind of Science, page 479.]
And you can see the different growth rates for different dimensions.
What about curved space? In a flat hexagonal lattice, the number of nodes within distance r grows exactly like r^2. [See A New Kind of Science, page 532.]
But with curvature there's a correction. That turns out to be proportional to the Ricci scalar curvature for the limiting manifold. [See A New Kind of Science, page 1050.]
Well, it's pretty common to go from manifolds to more discrete things. Like in triangulating a manifold. [See A New Kind of Science, page 533.]
But there one's still giving embedding information: saying how to lay out the points in an ambient space. In combinatorial topology one gets rid of the embedding. But one still has data on which nodes bound a face, and so on.
But in my combinatorial networks one doesn't have even that. So it's sort of remarkable how much limiting differential geometry one can get. Ricci scalars from growth rates of balls. Ricci tensors from growth rates of cylinders generated by geodesics. But, significantly, one can't directly get Riemann tensors.

Well, OK, in our universe there's not only space, but also time. And in ordinary physics, one thinks of time as just another coordinate. But in programs it tends to be very different. Like in a cellular automaton. Where one moves in space by going from one cell to another. But one moves in time by actually applying the cellular automaton rule.
Well, I suspect that's closer to how things work in physics. But it won't be exactly like a cellular automaton. Because that would need a global clock to synchronize everything.
OK, so how might it actually work? Well, here's something that at first sounds kind of crazy. How about if the universe is like a Turing machine--or what I call a mobile automaton--where only one cell gets updated at each step? [See A New Kind of Science, page 487.]
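A minimal sketch of a mobile automaton, assuming the simplest setup: one active cell, whose neighborhood determines both its new color and which way the active position moves. The particular rule table below is a hypothetical example, not one from the book:

```python
# Mobile automaton sketch: a row of cells plus one active position.
# At each step ONLY the active cell is updated -- there's no global clock
# synchronizing the whole array.

def mobile_automaton(rule, width, steps):
    """rule maps (left, active, right) -> (new_active_color, move)."""
    cells = [0] * width
    pos = width // 2
    history = []
    for _ in range(steps):
        history.append((list(cells), pos))
        left = cells[(pos - 1) % width]
        right = cells[(pos + 1) % width]
        new_color, move = rule[(left, cells[pos], right)]
        cells[pos] = new_color
        pos = (pos + move) % width
    return history

# A hypothetical rule table: for each neighborhood, a new color and a move.
rule = {
    (0, 0, 0): (1, +1),
    (0, 0, 1): (0, -1),
    (0, 1, 0): (1, -1),
    (0, 1, 1): (1, +1),
    (1, 0, 0): (1, -1),
    (1, 0, 1): (0, +1),
    (1, 1, 0): (0, +1),
    (1, 1, 1): (0, -1),
}
history = mobile_automaton(rule, 21, 40)
```

Even though just one cell updates per step, the evolution is completely deterministic and repeatable.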