This poster has illustrated a simple cellular automaton rule which exhibits continuum average behaviour in the large scale limit. It is possible to construct similar rules in two dimensions, and rules which yield various other kinds of continuum behaviour (see several CA'86 posters).
In each case, one may compare the cellular automaton rules with traditional approaches to emulating these continuum systems on digital computers. In the conventional approach, one starts from partial differential equations, then makes discrete numerical approximations to them. These approximations are defined on a discrete lattice of points. But unlike in cellular automata, each of these lattice points carries a continuous variable which represents the precise value of a continuum quantity, such as particle density, at that point. In actual computer implementations, the continuous variable is represented by a floating-point number, say 64 bits in length. The number is updated in a sequence of time steps, according to a discrete rule. The rule in general involves arithmetic operations, which cannot be carried out precisely on finite-precision numbers. As a result, low-order bits of the numbers are truncated. Numerical analysis has studied in detail the propagation of such round-off errors, and has suggested schemes which minimize their effects.
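As a concrete illustration of this conventional approach, here is a minimal sketch for the one-dimensional diffusion equation, using the standard forward-time centred-space scheme on a periodic lattice (the function name and parameters are illustrative choices, not taken from the poster):

```python
import numpy as np

def ftcs_step(u, alpha):
    # One forward-time centred-space step for u_t = D u_xx on a periodic
    # lattice; alpha = D*dt/dx**2 must satisfy alpha <= 1/2 for stability.
    # Each lattice point carries a 64-bit floating-point value, and each
    # arithmetic operation here rounds its result, so the quantity that
    # the exact equation conserves (the integral of u) is conserved only
    # up to accumulated round-off.
    return u + alpha * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1))

u = np.zeros(64)
u[32] = 1.0                      # initial point mass
for _ in range(1000):
    u = ftcs_step(u, 0.4)        # mass spreads out toward uniformity
```

After many steps the point mass has diffused to a nearly uniform profile; the total mass stays close to 1 but is no longer exact, since the low-order bits of each update are rounded away.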
In a cellular automaton, the values of variables such as particle density are stored in a distributed fashion. It is necessary to average over a region of the system to find the values of such macroscopic variables. Each bit which contributes to this average is however treated according to a precise deterministic rule, and each bit is equally important. Nevertheless, the need for averaging introduces fluctuations in the values of measured quantities. For some systems, such as turbulent ones, only statistical averages are expected to be reproducible. But in others, such as the diffusion equation, the need for averaging represents a limit on accuracy.
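The distributed storage and the averaging it requires can be sketched as follows. Note one loud caveat: for simplicity this sketch moves each particle with an explicitly random hop, standing in for the precise deterministic microdynamics the poster describes; the names are illustrative:

```python
import random

def lattice_gas_step(cells):
    # cells[i] holds the (integer) number of particles at site i.
    # Each particle hops one site left or right on a periodic lattice --
    # a random-walk stand-in for a deterministic diffusive rule.
    n = len(cells)
    new = [0] * n
    for i, occ in enumerate(cells):
        for _ in range(occ):
            new[(i + random.choice((-1, 1))) % n] += 1
    return new

def coarse_density(cells, block):
    # The macroscopic density is not stored anywhere explicitly: it is
    # recovered only by averaging occupancies over blocks of sites.
    return [sum(cells[i:i + block]) / block
            for i in range(0, len(cells), block)]
```

The block averages fluctuate about the continuum density, with fluctuations that shrink as the averaging region grows; the total particle count, by contrast, is conserved exactly at every step.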
One can imagine a hybrid of cellular automaton and numerical analysis schemes. Consider the case of the diffusion equation. On a lattice of sites, one stores values which consist of sequences of bits. The high-order bits are encoded digitally, so that k bits can represent 2^k possible numbers of particles. The low-order bits are however encoded in unary, and correspond to individual particles. The update scheme can conserve the total number of particles.
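A toy version of such a hybrid scheme might look as follows. This is an illustrative sketch, not the poster's actual rule: the bulk of each site's particle count is moved by exact integer (digital) arithmetic, and the division remainder, exactly the low-order bits a floating-point scheme would truncate, is carried by individual particles that hop to randomly chosen neighbours:

```python
import random

def hybrid_step(counts):
    # counts[i] is the integer number of particles at site i (periodic).
    # Digital part: each site deterministically sends a quarter of its
    # particles to each neighbour and keeps half, by exact integer
    # division.  Unary part: the leftover particles from that division --
    # the would-be round-off error -- hop individually to a random
    # neighbour.  Every particle is accounted for, so the total number
    # of particles is conserved exactly, not just approximately.
    n = len(counts)
    new = [0] * n
    for i, c in enumerate(counts):
        quarter, half = c // 4, c // 2
        new[(i - 1) % n] += quarter
        new[(i + 1) % n] += quarter
        new[i] += half
        for _ in range(c - 2 * quarter - half):   # round-off particles
            new[(i + random.choice((-1, 1))) % n] += 1
    return new
```

The random hops here play the role of a randomizing round-off dynamics; unlike truncation in floating-point arithmetic, no particle is ever created or destroyed.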
Viewed as a numerical analysis procedure, the dynamics of the low-order bits represents a dynamics of round-off errors. Instead of systematically truncating the numbers, their low-order bits are modified according to dynamics which yields effectively random behaviour. The result is similar to random round-off, but includes a precise particle conservation law.
By adjusting the number of unary and digital bits, one can tune the tradeoff between the cellular automaton and numerical analysis approaches.
One of the major issues in numerical analysis is convergence. This is very difficult to prove for all but the simplest equations and the simplest schemes. But in cellular automata, the analogue of convergence is the process of coming to "thermodynamic" equilibrium. Thus the problem of "convergence" is related to a fundamental problem of physics.