Stephen Wolfram Q&A

Some collected questions and answers by Stephen Wolfram

Questions may be edited for brevity; see links for full questions.

July 20, 2016

From: Reddit AMA

If Turing had died before publishing his seminal Turing machine paper in the 1930s, how much would this have delayed the construction of digital computers?

Interesting question. The idea of universal computation actually arose at about the same time in three places: Gödel’s “general recursive functions”, Turing’s “Turing machines”, and Church’s “lambda calculus”. It turned out that all these formulations are equivalent, and that was actually known pretty quickly. But Turing’s formulation is much easier to understand, and really was the only one that made the point of “universal computers” at all clear.

Having said that, Turing’s work was definitely not understood as being anything earth-shattering when it was done. After all, there’d been other “universal” representations suggested before, notably primitive recursive functions, which had turned out actually not to be as universal as people thought. Nobody really knew what kinds of things one might be trying to represent (mathematical functions, physics, minds, …?). And it actually wasn’t until the 1980s or so (perhaps even with some impetus from my own work) that the real “universality” of universal Turing machines began to be widely accepted. (Of course, we still don’t ultimately know if the universe can be represented by a Turing machine computation; it could be [though I don’t think so] that it has extra things going on.)

But back to digital computers. There’d been a long tradition of calculators, first mechanical, then electrical. And people started thinking in a rather engineering way about how to set calculators up better. And that’s really where the first computers (Atanasoff, ENIAC, etc.) came from.

But there was another thread. Back in 1943 Warren McCulloch and Walter Pitts had written a paper about their model of neural networks. And to “prove” that their neural networks were powerful enough to do everything brains can do, they appealed to Turing machines, showing that their networks could reproduce everything a universal Turing machine can do. John von Neumann was interested in modeling how the brain works, and he was enamored of McCulloch and Pitts’ work. And then he connected Turing machines with the digital computers that were starting to exist, and for which von Neumann was a consultant. (von Neumann hadn’t taken Turing machines seriously before; he actually wrote a recommendation letter for Turing back in the 1930s, ignoring Turing’s Turing machine paper, and saying that Turing had done good work on the Central Limit Theorem in statistics…)

It was quite relevant for the early development of computer science as a field that von Neumann connected Turing machines to practical computers. But I don’t think digital computers required it. I might mention that Turing himself in the 1950s used a computer in Manchester, writing what today looks like horribly hacky code, and I don’t think he really thought about the connection of Turing machines to what he was doing. He was by then interested in biological growth processes, and—somewhat shockingly—he didn’t use anything like Turing machines (or cellular automata) to model them, even though those actually turn out to be good models. Instead, he went back to plain differential equations…
