Reddit Ask-Me-Anything Event (AMA; 7/20/16)

Responses by Stephen Wolfram to an “Ask Me Anything” on Reddit.

Q: Is John Bell in your book? How do you explain Bell Inequality violations or even simple quantum superpositions with a Cellular Automaton model of reality? —escherbach

SW: John Bell doesn’t happen to be in the book… and unfortunately I don’t think I ever met him, though I was at CERN a few times while he was there, and I think he was in the audience for some talks I gave.

SW: I’m not going to go into too much physics right now… but suffice it to say that my ideas about fundamental physics center on systems based on networks, NOT cellular automata. My systems have the interesting feature that they operate below space and time as we know them, and don’t exhibit the kind of locality—at least at a microscopic level—that’s assumed in deriving Bell’s inequality. And they actually seem to show some nice quantum-like phenomena—though it’s quite tough to know if you “have quantum mechanics” until you’ve reproduced the whole mathematical structure, which we haven’t yet. See https://www.wolframscience.com/nksonline/section-9.16 for a little more…

But let me talk about some history. Back in the late 1970s and early 1980s I talked a lot to Richard Feynman about the foundations of quantum mechanics. He’d been interested in Bell’s Inequalities, and in trying to understand what the “essence of quantum mechanics” really was. He had a derivation of Bell’s Inequalities that was all about coloring a sphere with black and white, starting somewhere where there was black, and then seeing what color one found at an angle theta away. According to quantum mechanics, the probability of still seeing black should go down like sin(theta)^2. And the question was whether there’s a static coloring of the sphere that can replicate this. It seemed like there might be some elaborate fractal coloring that could do this (think Banach–Tarski paradox or something). Feynman wasn’t a big believer in “fancy math”, but he finally got convinced by people who did fancy math that the optimal coloring arrangement was just half black and half white, which I think gives a probability that decreases like theta^2—so not as fast as quantum mechanics.

Feynman was also very taken with the fact that quantum mechanics involves e^(i H t), and thermodynamics involves e^(-beta E). The similarity of those forms is critical to lots of practical quantum computations (like lattice gauge theory), but Feynman thought that this similarity might be telling one something fundamental about what quantum mechanics really is. We talked about it a lot, but nothing really got figured out.
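(For what it’s worth, the formal connection between those two expressions is the standard imaginary-time substitution. In units where hbar = 1,

    e^{i H t}\Big|_{t \to i\beta} = e^{-\beta H},

so inverse temperature plays the role of an imaginary time, which is the substitution that lattice computations like the ones just mentioned rely on.)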

I could also talk about the project Feynman and I did on quantum computers back in 1982 or so (before anyone had really heard of “quantum computers”). But let me go on to another question now…

Q: In writing about all of these intelligent people, did you notice any environmental factors which may have contributed to their success? —bobbylox

SW: Interesting question. Richard Feynman always used to tell me that he thought “peace of mind” was a critical prerequisite to doing creative work. But certainly not everyone in the book had that when they were doing their most important work. And actually I think if there’s one theme I noticed, it’s that external stimuli and external constraints often seem to play a crucial role. People don’t seem to come up with great ideas by just sitting around and thinking for a long time. What seems much more often to happen is that there’s some external question or external issue that comes up—or the people are just trying to do some job they have—and thinking deeply about that is what leads them to some great idea.

Q: Who in your book do you relate to most? or who do you think is most like you and why? —bambi0719

SW: Difficult question. In terms of the things I’ve been interested in, Alan Turing and Gottfried Leibniz are probably the closest. But in terms of personality, they were very different from each other, and, I think, from me. I’m basically a “long projects” person: I work on projects for decades (Mathematica, Wolfram Language, Wolfram|Alpha, A New Kind of Science, …). Alan Turing, for example, was more of a “short projects” person: he’d work on one topic for a couple of years, then move on. Of course, one of his “topics” was inventing the idea of universal computation, which later on turned out to be a very big deal.

Leibniz was more a “long projects” person. He spent decades trying to perfect his mechanical calculator, and he pursued his idea of a universal language for representing everything (something I’m also very interested in) for much of his life. It’s a little hard to connect with his life: he spent a lot of it wearing an oversize wig, and hanging out “at court” trying to persuade dukes etc. to support his projects. I think I would fail “at court” very quickly: I’m much more of an entrepreneur who just makes things happen myself; I’d be horrible at lobbying to get a government or whatever to do things.

Q: Interesting book; looking forward to reading it! How much commonality is there between legendary mathematicians/scientists from ages ago and more modern scientists? Culture has changed a hell of a lot between 1816 and 2016; would those legendary scientists from centuries past be significantly changed if they were brought up in this more modern environment? —d8uv

SW: Yes, things have certainly changed a lot from 1816 to 2016. (Note that the book does include quite a few recent people too.)

One important practical feature is that people are on average living longer. Ada Lovelace and Ramanujan, for example, would almost certainly have lived decades longer with modern healthcare. So would George Boole.

About the actual doing of math and science: well, we have computers now, and they make a huge difference. But another thing is that math and science as fields are much bigger. They’ve always had a certain amount of institutional structure, and many of the people I wrote about had to overcome inertia from that structure to get their ideas out. But I think in well-established areas, that’s a bigger effect today. I’ve been very fortunate to be a “private scientist” outside of institutional constraints most of the time. But most scientists aren’t in that situation. And it tends to be awfully difficult to get genuinely new ideas funded etc., particularly in the better-established areas.

I have to say that the vast majority of major advances tend to come when fields are fairly new, and when there aren’t many people in them. And there are always new fields coming up. But “standard fields” that everyone’s heard of tend to be quite big by the time everyone’s heard of them.

In the big well-established fields today, lots of people work in large collaborations. But the fact is that big ideas tend to be created by individual people (or sometimes a couple of people)—not big collaborations. Those ideas often make use of lots of work done by collaborations, etc. But when there’s something new and creative being done, it’s usually done by one (or perhaps two) people. And these days I have the sense that people don’t really understand that any more: they assume that unless it’s a big collective thing, it can’t be right. It was a different story when math and science were smaller.

Q: Are there any ideas from this group of people that you initially disagreed with that you have now come to agree with? If there are many, which was the greatest change? —nswanberg

SW: Well, when I was first developing Mathematica, it was called Omega (yes, I know, now we have Wolfram|Alpha). It went through a couple of other names as well. Steve Jobs was very insistent that it should be called “Mathematica”. At first I disagreed, but later came around—and that’s of course what it’s called. I think it was a great name at the beginning—though nowadays it’s rather misleading, because “Mathematica” is really about vastly vastly more than “math”. That’s always how I’d intended it, and I built the framework for that from the beginning. But math was its first “killer app”. Now, of course, there’s the Wolfram Language, which is what Mathematica has evolved into…

I’ve been pondering this question for several minutes. I feel like there have to be other examples, but I’m not thinking of them. I remember Marvin Minsky talking about self-driving cars back in the late 1970s, and I was like “yeah, whatever”. Oh yes, here’s another one. I remember long ago reading about Leibniz wanting to do what we’d now call computational law, and I was like “oh, I guess he didn’t really understand what computation is about”. But now I’m really interested in computational law, smart contracts, etc.—and I’ve realized that with the Wolfram Language we can actually implement the kinds of things Leibniz was talking about. (His PhD thesis was about resolving legal cases using logic, for example…)

Q: Hi Steph, for a beginner programmer, what language do you recommend starting with? —sangaloma

—and—

Q: Fairly certain he’d recommend his own language, of which there is a free ebook —d8uv

SW: Most definitely. Particularly check out http://blog.stephenwolfram.com/2016/01/announcing-wolfram-programming-lab/ (And, yes, it’s all free in our open cloud: https://lab.open.wolframcloud.com [Note: Wolfram Programming Lab is a legacy product.])

Q: You have an uncommon experience of being (and being around) many prominent figures in the scientific community. How has this influenced the development of the Wolfram Language?

Are there problems which were difficult to solve (historically) but can now be solved trivially using the Wolfram Language? If so, which are your favourites? —Jabernathy

SW: Designing a language that’s supposed to “know about everything” means one has to know about a lot of things oneself! It’s been absolutely crucial that I’ve been exposed to lots of different areas, and gotten to know the originators of lots of fields. At a practical level, it’s very common that I’ll want to get some judgement call on some detailed thing in some particular area. Now our company has a wide range of people who know about all sorts of different things. So my first step is just to think who in our company will know, or to consult our internal Who Knows What database. But then I’ll wonder who I can ask in the outside world. And it’s really wonderful being able to talk to experts in any possible field—and the founders of the field if they’re still around. I’ve had some really fascinating conversations that way.

About problems that become easy to solve with the Wolfram Language: yes, lots and lots and lots. People mostly just go and use Mathematica—or now the Wolfram Language—to solve problems, and I don’t hear about what they do. But it’s amazing how often I’ll be at some science or technology event and some prominent person will say “oh, yes, I invented or discovered some big thing using the Wolfram Language”… It’s really encouraging to me to hear these things—even if it’s sometimes a decade or more after it happened.

In terms of favorite uses: well, I originally started building what’s now the Wolfram Language so I could use it myself. And I’m really excited about the things I discovered with it in exploring the computational universe of simple programs—the stuff I talked about in my book A New Kind of Science. Really everything there was made possible by what’s now the Wolfram Language.

Q: If Turing had died before publishing his seminal Turing machine paper in the 1930s, how much would this have delayed the construction of digital computers? —escherbach

SW: Interesting question. The idea of universal computation actually arose at about the same time in three places: Gödel’s “general recursive functions”, Turing’s “Turing machines”, and Church’s “lambda calculus”. It turned out that all these formulations are equivalent, and that was actually known pretty quickly. But Turing’s is much easier to understand, and really was the only one that made the point of “universal computers” at all clear.
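(To make the Turing machine idea concrete, here’s a minimal sketch in the Wolfram Language. The rule table is an arbitrary 2-state, 2-color machine chosen just for illustration; the built-in TuringMachine function packages all of this up:

    (* rules map {head state, symbol read} -> {new state, symbol written, head movement} *)
    rules = <|{1, 0} -> {2, 1, 1}, {1, 1} -> {1, 1, -1},
              {2, 0} -> {1, 1, -1}, {2, 1} -> {2, 0, 1}|>;
    step[{s_, tape_, i_}] := Module[{ns, w, d},
      {ns, w, d} = rules[{s, tape[[i]]}];
      {ns, ReplacePart[tape, i -> w], i + d}];
    (* run 40 steps from a blank tape, with the head starting in the middle *)
    history = NestList[step, {1, ConstantArray[0, 81], 41}, 40];
    ArrayPlot[history[[All, 2]]]  (* tape contents at successive steps *)

The point of universality is that one particular machine of this type, fed a suitable initial tape, can emulate any other.)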

Having said that, Turing’s work was definitely not understood as being anything earth-shattering when it was done. After all, there’d been other “universal” representations suggested before, notably primitive recursive functions, which had turned out actually not to be as universal as people thought. Nobody really knew what kinds of things one might be trying to represent (mathematical functions, physics, minds, …?). And it actually wasn’t until the 1980s or so (perhaps even with some impetus from my own work) that the real “universality” of universal Turing machines began to be widely accepted. (Of course, we still don’t ultimately know if the universe can be represented by a Turing machine computation; it could be [though I don’t think so] that it has extra things going on.)

But back to digital computers. There’d been a long tradition of calculators, first mechanical, then electrical. And people started thinking in a rather engineering way about how to set calculators up better. And that’s really where the first computers (Atanasoff, ENIAC, etc.) came from.

But there was another thread. Back in 1943 Warren McCulloch and Walter Pitts had written a paper about their model of neural networks. And to “prove” that their neural networks were powerful enough to do everything brains can do, they appealed to Turing machines, showing that their networks could reproduce everything a universal Turing machine can do. John von Neumann was interested in modeling how the brain works, and he was enamored of McCulloch and Pitts’ work. And then he connected Turing machines with the digital computers that were starting to exist, and for which von Neumann was a consultant. (von Neumann hadn’t taken Turing machines seriously before; he actually wrote a recommendation letter for Turing back in the 1930s, ignoring Turing’s Turing machine paper, and saying that Turing had done good work on the Central Limit Theorem in statistics…)

It was quite relevant for the early development of computer science as a field that von Neumann connected Turing machines to practical computers. But I don’t think digital computers required it. I might mention that Turing himself in the 1950s used a computer in Manchester, writing what today looks like horribly hacky code, and I don’t think he really thought about the connection of Turing machines to what he was doing. He was by then interested in biological growth processes, and—somewhat shockingly—he didn’t use anything like Turing machines (or cellular automata) to model them, even though those actually turn out to be good models. Instead, he went back to plain differential equations…

Q: Once I read on the Internet that the late Richard Feynman was skeptical of the concept of “complexity research”. Do you think he changed his mind to a certain degree after encountering your ideas? Is this mentioned in your new ‘Idea Makers’ book? —knbknb

SW: Yes, you’re linking to a letter that Richard Feynman wrote to me! But the letter isn’t at all about the content of complexity research; it’s just about why Feynman thought it was a bad idea for me personally at age 26 to spend my time starting an institute doing complexity research. His basic argument was that I wouldn’t like running an organization, and wouldn’t be any good at it.

Well, needless to say, I ignored his advice, and did start a complexity institute. I quickly realized, though, that running an organization was just fine, but running an organization inside a university wasn’t such a good fit for me. So I started Wolfram Research, which I’ve now been running for almost 30 years. And actually, contrary to what Feynman suggested, it’s gone really well…

About the content of complexity research: Feynman was pretty excited about the things I was discovering with cellular automata in the early 1980s. I do talk about this a bit in Idea Makers. One of my favorite incidents was in 1984, when I was working on what’s called rule 30: a very simple cellular automaton that produces seemingly random behavior. Feynman and I were together at a company in Boston where we both did consulting work, and we were looking at a giant printout of rule 30. After a while, Feynman took me aside and asked “How did you know rule 30 would do all this crazy stuff?”. I explained that I didn’t; I just did computer experiments and found it. “Now I feel much better”, he said, “I thought you had some way to figure this out”. He actually spent quite a bit of time trying to “crack” rule 30, but eventually came back and said “OK, Wolfram, I think you’re onto something”…
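(For anyone who wants to recreate that printout, it’s a one-liner in today’s Wolfram Language:

    ArrayPlot[CellularAutomaton[30, {{1}, 0}, 200]]  (* rule 30 grown from a single black cell *)

The center column of the pattern is the part that looks random.)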

Q: Everyone likes to point out the scientific inconsistencies in sci-fi movies & TV shows (e.g., completely ignoring the laws of physics). But what do filmmakers miss when they portray scientists and innovators themselves? —NoXReX

SW: I’m often quite shocked at how bad the portrayals of science are even in high-budget movies. Sometimes I can see that getting the science wrong is necessary in order to have the story work. But often the bad science seems to be quite gratuitous. And I have to believe that for extremely little extra effort there’d be an extra market for these movies etc. if they got the science right.

A few years back, my company made the math content for a TV show called Numb3rs—and people really seemed to appreciate the “real math” (see http://numb3rs.wolfram.com/). The Wolfram Language and Mathematica get used a lot to create effects and sometimes displays for movies (e.g. by Kip Thorne et al. for Interstellar: http://www.wolfram.com/customer-stories/academy-award-visuals-mathematica-wolfram-language.en.html).

About a year ago I happened to get involved in doing science consulting for a fairly big-budget movie that’s now called Arrival that’s supposed to be released this fall. It was an interesting experience; I’m very curious how the actual movie will come out. Who knows, perhaps some of my mannerisms will even wind up portrayed by the actor who plays a physicist in the movie… (Oliver Sacks once told me what an uncanny experience he’d had watching Robin Williams portray him in a movie and interpret his various mannerisms so well…)

Q: If you had to write this book 20 years from now, who do you think you would include? —ViolatorMachine

SW: One of my (rather macabre) requirements for people being in this particular book was that they be dead. So I don’t want to tempt fate by making too many predictions :-)

The book is based on essays I wrote over about a 10 year period. I always had an “excuse” for writing the essay. Some of them were anniversaries; some were written as obituaries. I’m planning to go on writing essays about people (I find it really interesting…) And there are all sorts of interesting anniversaries coming up.

Of course, these anniversaries have a horrible habit of colliding with other things in my life—like new versions of the Wolfram Language coming out, and so on. And the anniversaries can’t be postponed…

Q: Sometimes career decisions and changes are tough. As a scientist, did you have any doubts about leaving academia and going into business to pursue your entrepreneurial ideas? —maiconpl01

SW: Different people are different of course. But for me what’s important is being able to have ideas, and turn them into reality. And it didn’t take me long to realize that I could do that much more effectively in an entrepreneurial business setting than in academia.

I have to say that successful academics typically operate a bit like entrepreneurs anyway—but usually with a bunch of constraints imposed by the big institutional structures they’re embedded in.

If you’re interested in something that there’s no conceivable business model around, then academia is a good place, so long as what you want to do is popular enough there. But in the modern world, there’s a great diversity of business models to be had (crowdfunding, web monetization, etc. etc.), and an awful lot of intellectual interests can probably be pursued as entrepreneurial businesses.

Of course, to be an entrepreneur you have to have a certain amount of practical business sense. I’ve always thought it was all just common sense, but watching what actually happens in lots of companies, I realize it’s not as common as I might have thought…

There’s sometimes an idea that if you’re good as an academic, you’ll be bad as an entrepreneur. I think they don’t need to be correlated. Now, of course, if you’ve spent years as an academic, and believe the most important thing in life is publishing papers, that’s not going to fly in the entrepreneurial world. But entrepreneurism is—like everything else—amenable to intellectual thinking. People sometimes have a nasty habit of not (as I tend to put it) “keeping the thinking apparatus engaged” when they’re trying to think about things other than their academic specialty.

For me personally, the path from academia to entrepreneurism was a bit complicated. I started my first company while I was still firmly an academic, and in fact I considered business my “hobby” for quite a few years. At my first company (which I started when I was 21), I didn’t have enough confidence in my business abilities (I had absolutely no real business experience) to be the CEO, so I brought in a CEO. After a few years I’d realized I wasn’t as clueless about business as I thought, and by the time I started Wolfram Research, I thought I pretty much knew what I was doing. I was still a professor for a little while, though, doing the company “on the side”. But I guess by the time Mathematica was launched in 1988, I’d transitioned to being a completely full-time CEO. And it’s been great. Still, in the last few years, it’s been great to do a little “extreme professoring” every year at our summer school (https://education.wolfram.com/summer)… and I’m still (at least in principle) an adjunct professor…

Q: Ramanujan faced tremendous opposition during his time working with Hardy due to his inability to conform to the practices of “traditional mathematics”. On the one hand, his insights (of spiritual origin or otherwise) ushered in new perspectives and problems, while on the other hand, on occasion he would be wrong. To what extent do you think feelings or intuitions about truth can or should play a role in the sciences and mathematics? With your answer in mind, how does that affect your perception of Ramanujan’s legacy? —akwon97

SW: I think people are a bit confused about Ramanujan, as Hardy indeed was. From what I can tell, Ramanujan’s big secret is that he was an experimental mathematician. Instead of figuring out what was true in math by proving theorems, he did experiments. Today we would do the experiments with computers (and Mathematica!), but in Ramanujan’s time, he did the experiments by calculating by hand. He got all kinds of results. But the question was: what did they mean? Was there a bigger picture, or was he just finding “random results”? (E.g. Exp[Pi Sqrt[58]] is very close to an integer.)
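(That last example is easy to check in the Wolfram Language:

    N[Exp[Pi Sqrt[58]], 25]

which comes out as 24591257751.99999982…, within about 2*10^-7 of an integer.)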

Ramanujan would extrapolate from his experimental results, using his intuition. Usually he would get the right answer. Sometimes his extrapolation wouldn’t be justified, and he’d get things wrong. But in general his intuition was really really good. He was essentially perceiving abstract patterns and structures that years later mathematicians would have ways to discuss abstractly, but at the time had no way to talk about.

Intuition is always important in science. Not least because it’s what tells you what to study. And I might say that in general success in science tends in my observation to be much more correlated with good strategy in figuring out what to study than with having good skills in actually doing the studying. The trick with intuition, though, is not to hang onto it too much. Yes, you may have a feeling about how something should work. But then you actually do the experiment, or calculation, or whatever. And you may find out that your intuition was wrong. It’s critical to know when to let go.

In my own life, I originally had the standard intuition that simple programs should only be able to show simple behavior. But then—back in 1981 or so—I actually did computer experiments to check this, and found out that it wasn’t true. And it was that discovery that launched everything I’ve done on A New Kind of Science, and eventually made me realize that Wolfram|Alpha was possible, etc. etc.

Q: Have you read the book Innovators by Isaacson? How do you compare that book with your Idea Makers? —DataComputist

SW: I have a copy of the book, but I haven’t read it, so I can’t comment intelligently on it.

I should emphasize the subtitle of Idea Makers: “Personal Perspectives on the Lives & Ideas of Some Notable People”. Idea Makers is not a systematic account of a particular arc of history, or an analysis of the general dynamics of ideas. Instead, it’s a collection of personal perspectives on an eclectic collection of particular people. Some of the pieces are very detailed, some less so. Some are about people I personally knew; others are about historical figures. But each piece stands on its own, and can be read independently of the others.

Q: Hi, Dr. Wolfram; thank you, again, many years after the fact, for some time you spent talking with me and my classmates after we completed a course based around A New Kind of Science. I look forward to checking out Idea Makers, as well. I’m curious to see what you have to say about the women makers of history, including Lady Lovelace. In fact, here’s a bigger question, related to another asked below: what patterns, if any, did you notice in the lives of the women you researched? Do you have any insight into women’s role in intellectual history of the past, and in discovery going forward? —radicalcraft

SW: I was rather disappointed that when I came to put Idea Makers together, only one piece I’d written was about a woman (Ada Lovelace). I’m afraid that’s a feature of the historical demographics: there have in the past been fewer women than men in science. In these times, that’s changed, particularly in things like the life sciences. And I have to say I’ve noticed that when I give talks to high-school students, there’s been a noticeable trend in recent years that it’s the women more than the men who come up afterwards and seem to be the most engaged.

Q: Hi there, what is your process to begin writing pieces like this once you’ve chosen a subject? —tigerrss18

SW: It’s different in different cases. Some of the pieces I’ve written have basically been obituaries of people I knew well. What has typically happened there is that when I hear someone has died, I sit down as soon as possible and start to write. I have well organized personal archives (see e.g. http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/), so my very first step is to search my archives to remember the various interactions I’ve had over the years. Usually even for people I knew well there are parts of their lives I don’t know enough about, and then I end up contacting other people who knew them, or doing background research. (For one piece, I was asking the daughter of the subject questions, and it came to light that she didn’t know where her mother had been born. A research assistant of mine found out—along with getting the actual flight manifest from when her mother and father had come to the US together…)

For historical people my process is a little different. The most important thing as far as I’m concerned is to read primary documents. Read the papers people wrote, and as many letters and other documents as possible. It’s great in modern times that so much has been scanned and is on the web—but often I’ve still had to physically go to archives to see things, and there’s a certain additional connection one gets by actually touching the original documents and seeing them all together.

Usually when I’m working on one of these pieces I have a big pile of books that I’m going through. I have a habit of never trusting secondary sources. They can be really good, but they can also make horrible mistakes (mistranscribed documents, muddled timelines, misunderstood scientific terms, etc.)

Usually at the beginning (as with most projects) I feel a bit confused. I know the high points of a life, but I can’t really explain how the person went from here to there. My goal for myself is always to be able to explain everything. I think historians of science sometimes give up too easily, just assuming that at some point their subject had a flash of inspiration that can’t be explained. My experience—from my own life and from doing history—is that there always is a path. Ideas don’t just pop out; they are gradually made as a result of a whole sequence of steps. Sometimes there are serious mysteries: how exactly did someone get from here to there? But I don’t write the piece until I feel I’ve resolved them.

Sometimes there are quite major issues that are hard to resolve. With Ada Lovelace I really didn’t know at the beginning how much Ada had done, versus Babbage. I needed to trace things in considerable detail, looking exhaustively at lots of documents, to be sure about that. But once I was clear on the core facts, everything else fell into place, and made sense.

For the Ada Lovelace piece, I had a clear deadline: Ada’s 200th birthday on December 10, 2015. Needless to say, I left it quite late to get started. But many places were very helpful, and I soon had lots of scanned documents and so on—and after a while I thought I had gotten the whole story straight. Still, I had some nagging concerns about things I might have missed. So—at a day’s notice—I flew to England, arrived on December 8, and went straight from the airport to the British Library, then the Science Museum, then libraries in Oxford. By the end of the day I had verified what I needed to verify—as well as having found some additional things. The next day I went to Ada’s “birthday party” in Oxford, and after the dinner there, ended up staying up all night and finishing the piece. The next morning I posted it on my blog, and got on a plane to the US, going straight to our company headquarters to give my year-end summary talk for our employees… (When I was at the Museum of the History of Science in Oxford, a person helping me there had asked “when do you expect to publish what you’re working on?”. He was expecting a long timeline, and was very surprised when I said “tomorrow morning”…)

Q: Hi Stephen, Do the individuals in your book have a commonality in their approach to making life choices? Especially concerning what direction to investigate for finding new ideas. What is it? —jacobllcc

SW: They’re all different. But looked at together, one of the things that surprised me is how important “external stimuli” were to people. One might have thought that everyone would need to be “left alone to think”. But to the contrary, a lot of the people I profiled had their best ideas as a result of being stimulated by something external—and often practical—that they were involved in.

One of the things I noticed (and I’ve noticed it for myself many times too) is that even if the beginning is a seemingly mundane problem, it can often lead to a deep intellectual point. One of the distinguishing features of the people I profiled is that they would engage with the seemingly mundane problem, and then dig to get to the deep intellectual point.

From my own observations, people often reject the mundane problems because they don’t think the problems are intellectually worthy of them. And if they do start trying to solve the problems, they don’t dig deeper: they just come up with a particular surface-level solution. In my own life, particularly in technology development, I’m amazed at how many really interesting intellectual things have come out of actually trying to get to the bottom of what at first seem like mundane problems.

I guess it’s a slightly different direction, but I’ve noticed a kind of algorithm for making major progress. Take a field and start off by asking “what is the fundamental problem in this field?”. The vast majority of the time, particularly for older fields, people have pretty much given up on the fundamental problem. They’re working on specific smaller problems. But they figure the fundamental problem is too hard. Often it’s relegated to a mention in the introductions to textbooks, and then dropped thereafter. I think it’s interesting to just try attacking the fundamental problem, particularly if one can bring new tools, or insights from other fields, to it.

Q: Which do you think is more important to the history of science: the accomplishments of scientific geniuses or the broader research trends of the scientific community? —Empigee

SW: As in so many things, new ideas and new directions in science are most often the result of leadership by one or a small number of people. But almost always, they have to build on lots of detailed work that’s been done by many other people.

But even after someone comes up with a great idea, there’s the question of how it gets absorbed into the canon of science. And that’s a process that usually involves the broader scientific community. I say usually because there are cases where for example the idea gets used in technology, and becomes visible there, long before the scientific community really takes it seriously.

Mathematics is a great example of where the scientific community can be crucial in what happens. Someone (like Ramanujan) can discover some amazing mathematical fact. And it can be correct, and verified, and everything. But does anyone care? Well, that depends on whether the fact connects to what people understand. A random fact won’t be absorbed into the canon of science; it has to have a whole structure—a whole story—around it.

I’ve been involved in an interesting case of all this with A New Kind of Science and the exploration of the computational universe. Long before I did my work there were isolated examples of simple programs that showed complicated behavior. (Several people in Idea Makers actually saw these.) But nobody knew what they meant; they considered them curiosities and ignored them. When I started studying such things I found so many examples, that were so striking, that it became clear that there had to be a bigger story, which I think I managed to elucidate in A New Kind of Science. There’ve been thousands of papers now based on A New Kind of Science, and plenty of technology. I tried to do my part, as the “individual scientist” bringing these ideas together. Now there’s going to be a many-decade process of absorption throughout the world of science, in the process of which many important details and applications will be filled in. But I don’t think this would happen without my “individual scientist” effort.

Q: Hi Stephen, I’m a huge fan of all you have done for mathematics, and computation. You’ve obviously spent a lot of time dealing with a huge gamut of personalities, egos, and intelligence of both known and up-and-coming scholars. Using that as the boundary conditions, where do you see the next unique breakthrough?

Side question: I heard that Feynman frequented strip clubs to focus on his work. Did you ever join him, or hear of any unique discoveries whilst among the lesser clothed? —daGonz

SW: I’m running low on time here… so I’m not going to tackle the “next breakthrough” part. Stay tuned :-) About Feynman: no, I never went to anything like a strip club with him. With me, he always seemed a bit embarrassed about those interests of his when they came up. I have to say that from my observation (not at strip clubs), Feynman just seemed to liven up when he was around young people, and particularly younger women. Feynman had a hobby of doing drawings, often of women (his daughter even published a book collecting a bunch of them). A friend of mine was a long-time model of his—and I have to report that she always considered him a perfect, and very charming, gentleman.

One feature of Feynman was that he was always seeking out people who were “different” in some way. He was not a big fan of erudite academics, and quite often made fun of them, sometimes with various imitations and the like. (A particular imitatee was Murray Gell-Mann, the inventor of quarks, and the person who had an office two doors down from Feynman for decades…) Feynman’s choice of “different” people was often odd from my perspective. He had a particular liking for what seemed to me to be extremely fringy artistic and spiritual people. They would quite often spout complete nonsense about science, but Feynman didn’t mind. I think he found it interesting to just hear different perspectives.

I remember one time when Feynman and I were at a dinner with Werner Erhard, founder of a rather peculiar cult-like California-based organization called EST. Erhard had decided to organize a physics conference, and Feynman told me I should come. After the dinner, Feynman insisted on talking for hours about what traits people like Erhard had to have in order to let them lead people to do extreme things. He really wanted to understand a person like Erhard.

Q: Hi, this looks like a good read, but I’m a little concerned that having only mustered a “C” in GCSE maths, it might go a little over my head. I wouldn’t know a “general recursive function” if it got up in front of me and started singing “general recursive functions are here again”. Would I be punching above my weight here? —perpetual_C000009A

SW: It’s really a book about people and the stories of their lives. I don’t hold back in mentioning technical ideas, but I think the stories stand on their own even if you’re not into the technical stuff.

Q: How come “reddit ama” on wolframalpha.com doesn’t lead to this AMA? —batmansavestheday

SW: I think that’s a bug :-) We should fix it…

Q: Hey Stephen, thanks for the AMA! Ordered Idea Makers yesterday, very excited to read it.

My question is, what is the chapter on you going to look like when it’s all said and done? Also, any advice for a college student switching majors to computer science? :-) —tesla69

SW: For a long time I only wrote about scientific and technical things. I finally started writing about history and biography. I haven’t quite made it to full-on autobiography yet!

Still, there’s a scrapbook of some of my life & times at http://www.stephenwolfram.com/scrapbook … and a few months ago I gave a talk at the Computer History Museum on my life in technology http://blog.stephenwolfram.com/2016/04/my-life-in-technology-as-told-at-the-computer-history-museum/

I’ve been fortunate enough to do quite a few things in my life so far. And I feel like I’ve built a very good structure for doing more things. And even though I’ve got quite a few decades behind me, I actually still feel like I’m just getting started. I’ve averaged one large new project per decade; I’m hoping I have quite a few more in me…

About computer science: Just make sure you really understand computational thinking, not just programming. I’m very enthusiastic about the Wolfram Language as a vehicle for learning true computational thinking. (That’s not what I originally built it for, but it turns out to be great for that.) See e.g. https://www.wolfram.com/language/elementary-introduction/

Q: In your book, you briefly mention your connection with Jobs, who perhaps seems like he had a greater influence on you and your work (NKS), yet you write largely about Ada (who might not have had such a large influence). Any particular reason why? —digi-ad

SW: My book is a collection of pieces I’ve written at different times, and for different purposes. The piece on Steve Jobs I wrote the evening of the day he died. The piece on Ada I wrote for the 200th anniversary of her birth—for which I had a lot more time to prepare.

Q: During an interview with Steven Levy you proposed we could eventually become part of the box of a trillion souls. Couldn’t we already be inside of one? It’s an ancient question, but I could ask anything. So, can sentient AI prove how it was created? Sorry if it is a stupid question. —ScienceRecruit

SW: This whole “simulation argument” is a complicated philosophical muddle. Assume the universe is governed by some definite set of laws. Perhaps they’re even what I’ve been investigating, based on a network of tiny nodes “below” what we think of as space and time. What we experience is created by the processes that go on with those little nodes. It’s very much like how a virtual world is “created” by the processes going on in the computer that’s running it.

But we think of the virtual world as a “simulation” because there’s a programmer who created it; there’s an intention to create it. But that intention is just the result of processes inside the programmer’s brain (or their boss’s brain, or whatever). The question is how to compare those processes with the ones that might be operating at the lowest level in our universe. My Principle of Computational Equivalence says there ultimately isn’t any difference.

This is pretty complicated philosophy, and I’m not giving all the details here. Hopefully I’ll be able to write it out better someplace one of these days.

Q: The IT world has re-discovered neural networks; these days there is a lot of hype around deep learning. It’s rather quiet at the “Cellular Automata” frontier (at stackoverflow.com, ~50 questions are tagged as such). Does this bother you? —knbknb

SW: Neural networks are such a fascinating example in the history of science. They were invented pretty much fully formed in 1943. People investigated them, but multiple times concluded they “couldn’t do anything interesting”. I even spent a bunch of time myself on them around 1980, and also couldn’t get them to do anything interesting. But now, suddenly, with more powerful computers and bigger training sets, neural networks are doing things that really impress us—and the whole field is incredibly hot. (See e.g. http://blog.stephenwolfram.com/2015/05/wolfram-language-artificial-intelligence-the-image-identification-project/)

Cellular automata are in a sense much cleaner systems to study, and there’s plenty of basic science to do on them. There are also plenty of applications—particularly in modeling, image processing, etc.

Neural networks can be thought of as combinations of lots of simple elements. Right now those simple elements are things where calculus can be used to study and train them (by the way, neural networks are a great application of calculus; perhaps the best ever). But in general those elements could be programs, like cellular automata.

Many times I’ve done “algorithm discovery”, looking for cellular automata that do particular things. The only real method I’ve had for doing this is exhaustive search, often across trillions of rules. But neural networks give one hints of another approach. And I wouldn’t be surprised if some really really interesting things come out of combining neural networks with simple programs like cellular automata. Stay tuned… (Yes, convolutional neural networks are already basically continuous inhomogeneous cellular automata.)
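(To give a feel for that kind of search, here’s a toy version in the Wolfram Language. The criterion is made up just for illustration: it picks out elementary cellular automaton rules whose center column is roughly half black, as rule 30’s is:

    (* density of the center column over 300 steps, starting from a single black cell *)
    centerDensity[r_] := N[Mean[CellularAutomaton[r, {{1}, 0}, 300][[All, 301]]]];
    candidates = Select[Range[0, 255], 0.4 < centerDensity[#] < 0.6 &]

The real searches run over trillions of rules, with much more specific criteria, but the shape is the same: enumerate, test, select.)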

Q: How realistic do you think it is for data science as a career to become obsolete due to automation in the next twenty years? —twiwtley

SW: What’ll always be important is figuring out what questions one cares about. One of the things we’re doing with the Wolfram Language is to automate a very broad range of the actual operations one needs to do in data science. Oh, and we already have lots of machine learning capabilities for pulling out “interesting things”…
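(A tiny instance of that kind of automation, using the built-in Classify with made-up data:

    c = Classify[{1.1 -> "low", 0.7 -> "low", 9.8 -> "high", 11.2 -> "high"}];
    c[10.4]  (* most likely "high"; the method and parameters are chosen automatically *)

One states the question, and the machinery underneath gets picked automatically.)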

Q: How do you propose we expand STEM education in our public schools? Do you think your software can help? —mistermocha

SW: Lots to say about this. For a start, see https://lab.open.wolframcloud.com [Note: Wolfram Programming Lab is a legacy product.]

Q: How has reaction to your book on the Wolfram Language surprised you, in any way? Have you found it has opened doors to perhaps an unconventional audience, or has somebody sculpted an interesting project that you didn’t anticipate? —RabbitHats

SW: I wrote An Elementary Introduction to the Wolfram Language (wolfr.am/eiwl) to be an introduction to programming and computational thinking for people with no programming experience and no knowledge of math. One of the surprises has been hearing that it’s being used in graduate math classes, as well as at places like banks. But my favorite surprise is the age range of its readers. I had assumed it was “high school and up”. But it turns out there are all sorts of 11 year olds reading it too (I haven’t heard of anyone younger reading it on their own). I’ve had a chance to interact with some of those 11 year olds. It’s a bit surreal talking to someone who’s learned Wolfram Language at that age well enough to be able to speak in code. (I can type code but I don’t think I can form complex nested functions in spoken form…)