Stephen Wolfram Q&A

Some collected questions and answers by Stephen Wolfram

Questions may be edited for brevity; see links for full questions.

March 8, 2017

From: Interview by John Horgan, Scientific American

Are autonomous machines, capable of choosing their own goals, inevitable? Is there anything we humans do that cannot—or should not—be automated?

When we see a rock fall, we could say either that it’s following a law of motion that makes it fall, or that it’s achieving the “goal” of being in a lower-potential-energy state. When machines—or for that matter, brains—operate, we can describe them either as just following their rules, or as “achieving certain goals”. And sometimes the rules will be complicated to state, but the goals will be simpler, so we’ll emphasize the description in terms of goals.
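To make the rock analogy concrete, here is a minimal Python sketch (not part of the original answer) that describes the same falling rock in both ways: by stepping its law of motion, and by stating its “goal” of reaching the lowest potential energy. The constants, function names, and numbers are illustrative assumptions, not anything from the interview.

```python
# A minimal sketch contrasting two descriptions of the same falling rock:
# stepping a law of motion versus jumping straight to the state of lowest
# potential energy. All values are illustrative.

G = 9.81        # gravitational acceleration, m/s^2
DT = 0.001      # integration time step, s


def follow_rule(height: float) -> float:
    """Describe the rock by its rule: integrate dv/dt = -g until it lands."""
    h, v = height, 0.0
    while h > 0.0:
        v -= G * DT          # the "law of motion", applied step by step
        h += v * DT
    return max(h, 0.0)       # final height after following the rule


def achieve_goal(height: float) -> float:
    """Describe the rock by its 'goal': the lowest reachable potential
    energy, which for a rock above flat ground is simply height zero."""
    return 0.0


if __name__ == "__main__":
    start = 10.0  # metres above the ground
    print("rule-based description ends at height:", follow_rule(start))
    print("goal-based description ends at height:", achieve_goal(start))
```

Both routes pick out the same final state; they differ only in whether the emphasis falls on the step-by-step rule or on the end condition it happens to satisfy.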

What is inevitable about future machines is that they’ll operate in ways we can’t immediately foresee. In fact, that happens all the time already; it’s what bugs in programs are all about. Will we choose to describe their behavior in terms of goals? Maybe sometimes. Not least because it’ll give us a human-like context for understanding what they’re doing.

The main thing we humans do that can’t meaningfully be automated is to decide what we ultimately want to do.
