Stephen Wolfram Q&A
Some collected questions and answers by Stephen Wolfram
Questions may be edited for brevity; see links for full questions.
May 29, 2018
From: Interview by Byron Reese, Gigaom.com
What do you think the future is going to be like, in 10 years, 20, 50, 100?
What we will see is an increasing mirror on the human condition, so to speak. That is, what we are building are things that essentially amplify any aspect of the human condition. Then it, sort of, reflects back on us. What do we want? What are the goals that we want to achieve? It is a complicated thing, because certainly AIs will in some sense run many aspects of the world. For many kinds of systems, there’s no point in having people run them. They’re going to be automated in some way or another. Saying it’s an AI is really just a fancy way of saying it’s going to be automated. Another question is: what are the overall principles that those automated systems should follow? For example, one principle that we believe is important right now is the “be nice to humans” principle. That seems like a good one given that we’re in charge right now; better to set things up so that it’s like, “Be nice to humans”. Even defining what it means to be nice to humans is really complicated.
I’ve been much involved in trying to use Wolfram Language as a way of describing lots of computational things, and an increasing number of things about the world. I also want it to be able to describe things like legal contracts and, sort of, desires that people have. Part of the purpose of that is to provide a language, understandable both to humans and to machines, that can say what it is we want to have happen globally with AIs. What general ethical principles and philosophical principles should AIs operate under? We had Asimov’s Laws of Robotics, which are a very simple version of that. I think what we’re going to realize is that we need to define a Constitution for the AIs. And there won’t be just one, because there isn’t just one set of people. Different people want different kinds of things. And we get thrown into all kinds of political-philosophy issues: should you have an infinite number of countries, effectively, in the world, each with their own AI constitution? How should that work?
One of the fun things I was thinking about recently is that, in current democracies, one just has people vote on things. It’s like a multiple-choice answer. One could imagine a situation, and I take this mostly as a thought experiment because there are all kinds of practical issues with it, in a world where we’re not just natural-language literate but also computer-language literate, and where we have languages, like Wolfram Language, which can actually represent real things in the world. There, one could imagine not just voting “I want A, B, or C”, but effectively submitting a program that represents what one wants to see happen in the world. And then the election consists of taking some number of millions of programs and saying, “OK, given these millions of programs, let’s apply our AI Constitution to figure out how we want the best things to happen in the world”. Of course, you’re then thrown into precisely the issues of the moral philosophers and so on: what you then want to have happen, and whether you want the average happiness of the world to be higher, or whether you want the minimum happiness to be at least something, or whatever else.
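The thought experiment above can be sketched minimally in code. Everything here is a hypothetical illustration, not anything from Wolfram Language or an actual proposal: voter “programs” are plain Python functions that score candidate outcomes, and the “AI Constitution” is reduced to just an aggregation rule, contrasting the two moral-philosophy options mentioned, maximizing average happiness versus maximizing minimum happiness.

```python
# Toy sketch: voters submit programs (here, functions scoring outcomes),
# and an "AI Constitution" is an aggregation rule applied to those scores.
# All names and numbers below are made up for illustration.

def aggregate(voter_programs, outcomes, rule="average"):
    """Pick the outcome favored under a given aggregation rule.

    rule="average" maximizes the mean score (average happiness);
    rule="maximin" maximizes the worst-off voter's score (minimum happiness).
    """
    def score(outcome):
        scores = [program(outcome) for program in voter_programs]
        if rule == "average":
            return sum(scores) / len(scores)
        elif rule == "maximin":
            return min(scores)
        raise ValueError(f"unknown rule: {rule}")

    return max(outcomes, key=score)

# Three "submitted programs": outcome "A" delights two voters but harms a
# third; outcome "B" is mediocre for everyone.
voters = [
    lambda o: {"A": 1.0, "B": 0.3}[o],
    lambda o: {"A": 1.0, "B": 0.3}[o],
    lambda o: {"A": -0.5, "B": 0.3}[o],
]
outcomes = ["A", "B"]

print(aggregate(voters, outcomes, rule="average"))  # prints "A"
print(aggregate(voters, outcomes, rule="maximin"))  # prints "B"
```

The two rules pick different winners on the same ballots, which is the point of the philosophical question: the choice of constitution, not just the votes, determines the result.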
There will be increasing pressure on what these law-like things, which are really going to be effectively the programs for the AIs, should look like. What aspects of the human condition and human preferences should they reflect? How will that work across however many billions of people there are in the world? How does that work when, for example, a lot of the thinking in the world is not done in brains but is done in some more digital form? How does it work when there is no longer the notion of a single person? Right now that’s a very clear notion. It won’t be such a clear notion when more of the thinking is done in digital form. There’s a lot to say about this.