Stephen Wolfram Q&A


Some collected questions and answers by Stephen Wolfram

Questions may be edited for brevity; see links for full questions.

July 27, 2015

From: Interview by Byron Reese, Gigaom

How many years away, in your mind, are we from AI/robot rights becoming a mainstream topic? Is this a decade, or 25 years, or …?

I think more like a decade. Look, there are going to be skirmish issues that come up more immediately. The issues that come up more immediately are, “Are AIs responsible?” That is, if your self-driving car’s AI glitches in some way, who is really responsible for that? That’s going to be a pretty near-term issue. There are going to be issues about, “Is the bot a slave, basically? Is the bot an owned slave, or not? At what point is the bot responsible for its actions, as opposed to the owner of the bot? Can there be un-owned bots?” That last one, just for my amusement, I’ve thought about: how the world would have to be set up in the rather amusing scenario of un-owned bots, where it’s just not clear who has any rights over the thing, because it doesn’t have an owner. A lot of the legal system is set up to depend on there being a responsible party; companies are one example. There was a time when it was just, “there’s a person, and they’re responsible”, and then there started to be this idea of a company.

So I think the answer is that there will be skirmish issues that come up in the fairly near term. On the big “bots as another form of intelligence on our planet” question, I’m not sure. Right now, the bots don’t particularly have advocates, but I think there will be scenarios in which that ends up happening. As I say, there are these questions about the extent to which a bot is merely a slave of its owner. I don’t even know what happened historically with that, to what extent there was responsibility on the part of the owner. The emancipation of the bots: it’s a curious thing. Here’s another scenario. When humans die (which they will continue to do for a while) but many aspects of them are captured in bots, it will seem a little less like, “Oh, it’s just a bot. It’s just a bag of bits that’s a bot”. It will be more like, “Well, this is a bot that captures some of the soul of this person, but it’s still just a bot”. And maybe it’s a bot that continues to evolve on its own, independent of that particular person. And then what happens to that sort of person-seeded but evolved thought? At what point do you start feeling that it isn’t really right to just say, “Oh, this bot is just a thing that can be switched off, with no expectation of protection and continued existence”?

I think that’s a transitional phenomenon. I think it’s going to be a long time before there is serious discussion of generic cellular automata having rights, if that ever happens; in other words, rights for something disconnected from the history of our civilization, something that is not connected to and informed by the things we care about. For things created in this knowledge-based, language-based way, connected to our civilization, it’s going to be a much more near-term issue. At some level, then, we’re back to “Does the weather have a mind of its own?” and “Do we have to give rights to everything, animistically?” It’s animism turned into a legal system, and, of course, there are places where that’s effectively happening. But the justification there is not a “don’t mess with the climate, it has a soul” type of thing. That’s not really the rhetoric about it, although I suppose with the Gaia hypothesis one might not be so far away from that.
