Artificial Intelligence

February 6, 1998

From: Interview by David Stork, HAL's Legacy: 2001's Computer as Dream and Reality

Do you really think that we can get a handle on profoundly hard, high-level problems of AI—such as my favorite, scene analysis—by looking at something as “simple” as cellular automata?

Definitely. But it takes quite a shift in intuition to see how. In a sense it’s about whether one’s dealing with engineering problems or with science problems. You see, in engineering, we’re always used to setting things up so we can explicitly foresee how everything will work. And that’s a very limiting thing. Read more
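
To make the "simple" concrete: an elementary, one-dimensional, two-color cellular automaton takes only a few lines of code. A minimal Python sketch follows; the choice of rule 30 and the grid width are arbitrary illustrative details, not anything specified in the interview.

```python
# Minimal sketch of an elementary (1D, two-color) cellular automaton.
# Rule 30 and the grid width are arbitrary choices for illustration.

def step(cells, rule=30):
    """Apply an elementary CA rule to one row of 0/1 cells (edges wrap)."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        # The three neighbors form a 3-bit index into the rule number's bits.
        idx = (left << 2) | (center << 1) | right
        out.append((rule >> idx) & 1)
    return out

row = [0] * 31 + [1] + [0] * 31   # a single black cell in the middle
for _ in range(20):
    print("".join(".#"[c] for c in row))
    row = step(row)
```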

February 6, 1998

From: Interview by David Stork, HAL's Legacy: 2001's Computer as Dream and Reality

Have you yourself worked much on the problem of building intelligent machines?

Well, since you ask, I’ll tell you the answer is yes. I don’t think I’ve ever mentioned it in public before. But since you asked the right question: yes, I have been interested in the problem for a very long time—probably 20 years now—and have been steadily picking away at it. Read more

May 14, 2012

From: Reddit AMA

What are your thoughts on genetic algorithms? Some researchers are looking into automated research and design, Hod Lipson being a great example. Do you think this idea has a future? Is it even advisable to let inquiry become automated?

I’m definitely interested in “automated discovery”. In fact, we have a bunch of experiments going on around Wolfram|Alpha Pro—being able to “tell people something interesting” about whatever data they upload. My experience with NKS has been that incremental (e.g. genetic) algorithms don’t allow one to find the really surprising results that one can get to with exhaustive search. Read more
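
The contrast he draws can be made concrete with a toy search problem. In the Python sketch below, the scoring landscape is deliberately contrived (a smooth peak plus one isolated spike, standing in for a “really surprising” result): a mutate-and-keep-improvements search tends to settle on the smooth peak, while exhaustive enumeration of all 256 elementary cellular-automaton rules cannot miss the spike. This illustrates the general point, not any actual NKS experiment.

```python
# Toy contrast between exhaustive and incremental (genetic-style) search.
# The scoring landscape is contrived: smooth near rule 128, with one
# isolated "surprising" spike at rule 30.
import random

def score(rule):
    return 100 if rule == 30 else -abs(rule - 128)

# Exhaustive search: try all 256 elementary CA rule numbers.
best_exhaustive = max(range(256), key=score)

# Incremental search: flip one random bit, keep the change if it helps.
random.seed(0)
rule = random.randrange(256)
for _ in range(1000):
    candidate = rule ^ (1 << random.randrange(8))  # mutate one bit
    if score(candidate) >= score(rule):
        rule = candidate

print("exhaustive finds:", best_exhaustive)  # always 30, the isolated spike
print("incremental finds:", rule)            # typically 128, the smooth peak
```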

July 27, 2015

From: Interview by Byron Reese, Gigaom

When do you first remember hearing the term “artificial intelligence”?

That is a good question. I don’t have any idea. When I was a kid, in the 1960s in England, I think there was a prevailing assumption that it wouldn’t be long before there were automatic brains of some kind, and I certainly had books about the future at that time… Read more

July 27, 2015

From: Interview by Byron Reese, Gigaom

Would you agree that AI, up there with space travel, has kind of always been the thing of tomorrow and hasn’t advanced at the rate we thought it would?

Oh, yes. But there’s a very definite history. People assumed, when computers were first coming around, that pretty soon, we’d automate what brains do just like we’ve automated what arms and legs do, and so on. Nobody had any real intuition for how hard that might be. It turned out, for reasons that people simply didn’t understand in the ’40s… Read more

July 27, 2015

From: Interview by Byron Reese, Gigaom

What is the state of the technology? Have we built something as smart as a bird, for instance?

Well, what does it mean to make something that is as smart as X? In the history of artificial intelligence, there’s been a continuing set of tests that people have come up with. If you can do X, then we’ll know you’re as smart as humans, or something like that. Almost every X that’s been defined so far… Read more

July 27, 2015

From: Interview by Byron Reese, Gigaom

What do you think this thing we call “self-awareness and consciousness” is?

I don’t know. I think that it’s a way of describing certain kinds of behaviors, and so on. It’s a way of labeling the world, it’s a way of—okay, let me give a different example, which I think is somewhat related, which is free will. It’s not quite the same thing as consciousness and self-awareness… Read more

July 27, 2015

From: Interview by Byron Reese, Gigaom

If we are fundamentally, at our core, deterministic, but we don’t look it because the math is beyond us, what do you think emotions are? Are they real in the sense that we feel them? Will the computer love, and will it hate?

Here’s the terrible thing. We’re building stuff that tries to do emotion-space analysis of things, and so on, looking at whether it’s facial expressions or text or whatever else, and in effect saying, “Okay, that means the dopamine level was up at this moment”, because the collective effect of the thinking that was happening results in the secretion of dopamine… Read more
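
As a hedged illustration of what “emotion-space analysis” of text can amount to mechanically, here is a minimal bag-of-words classifier in Python, trained on a tiny invented example set. It stands in for the general technique, not for any actual Wolfram system.

```python
# Illustrative stand-in for text emotion analysis: a bag-of-words model
# mapping text to a (here, two-class) "emotion space". The training data
# below is invented purely for the example.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I love this, wonderful day", "this is great news",
         "I hate this, awful result", "this is terrible news"]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["what wonderful news"]))       # expected: ['positive']
print(model.predict_proba(["what terrible news"]))  # per-class probabilities
```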

July 27, 2015

From: Interview by Byron Reese, Gigaom

How many years away, in your mind, are we from AI/robot rights becoming a mainstream topic? Is this a decade, or 25 years, or …?

I think more like a decade. Look, there are going to be skirmish issues that come up more immediately. The first of those is, “Are AIs responsible?” That is, if your self-driving car’s AI glitches in some way, who is really responsible for that? That’s going to be a pretty near-term issue. Read more

February 23, 2016

From: Reddit AMA

What do you think of the current “deep learning” methods? Will that fit into Wolfram software?

Yes, we’ve done a lot with these things, and will be doing a lot more. See, e.g., https://www.imageidentify.com, which we released a year ago. We’ve also got a lot of machine learning built directly into the Wolfram Language (and we use machine learning to automate it, so you don’t need machine-learning experts to use it). Read more
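
A rough Python analogue of that train-from-examples-without-expert-tuning workflow, using scikit-learn defaults on a small built-in digits dataset as a stand-in (this is not the Wolfram Language itself), might look like:

```python
# Sketch of "classify from examples" with no manual tuning: default
# hyperparameters, no hand-built features. Digits is a stand-in dataset.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)   # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```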

November 21, 2016

From: Interview by Sarah Lewin, Space.com

What are your thoughts on the challenge of communication in the movie Arrival?

I think that the basic notion of “what do we mean by intelligence” is—it’s very hard to have an abstract definition of intelligence that goes beyond just saying it does sophisticated computation. There’s an awful lot of stuff that does sophisticated computation, my favorite example being the weather—which many people would say has a mind of its own… Read more
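
That claim, that very simple systems already do sophisticated computation, is easy to demonstrate. The logistic map, one line of arithmetic, produces effectively unpredictable behavior; the weather is a vastly larger instance of the same phenomenon. A minimal Python sketch (the parameter 4.0 and the starting value are illustrative choices):

```python
# The logistic map at r = 4: a one-line rule whose behavior is chaotic,
# i.e. effectively unpredictable without running the computation itself.
x = 0.1
for n in range(60):
    x = 4.0 * x * (1.0 - x)
    if n >= 50:                # show a few late iterates
        print(n, round(x, 6))
```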

March 8, 2017

From: Interview by John Horgan, Scientific American

Are autonomous machines, capable of choosing their own goals, inevitable? Is there anything we humans do that cannot—or should not—be automated?

When we see a rock fall, we could say either that it’s following a law of motion that makes it fall, or that it’s achieving the “goal” of being in a lower-potential-energy state. When machines—or for that matter, brains—operate, we can describe them either as just following their rules, or as “achieving certain goals”. Read more
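
The two descriptions can be put side by side in a short numeric sketch (the numbers are illustrative): integrate the law of motion for a falling rock, then check that the potential energy V(z) = mgz, the quantity the rock is “trying” to lower, does in fact keep dropping.

```python
# Two descriptions of the same falling rock (illustrative numbers):
# (a) "following a law of motion": integrate dv/dt = -g, dz/dt = v;
# (b) "achieving a goal": the potential energy V(z) = m*g*z keeps falling.
m, g, dt = 1.0, 9.81, 0.01   # 1 kg rock, Earth gravity, 10 ms time steps
z, v = 10.0, 0.0             # released from rest at 10 m

for _ in range(100):         # simulate one second
    v += -g * dt             # the "law" description: Newton's second law
    z += v * dt              # update height from velocity
print(f"after 1 s: z = {z:.2f} m, V = {m * g * z:.2f} J")
# V drops steadily from its initial 98.1 J: the "goal" description
# of exactly the same run.
```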

March 21, 2018

From: Interview by Tim Urban, New York Magazine

Have we seen extraterrestrial intelligence? What might that mean?

Well, my own suspicion is that the first form of nonhuman-aligned intelligence that we’re getting exposed to is artificial intelligence. My guess about how things will develop historically is that we’ll get more and more used to the idea of AI and nonhuman intelligence and so on. Eventually, we’ll get used to the idea that there are these kinds of nonhuman intelligences embodied in what’s possible in the computational universe and what we can use for AI. Read more

November 20, 2019

From: Interview by Garett Sloane, AdAge

Will there ever be a perfect AI moral arbiter?

The answer is no. We humans would have to agree that this is the perfect ethical code we want to follow, and the reality and the lessons of history say that’s not something everybody is going to agree on. So, what do you actually do? You can make the final ranking of content something that users aren’t trusting these platforms to do, but something that they’re trusting brands to do. Read more