Pedro Domingos, an AI researcher at the University of Washington and the author of The Master Algorithm, on the podcast Talking Machines:
There are these fears that computers are going to get very smart and then suddenly they’ll become conscious and they’ll take over the world and enslave us or destroy us like the Terminator. This is completely absurd. But even though it’s completely absurd, you see a lot of it in the media these days…
Domingos doesn’t identify which articles he’s talking about, but nearly all the articles like this that I’ve seen lately are inspired by comments on AI risk from Stephen Hawking, Elon Musk, and Bill Gates, which in turn are (as far as I know) informed by Nick Bostrom’s Superintelligence.
None of these people, as far as I know, have expressed a concern that machines will suddenly become conscious and then take over the world. Rather, these people are concerned with the risks posed by extreme AI competence, as AI scientist Stuart Russell explains.
I don’t know what source Domingos read that talked about machines suddenly becoming conscious and taking over the world. I don’t think I’ve seen such scenarios described outside of fiction.
Anyway, in his book, Domingos does seem to be familiar with the competence concern:
The point where we could turn off all our computers without causing the collapse of modern civilization has long passed. Machine learning is the last straw: if computers can start programming themselves, all hope of controlling them is surely lost. Distinguished scientists like Stephen Hawking have called for urgent research on this issue before it’s too late.
Relax. The chances that an AI equipped with the [ultimate machine learning algorithm] will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear…
[AI systems] can vary what they do, even come up with surprising plans, but only in service of the goals we set them. A robot whose programmed goal is “make a good dinner” may decide to cook a steak, a bouillabaisse, or even a delicious new dish of its own creation, but it can’t decide to murder its owner any more than a car can decide to fly away.
…[The] biggest worry is that, like the proverbial genie, the machines will give us what we ask for instead of what we want. This is not a hypothetical scenario; learning algorithms do it all the time. We train a neural network to recognize horses, but it learns instead to recognize brown patches, because all the horses in its training set happened to be brown.
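As an aside, the "brown horses" failure Domingos describes is easy to reproduce in miniature. Below is a minimal sketch (mine, not from the book, and assuming numpy and scikit-learn are available) of a classifier trained on data where every horse happens to be brown: it latches onto color, the cleaner signal, and then misclassifies a grey horse and a brown cow.

```python
# Minimal illustration of a model learning a spurious feature (color)
# instead of the intended concept (horse-ness).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two toy features per image: [brownness, horse_shape_score].
# In the training set every horse is brown, so the features are confounded.
n = 500
is_horse = rng.integers(0, 2, n)
brownness = is_horse + rng.normal(0, 0.1, n)    # near-perfect proxy for the label
horse_shape = is_horse + rng.normal(0, 0.8, n)  # the "real" but noisier signal
X_train = np.column_stack([brownness, horse_shape])

clf = LogisticRegression().fit(X_train, is_horse)
print("learned weights [brownness, shape]:", clf.coef_[0])

# At test time the confound breaks: a grey horse and a brown cow.
grey_horse = [[0.0, 1.0]]
brown_cow = [[1.0, 0.0]]
print("grey horse classified as horse?", bool(clf.predict(grey_horse)[0]))
print("brown cow classified as horse?", bool(clf.predict(brown_cow)[0]))
```

The model gives us what we asked for (separate the training images) rather than what we wanted (recognize horses), which is exactly the "genie" point in the passage above.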
I was curious to see his rebuttal to the competence concern (“machines will give us what we ask for instead of what we want”), but this section just ends with:
People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.
Which isn’t very clarifying, especially since elsewhere he writes that “any sufficiently advanced AI is indistinguishable from God.”
The next section is about Kurzweil’s law of accelerating returns, and doesn’t seem to address the competence concern.
So… I guess I can’t tell why Domingos thinks the chances of a global AI catastrophe are “zero,” and I can’t tell what he thinks of the basic competence concern expressed by Hawking, Musk, Gates, Russell, Bostrom, etc.
Update 05-09-2016: For additional Domingos comments on risks from advanced AI, see this episode of EconTalk, starting around minute 50.
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)