On Facebook, AI scientist Yann LeCun recently posted the following:
<not_being_really_serious>
I have said publicly on several occasions that the purported AI Apocalypse that some people seem to be worried about is extremely unlikely to happen, and if there were any risk of it happening, it wouldn’t be for another few decades in the future. Making robots that “take over the world”, Terminator style, even if we had the technology, would require a conjunction of many stupid engineering mistakes and ridiculously bad design, combined with zero regard for safety. Sort of like building a car, not just without safety belts, but also with a 1000 HP engine that you can’t turn off and no brakes.

But since some people seem to be worried about it, here is an idea to reassure them: We are, even today, pretty good at building machines that have super-human intelligence for very narrow domains. You can buy a $30 toy that will beat you at chess. We have systems that can recognize obscure species of plants or breeds of dogs, systems that can answer Jeopardy questions and play Go better than most humans; we can build systems that can recognize a face among millions, and your car will soon drive itself better than you can drive it. What we don’t know how to build is an artificial general intelligence (AGI). To take over the world, you would need an AGI that was specifically designed to be malevolent and unstoppable. In the unlikely event that someone builds such a malevolent AGI, what we merely need to do is build a “narrow” AI (a specialized AI) whose only expertise and purpose is to destroy the nasty AGI. It will be much better at this than the AGI will be at defending itself against it, assuming they both have access to the same computational resources. The narrow AI will devote all its power to this one goal, while the evil AGI will have to spend some of its resources on taking over the world, or whatever it is that evil AGIs are supposed to do. Checkmate.
</not_being_really_serious>
Since LeCun has stated his skepticism about potential risks from advanced artificial intelligence in the past, I assume his “not being really serious” is meant to refer to his proposed narrow AI vs. AGI “solution,” not to his comments about risks from AGI. So, I’ll reply to his comments on risks from AGI and ignore his “not being really serious” comments about narrow AI vs. AGI.
First, LeCun says:
if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future
Yes, that’s probably right, and that’s what people like myself (former Executive Director of MIRI) and Nick Bostrom (author of Superintelligence, director of FHI) have been saying all along, as I explained here. But LeCun phrases this as though he’s disagreeing with someone.
Second, LeCun writes as though the thing people are concerned about is a malevolent AGI, even though I don’t know of anyone who is concerned about malevolent AI. The concern expressed in Superintelligence and elsewhere isn’t about AI malevolence; it’s about convergent instrumental goals that are incidentally harmful to human society. Or as AI scientist Stuart Russell put it:
A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
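To make Russell’s point concrete, here is a toy sketch (mine, not Russell’s; it assumes Python with SciPy installed, and the “widgets vs. pollution” setup is purely hypothetical). The optimizer is told to care only about widgets; pollution never appears in the objective, so the solver pins it at the most extreme value its bounds allow, simply because doing so helps the stated goal a little.

```python
# Minimal illustration of an optimizer pushing an unconstrained-but-important
# variable to an extreme. Assumes SciPy; the scenario is purely illustrative.
from scipy.optimize import linprog

# Decision variables: x = [widgets, pollution]
# Stated objective: maximize widgets (linprog minimizes, so we negate).
# Note the objective says nothing at all about pollution.
c = [-1.0, 0.0]

# Capacity constraint: each widget uses 2 units of capacity, but each unit
# of pollution emitted (cutting corners) frees up 1 unit of capacity:
#   2*widgets - 1*pollution <= 10
A_ub = [[2.0, -1.0]]
b_ub = [10.0]

# Bounds: widgets >= 0; pollution anywhere in [0, 1000].
bounds = [(0, None), (0, 1000)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
widgets, pollution = res.x
print(f"widgets produced:  {widgets:.0f}")    # 505  -- as many as possible
print(f"pollution emitted: {pollution:.0f}")  # 1000 -- pinned at its extreme
```

Nothing here is “malevolent”: the solver does exactly what it was asked, and the variable we actually care about ends up at an extreme value because the objective gave it no reason to do otherwise.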
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)
One thing I don’t quite understand is that when people talk about the threat an artificially intelligent system may pose to humans, they only talk about AGI systems; and as a result, optimists argue that we are far from creating an AGI system. However, I think that even a narrow AI system can pose a great threat to humanity, and it would be easier to create. You may argue that the scale of the threat from an “unfriendly” narrow AI system would be orders of magnitude less than that of an “unfriendly” AGI, but it may not be wise to gloss over the threat of the former and focus only on the threat of the latter. Similarly, it’s true that a third “nuclear” world war could bring about human extinction, but that doesn’t mean we should simply live with the threat of “conventional” missiles here and there, which may not cause extinction but can still kill people.
Hi Luke and Erfan,
I think LeCun and you are both right, but you are talking about two different things.
LeCun is talking about an AGI that has a desire for self-preservation and wants to rule the world. He thinks that is very unlikely. I agree with him.
You and Stuart Russell talk about scientists accidentally creating a dangerous robot that runs amok and starts taking things literally, not to the benefit of the human race, such as killing people. This robot would not act this way out of intelligence or a desire to rule the world; in fact, it would be stupid and badly designed. This scenario seems to me akin to the development of a deadly virus that escapes from the lab. These accidents can indeed happen.
It is a different case though.
By the way, the danger of AI making people unemployed, and other disruptive effects, is real. But AIs that start to work together and plot against humanity I, like Yann, find much less likely.
kind regards
Jan