On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):
Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.
And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…
Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.
In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.[1]
That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).
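To make that concrete, here is a quick back-of-the-envelope sketch (Python, using numpy and scipy) of how little two such distributions might overlap. Every modeling choice in it is my own assumption for illustration only, not data from the survey or from Ng: I model the median survey respondent's timeline as a lognormal fit to the 2050/2070 answers, and a hypothetical "Ng-like" view as a lognormal with a median roughly 200 years out.

```python
# A rough, purely illustrative sketch of the "barely overlap" claim. The shapes and
# parameters below are assumptions, not data from the survey or the post: the median
# survey respondent is modeled as a lognormal over years-from-2015 fit to the 2050 (50%)
# and 2070 (90%) answers; the hypothetical "Ng-like" view is a lognormal with a median
# of ~200 years and a 90th percentile of ~400 years.
import numpy as np
from scipy import stats

Z90 = stats.norm.ppf(0.9)  # ~1.28, the standard-normal 90th percentile

def lognormal_from_quantiles(q50, q90):
    """Lognormal over years-from-now with the given 50th and 90th percentiles."""
    mu = np.log(q50)
    sigma = (np.log(q90) - mu) / Z90
    return stats.lognorm(s=sigma, scale=np.exp(mu))

median_researcher = lognormal_from_quantiles(35, 55)   # 2050 and 2070, counted from 2015
ng_hypothetical = lognormal_from_quantiles(200, 400)   # "hundreds of years" away

# Overlap coefficient: integral of min(pdf1, pdf2); 0 = disjoint, 1 = identical.
years = np.linspace(0.1, 1500, 30000)
dx = years[1] - years[0]
overlap = np.minimum(median_researcher.pdf(years), ng_hypothetical.pdf(years)).sum() * dx
print(f"overlap coefficient ~ {overlap:.2f}")  # comes out small, i.e. little overlap
```

Under those invented assumptions the overlap coefficient comes out small (on the order of 0.05); different modeling choices would of course move that number around.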
Of course, if I were pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.
That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.
I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.
(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)
Footnotes:

[1] Technically, the survey asked for confidence of AGI conditional on no major disruption to scientific progress, but I’d be very surprised if the median top-cited AI researcher has a >5% probability of major disruption to scientific progress in the next 25 years.
Comments:

While the median response was 2070, the mean response was 2168, and the standard deviation was 342 years (!). Thus, while the median top AI researcher respondent’s probability distribution barely overlaps with Ng’s, there were probably many top AI researchers who did have probability distributions similar to Ng’s. (Ng himself is ranked ~150, so he would not have been surveyed.)
Also, it is somewhat likely that AI researchers with farther-out forecasts would be less likely to respond to the survey, so the population median may be closer to Ng than the sample median.
Müller & Bostrom tested for such a selection effect and failed to find one; see the paper. That doesn’t mean there isn’t such an effect, though — their test is only weak evidence.
Fair point re: your first paragraph. I’ve tweaked the language in the post a bit.
Whether he’s out of sync with mainstream AI depends on what AI researchers at the 80th or 90th percentile think, not on how far he is from the median, as Sam E points out.
Just FYI, I’ve tweaked the language in the post in response to Sam’s comment about this.
Even if AGI were hundreds of years away, I would still find it an important topic to investigate now because there are many tractable research areas that MIRI has already identified (e.g., decision theory) that don’t depend much on the details of what AGI eventually looks like.
If I had a sharply bounded utility function and didn’t care that much more about astronomical outcomes vs. Earth-scale outcomes, then Ng’s view would make more sense. I suspect this is a lot of the explanation of why more A(G)I researchers aren’t worried.