Back in February, The Washington Post posted an opinion article by David Buchanan of the IBM Watson team: “No, the robots are not going to rise up and kill you.”
From the title, you might assume “Okay, I guess this isn’t about the AI risk concerns raised by MIRI, FHI, Elon Musk, etc.” But in the opening paragraph, Buchanan makes clear he is trying to respond to those concerns, by linking here and here.
I often suspect that many people in the “nothing to worry about” camp think they are replying to MIRI & company but are actually replying to Hollywood.
And lo, when Buchanan explains the supposed concern about AI, he doesn’t link to anything by MIRI & company; instead he literally links to IMDb pages for movies and TV shows about AI:
Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy…
The entire rest of the article is about the consciousness fallacy. But of course, everyone at MIRI and FHI, and probably Musk as well, agrees that intelligence doesn’t automatically create consciousness, and that has never been what MIRI & company are worried about.
(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)