More replies will be added here as I remember or discover them. To focus on the “modern” discussion, I’ll somewhat-arbitrarily limit this to replies to comments or articles that were published after the release of Bostrom’s Superintelligence on Sep. 3rd, 2014. Please remind me which ones I’m forgetting.
By me
- My reply to critics in Edge.org’s “Myth of AI” discussion. (Timelines, malevolence confusion, convergent instrumental goals.)
- My reply to AI researcher Andrew Ng. (Timelines, malevolence confusion.)
- My reply to AI researcher Oren Etzioni. (Timelines, convergent instrumental goals.)
- My reply to economist Alex Tabarrok. (Timelines, glide scenario.)
- My reply to AI researcher David Buchanan. (Consciousness confusion.)
- My reply to physicist Lawrence Krauss. (Power requirements.)
- My reply to AI researcher Jeff Hawkins. (Self-replication, anthropomorphic AI, intelligence explosion, timelines.)
- My reply to AI researcher Pedro Domingos. (Consciousness confusion? Not sure.)
- My reply to AI researcher Yann LeCun. (Timelines, malevolence confusion.)
By others
- Eliezer Yudkowsky replies to Francois Chollet. (Intelligence explosion, nature of intelligence, various.)
- Matthew Graves replies to Maciej Cegłowski. (Various.)
- Stuart Russell replies to critics in Edge.org’s “Myth of AI” discussion. (Convergent instrumental goals.)
- Rob Bensinger replies to computer scientist Ernest Davis. (Intelligence explosion, AGI capability, value learning.)
- Rob Bensinger replies to roboticist Rodney Brooks and philosopher John Searle. (Narrow AI, timelines, malevolence confusion.)
- Scott Alexander replies to technologist and novelist Ramez Naam and others. (Mainstream acceptance of AI risks.)
- Olle Häggström replies to nuclear security specialist Edward Moore Geist. (Plausibility of superhuman AI, goal content integrity.)
- Olle Häggström replies to science writer Michael Shermer. (Malevolence confusion.)
- Olle Häggström replies to philosopher John Searle. (Consciousness confusion.)
- Olle Häggström replies to cognitive scientist Steven Pinker. (Malevolence confusion.)
- “On the Impossibility of Supersized Machines,” a parody of bad arguments commonly made against the possibility of AGI.
Another relevant Scott Alexander post, about why it makes sense to start working on AI safety now
Thanks, though this list is meant for replies that engage a very specific critic in some detail, rather than posts that are “One kind of thing people generally say is X, but here are some reasons to think not-X.” Scott’s “No time like the present for AI safety work” post is more like the latter.
Scott Alexander replies to Vox reporter Dylan Matthews. (Pascal's Mugging.)
Thanks, but I think that’s sufficiently not-specific-to-AI that I’ll leave it off the list. Also, I don’t really agree with that post by Scott. 🙂
Has anyone responded to this article by Tim Lee on Vox?
http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking
Not that I know of.
Would be more helpful to have these indexed primarily by argument rather than by who wrote them.
I replied to the critique of AI risk by Luciano Floridi (Professor of Philosophy and Ethics of Information) on pages 9-11 of https://c.ymcdn.com/sites/www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/ComputersV15n2.pdf (APA Newsletter on Philosophy and Computers, vol. 15, no. 2).