Replies to people who argue against worrying about long-term AI safety risks today

More replies will be added here as I remember or discover them. To focus on the “modern” discussion, I’ll somewhat-arbitrarily limit this to replies to comments or articles that were published after the release of Bostrom’s Superintelligence on Sep. 3rd, 2014. Please remind me which ones I’m forgetting.

By me

  • My reply to critics in Edge.org’s “Myth of AI” discussion. (Timelines, malevolence confusion, convergent instrumental goals.)
  • My reply to AI researcher Andrew Ng. (Timelines, malevolence confusion.)
  • My reply to AI researcher Oren Etzioni. (Timelines, convergent instrumental goals.)
  • My reply to economist Alex Tabarrok. (Timelines, glide scenario.)
  • My reply to AI researcher David Buchanan. (Consciousness confusion.)
  • My reply to physicist Lawrence Krauss. (Power requirements.)
  • My reply to AI researcher Jeff Hawkins. (Self-replication, anthropomorphic AI, intelligence explosion, timelines.)
  • My reply to AI researcher Pedro Domingos. (Consciousness confusion? Not sure.)
  • My reply to AI researcher Yann LeCun. (Timelines, malevolence confusion.)

By others

  • Eliezer Yudkowsky replies to François Chollet. (Intelligence explosion, nature of intelligence, various.)
  • Matthew Graves replies to Maciej Cegłowski. (Various.)
  • Stuart Russell replies to critics in Edge.org’s “Myth of AI” discussion. (Convergent instrumental goals.)
  • Rob Bensinger replies to computer scientist Ernest Davis. (Intelligence explosion, AGI capability, value learning.)
  • Rob Bensinger replies to roboticist Rodney Brooks and philosopher John Searle. (Narrow AI, timelines, malevolence confusion.)
  • Scott Alexander replies to technologist and novelist Ramez Naam and others. (Mainstream acceptance of AI risks.)
  • Olle Häggström replies to nuclear security specialist Edward Moore Geist. (Plausibility of superhuman AI, goal content integrity.)
  • Olle Häggström replies to science writer Michael Shermer. (Malevolence confusion.)
  • Olle Häggström replies to philosopher John Searle. (Consciousness confusion.)
  • Olle Häggström replies to cognitive scientist Steven Pinker. (Malevolence confusion.)


Comments

    • Luke says

      Thanks, though this list is meant for replies that engage a very specific critic in some detail, rather than posts that are “One kind of thing people generally say is X, but here are some reasons to think not-X.” Scott’s “No time like the present for AI safety work” post is more like the latter.

    • Luke says

      Thanks, but I think that’s sufficiently not-specific-to-AI that I’ll leave it off the list. Also, I don’t really agree with that post by Scott. :)
