Luke Muehlhauser

Replies to people who argue against worrying about long-term AI safety risks today

August 17, 2015 by Luke 8 Comments

More replies will be added here as I remember or discover them. To focus on the “modern” discussion, I’ll somewhat-arbitrarily limit this to replies to comments or articles that were published after the release of Bostrom’s Superintelligence on Sep. 3rd, 2014. Please remind me which ones I’m forgetting.

By me

  • My reply to critics in Edge.org’s “Myth of AI” discussion. (Timelines, malevolence confusion, convergent instrumental goals.)
  • My reply to AI researcher Andrew Ng. (Timelines, malevolence confusion.)
  • My reply to AI researcher Oren Etzioni. (Timelines, convergent instrumental goals.)
  • My reply to economist Alex Tabarrok. (Timelines, glide scenario.)
  • My reply to AI researcher David Buchanan. (Consciousness confusion.)
  • My reply to physicist Lawrence Krauss. (Power requirements.)
  • My reply to AI researcher Jeff Hawkins. (Self-replication, anthropomorphic AI, intelligence explosion, timelines.)
  • My reply to AI researcher Pedro Domingos. (Consciousness confusion? Not sure.)
  • My reply to AI researcher Yann LeCun. (Timelines, malevolence confusion.)

By others

  • Eliezer Yudkowsky replies to Francois Chollet. (Intelligence explosion, nature of intelligence, various.)
  • Matthew Graves replies to Maciej Cegłowski. (Various.)
  • Stuart Russell replies to critics in Edge.org’s “Myth of AI” discussion. (Convergent instrumental goals.)
  • Rob Bensinger replies to computer scientist Ernest Davis. (Intelligence explosion, AGI capability, value learning.)
  • Rob Bensinger replies to roboticist Rodney Brooks and philosopher John Searle. (Narrow AI, timelines, malevolence confusion.)
  • Scott Alexander replies to technologist and novelist Ramez Naam and others. (Mainstream acceptance of AI risks.)
  • Olle Häggström replies to nuclear security specialist Edward Moore Geist. (Plausibility of superhuman AI, goal content integrity.)
  • Olle Häggström replies to science writer Michael Shermer. (Malevolence confusion.)
  • Olle Häggström replies to philosopher John Searle. (Consciousness confusion.)
  • Olle Häggström replies to cognitive scientist Steven Pinker. (Malevolence confusion.)
  • “On the Impossibility of Supersized Machines,” a parody of bad arguments commonly made against the possibility of AGI.

 

Filed Under: Lists

Comments

  1. Daniel Speyer says

    August 17, 2015 at 7:18 pm

    More relevant Scott Alexander about why it makes sense to start working now

    Reply
    • Luke says

      August 17, 2015 at 8:09 pm

      Thanks, though this list is meant for replies that engage a very specific critic in some detail, rather than posts that are “One kind of thing people generally say is X, but here are some reasons to think not-X.” Scott’s “No time like the present for AI safety work” post is more like the latter.

      Reply
  2. Josef Johann says

    August 19, 2015 at 12:47 pm

    Scott Alexander replies to Vox reporter Dylan Matthews (Pascal’s Mugging.)

    Reply
    • Luke says

      August 19, 2015 at 7:56 pm

      Thanks, but I think that’s sufficiently not-specific-to-AI that I’ll leave it off the list. Also, I don’t really agree with that post by Scott. 🙂

      Reply
  3. stuart says

    August 20, 2015 at 7:41 am

    Has anyone responded to this article by Tim Lee on Vox?

    http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking

    Reply
    • Luke says

      August 23, 2015 at 5:00 pm

      Not that I know of.

      Reply
  4. Tom Ellis says

    February 29, 2016 at 10:46 pm

    Would be more helpful to have these indexed primarily by argument rather than by who wrote them.

    Reply
  5. Kaj Sotala says

    July 15, 2017 at 2:28 am

    I replied to Luciano Floridi’s (Professor of Philosophy and Ethics of Information) critique of AI risk on pages 9-11 of https://c.ymcdn.com/sites/www.apaonline.org/resource/collection/EADE8D52-8D02-4136-9A2A-729368501E43/ComputersV15n2.pdf (APA Newsletter on Philosophy and Computers, vol. 15, no. 2).

    Reply

