Dietterich and Horvitz on AI risk

Tom Dietterich and Eric Horvitz have a new opinion piece in Communications of the ACM: Rise of Concerns about AI. Below, I comment on a few passages from the article.


Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”

I suppose you could “charge” an advanced AI with the task of undergoing an intelligence explosion, but that seems like an incredibly reckless thing for someone to do. More often, the concern is that an intelligence explosion would arise as a byproduct of the convergent instrumental goal of self-improvement. Nearly all possible goals are more likely to be achieved if the AI can first improve its capabilities, whether the goal is calculating digits of Pi or optimizing a manufacturing process. This is the argument given in the book Dietterich and Horvitz cite for these concerns: Nick Bostrom’s Superintelligence.


[Intelligence explosion] runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning…

I follow this literature pretty closely, and I haven’t heard of any such result. No citation is provided, so I don’t know what they’re talking about. I doubt this is the kind of thing you could show using computational complexity theory, given how under-specified the concept of an intelligence explosion is.

Fortunately, Dietterich and Horvitz do advocate several lines of research to make AI systems safer and more secure, and they also say:

we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)
