Toby Walsh has published a short new paper on the likelihood of intelligence explosion. Unfortunately, it doesn’t engage with three of the most detailed and thoughtful previous analyses on the topic.
If you want to write about the likelihood and nature of intelligence explosion, I consider the following sources required reading, in descending order of value per page (Walsh’s paper misses 2, 3, and 5):
- Bostrom (2014), chapter 4
- Yudkowsky (2013)
- AI Impacts' posts on intelligence explosion: one, two (both 2015)
- Chalmers (2010)
- Hanson & Yudkowsky (2013)
There are many other sources worth reading, e.g. Hutter (2012), but they don’t make my cut as “required reading.”
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)