Yes, please: When talking about variation in intelligence, use variation in height as a sanity-check on your intuitions.
Steven Pinker replies to a book symposium on Better Angels of Our Nature.
Dennis Pamlin (Global Challenges Foundation) and Stuart Armstrong (FHI) have issued a new 212-page report: 12 Risks that threaten human civilisation. I don’t like the “infinite impact” framing, but the report’s interesting novel contributions include:
- Page 20: a graph of relations between different risks.
- Page 21: a chart of the technical and collaboration difficulty of each risk.
- Page 22: a comparison of risks by how estimable they are, by how much data is available about them, and by how much we understand the chain of events from present actions to the risk events.
- For each of the risks, a causal diagram with different levels of uncertainty for each node.
- Lots more.
The World Economic Forum’s Global Risks 2015 report discusses the superintelligence alignment challenge quite clearly — see box 2.8 on page 40.
Scott Aaronson explains what we know so far about what quantum computing could do for machine learning. It’s “a simple question with a complicated answer.”
Future of Life Institute, “A survey of research questions for robust and beneficial AI.” Because this document surveys strategic/forecasting research in more detail than the earlier “research priorities” document, it cites 7 of my own articles, and several more by others at MIRI.
Ryan Carey points to this passage from Pinker’s reply:

“Bhatt concludes, ‘This book could have been published with those same evolutionary arguments on the eve of the First or Second World War or the Korean War.’ It is clear that this comment is intended as snide, but it is not clear what it means…”
Wow, scathing! As is most of Pinker’s reply.