In movies, supposedly “smart” characters (e.g. scientists) are almost universally written as stupid, and I assume the same is true of novels. Thankfully, Yudkowsky of HPMoR has now explained How to Write Intelligent Characters. I’m sure aspiring screenwriters will rush to read it immediately after they finish Save the Cat.
What are philanthropic foundations for? Rob Reich, Tyler Cowen, Paul Brest and others debate.
The Scientist lists the “top 10” science retractions of 2014.
FLI has published an open letter called “Research Priorities for Robust and Beneficial Artificial Intelligence,” which says that AI progress is now quite steady or even rapid, that the societal impacts will be huge, and that therefore we need more research on how to reap AI’s benefits while avoiding its pitfalls.
Signatories include top academic AI scientists (Stuart Russell, Geoff Hinton, Tom Mitchell, Eric Horvitz, Tom Dietterich, Bart Selman, Moshe Vardi, etc.), top industry AI scientists (Peter Norvig, Yann LeCun, DeepMind’s founders, Vicarious’s founders), technology leaders (Elon Musk, Jaan Tallinn), and many others you’ve probably heard of (Stephen Hawking, Martin Rees, Joshua Greene, Sam Harris, etc.).
The attached research priorities document includes many example lines of research, including MIRI’s research agenda. (Naturally, the signatories differ on which lines of research are most urgently needed, and most of them probably know very little about MIRI’s agenda, so they don’t necessarily mean to endorse it in particular.)
Paul Christiano (UC Berkeley) summarizes his own ideas about long-term AI safety work that can be done now.
When people ask me what general-AI benchmark I think should replace the Turing test, I usually start by mentioning video games. This two-page paper explains why they make such a handy benchmark.