Cotton-Barratt, Allocating risk mitigation across time.
The new Ian Morris book sounds very Hansonian, which probably means it’ll end up being one of my favorite books of 2015 when I have a chance to read it.
Why do we pay pure mathematicians? A dialogue.
Watch a FiveThirtyEight article get written, keystroke by keystroke. Scott Alexander, will you please record yourself writing one blog post?
Grace, The economy of weirdness.
Kahneman interviews Harari about the future.
On March 14th, there will be wrap parties for Harry Potter and the Methods of Rationality in at least 15 different countries. I’m assuming this is another first for a fanfic.
AI stuff
YC President Sam Altman on superhuman AI: part 1, part 2. I agree with most of what he writes, with a few exceptions: (1) I think AGI probably isn’t the Great Filter, (2) I don’t think AI progress follows a double exponential, and (3) I don’t have much of an opinion on the role of regulation, since it’s not something I’ve tried hard to figure out.
Stuart Russell and Rodney Brooks debated the value alignment problem at Davos 2015. (Watch at 2x speed.)
Pretty good coverage of MIRI’s value learning paper at Nautilus.
Hello sir. I just recently came upon your old Common Sense Atheism blog in a search regarding William Lane Craig. I thoroughly reject Christianity, but I am still trying to learn in the area of philosophy, edging closer to atheism than deism. However, it seems whenever I get into the more serious areas of philosophy, I am completely lost. It seems that everyone is correct and justified in their thinking, even when they are all mutually exclusive. What could you recommend for me to improve my comprehension with philosophy and critical thinking, since all I get out of it now is a headache? Thanks.
I’d recommend Yudkowsky’s “Sequences,” now available as a neat eBook: https://intelligence.org/2015/03/12/rationality-ai-zombies/