Lots of people are asking for more details about my decision to take a job at GiveWell, so I figured I should publish answers to the most common questions I’ve gotten, though I’m happy to also talk about it in person or by email.
Why did you take a job at GiveWell?
Apparently some people think I must have changed my mind about what I think Earth’s most urgent priorities are. So let me be clear: Nothing has changed about what I think those priorities are.
I still buy the basic argument in “Friendly AI research as effective altruism.”[1]
I still think that growing a field of technical AI alignment research, one which takes the future seriously, is plausibly[2] the most urgent task for those seeking a desirable long-term future for Earth-originating life.[3]
And I still think that MIRI has an incredibly important role to play in growing that field of technical AI alignment research.
I decided to take a research position at GiveWell mostly for personal reasons.
I have always preferred research over management. As many of you who know me in person already know, I’ve been looking for my replacement at MIRI since the day I took the Executive Director role, so that I could return to research. When doing research I very easily get into a flow state; I basically never get into a flow state doing management. I’m pretty proud of what the MIRI team accomplished during my tenure, and I could see myself being an executive somewhere again some day, but I want to do something else for a while.
Why not switch to a research role at MIRI? First, I continue to think MIRI should specialize in computer science research that I don’t have the training to do myself.[4] Second, I look forward to upgrading my research skills while working in domains where I don’t already have lots of pre-existing bias.
Did GiveWell recruit you away from MIRI?
No, I reached out to them.
I have respected GiveWell’s work for a long time, and I think they are well-positioned to have a major positive impact on the future, including (eventually) on causes such as AI alignment.[5] I also knew they would likely be a good fit for my passions and my skillset, and I’ve had lots of positive interactions with GiveWell co-founder Holden Karnofsky since GiveWell relocated to San Francisco a couple years ago.
What will you be working on at GiveWell?
I’ll be joining the Open Philanthropy Project, which aims to investigate a very broad range of potentially high-ROI philanthropic causes. I’ll be focusing on cause areas in the social sciences.
There are no current plans for me to analyze global catastrophic risks for GiveWell, but we haven’t ruled that out, either.
Why transition to GiveWell so quickly?
It seems quicker from the outside than it did from the inside; the Board and I wanted to get all the details of the transition worked out before we announced anything publicly. Also, I’ll remain a close MIRI advisor, and I’ll still be taking meetings with Nate after he takes the reins on June 1st, so it’s not as though we’ve had to transfer all my knowledge about MIRI before I leave.
As an EA, should I be donating to MIRI, or GiveWell, or somewhere else?
My personal job switch shouldn’t have much bearing on this question, so keep doing whatever you’ve been doing, assuming you’ve already put serious thought into where you should donate given your current beliefs and values. I’m going to keep donating where I’ve been donating, too.
Will you still be writing about AI?
Yes! I’ll be writing about AI and many other topics right here on my personal blog. See e.g. “A reply to Wait But Why on machine superintelligence,” “A beginner’s guide to modern classical music,” and “Effective altruism as opportunity or obligation?”
Footnotes:
1. But this doesn’t mean all EA-minded folk should be focusing their attention on the AI alignment challenge. There are lots of important things to do in the world, even if your best guess for “key lever on the long-term future” is “whether the superintelligence alignment challenge gets solved in time.” We also need to be looking for “crucial considerations” that we haven’t discovered yet.
2. In this case, by “plausibly” I mean something like a plurality of my probability mass distributed over large-scale tasks that are reasonable candidates for “most urgent task for those seeking a desirable long-term future of Earth-originating life.”
3. But I also think this is non-obvious, and a question on which there is legitimate disagreement among AI x-risk experts. I’ll try to write more about my (tentative) views later, but for now I’ll just note that most of the benefits (to our FAI chances) from growing a field of technical AI alignment research do not come from the technical AI alignment research itself.
4. I think MIRI should continue to make an exception for high-value non-technical work that is low-overhead and mostly funded via earmarked funding. The current example of this is MIRI’s contributions to AI Impacts.
5. But no, I don’t know much about their AI-related plans beyond what they’ve said publicly.
People who have read your work know that you’re a good and clear thinker, so I hope you’re thinking big about what you can achieve in research!
Start thinking about how to manage political decisions voters can’t understand. Good luck!
See Brian Tomasik’s case for improving social science as a long-term future EA cause. I find it pretty persuasive (knowing social science seems to have improved my thinking a lot), and I would like to see an EA organization that produced peer-reviewed blog posts by social scientists, made interesting with clickbait titles, Wait But Why-style cartoons, etc. I think if you were clever about it, you could become the Huffington Post of science blogging and attract a bunch of celebrity scientists to write accessible blog-length articles on social science topics.