Recently, the study of potential risks from advanced artificial intelligence has attracted substantial new funding, prompting new job openings, e.g. at Oxford University and (in the near future) at Cambridge University, Imperial College London, and UC Berkeley.
This is the dawn of a new field. It’s important to fill these roles with strong candidates. The trouble is, it’s hard to find strong candidates at the dawn of a new field, because universities haven’t yet begun to train a steady flow of new experts on the topic. There is no “long-term AI safety” program for graduate students anywhere in the world.
Right now the field is pretty small, and the people I’ve spoken to (including, for example, people at Oxford) seem to agree that it will be a challenge to fill these roles with candidates they already know about. Oxford has already re-posted one position because no suitable candidates were found via the original posting.
So if you’ve developed some informal expertise on the topic — e.g. by reading books, papers, and online discussions — but you are not already known to the folks at Oxford, Cambridge, FLI, or MIRI, now would be an especially good time to de-lurk and say “I don’t know whether I’m qualified to help, and I’m not sure there’s a package of salary, benefits, and reasons that would tempt me away from what I’m doing now, but I want to at least let you know that I exist, I care about this issue, and I have at least some relevant skills and knowledge.”
Maybe you’ll turn out not to be a good candidate for any of these roles. Maybe you’ll learn the details and decide you’re not interested. But if you don’t let us know you exist, neither of those things can even begin to happen, and these important roles at the dawn of a new field will be less likely to be filled with strong candidates.
I’m especially passionate about de-lurking of this sort because when I first learned about MIRI, I just assumed I wasn’t qualified to help out, and that I wouldn’t want to anyway. But after speaking to some folks at MIRI, it turned out I really could help out, and I’m glad I did. (I was MIRI’s Executive Director for ~3.5 years.)
So if you’ve been reading and thinking about long-term AI safety issues for a while now, and you have some expertise in computer science, AI, analytic/formal philosophy, mathematics, statistics, policy, risk analysis, forecasting, or economics, and you’re not already in contact with the people at the organizations I named above, please step forward and tell us you exist.
UPDATE Jan. 2, 2016: At this point in the original post, I recommended that people de-lurk by emailing me or by commenting below. However, I was contacted by far more people than I expected (100+), so I had to reply to everyone (on Dec. 19th) with a form email instead. In that email I thanked everyone for contacting me as I had requested, apologized for not being able to respond individually, and made the following request:
If you think you might be interested in a job related to long-term AI safety, either now or in the next couple of years, please fill out this 3-question Google form, which is much easier than filling out any of the posted job applications. This will make it much easier for the groups that are hiring to skim through your information and decide which people they want to contact and learn more about.
Everyone who has contacted or contacts me after Dec. 19th will instead receive a link to this section of this blog post. If I’ve linked you here, please consider filling out the 3-question Google form above.
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)