Recently, the study of potential risks from advanced artificial intelligence has attracted substantial new funding, prompting new job openings, e.g. at Oxford University now and (in the near future) at Cambridge University, Imperial College London, and UC Berkeley.
This is the dawn of a new field. It’s important to fill these roles with strong candidates. The trouble is, it’s hard to find strong candidates at the dawn of a new field, because universities haven’t yet begun to train a steady flow of new experts on the topic. There is no “long-term AI safety” program for graduate students anywhere in the world.
Right now the field is pretty small, and the people I’ve spoken to (including some at Oxford) seem to agree that it will be a challenge to fill these roles with candidates they already know about. Oxford has already re-posted one position because no suitable candidates were found via the original posting.
So if you’ve developed some informal expertise on the topic — e.g. by reading books, papers, and online discussions — but you are not already known to the folks at Oxford, Cambridge, FLI, or MIRI, now would be an especially good time to de-lurk and say “I don’t know whether I’m qualified to help, and I’m not sure there’s a package of salary, benefits, and reasons that would tempt me away from what I’m doing now, but I want to at least let you know that I exist, I care about this issue, and I have at least some relevant skills and knowledge.”
Maybe you’ll turn out not to be a good candidate for any of these roles. Maybe you’ll learn the details and decide you’re not interested. But if you don’t let us know you exist, neither of those things can even begin to happen, and these important roles at the dawn of a new field will be less likely to be filled with strong candidates.
I’m especially passionate about de-lurking of this sort because when I first learned about MIRI, I just assumed I wasn’t qualified to help out, and wouldn’t want to, anyway. But after speaking to some folks at MIRI, it turned out I really could help out, and I’m glad I did. (I was MIRI’s Executive Director for ~3.5 years.)
So if you’ve been reading and thinking about long-term AI safety issues for a while now, and you have some expertise in computer science, AI, analytic/formal philosophy, mathematics, statistics, policy, risk analysis, forecasting, or economics, and you’re not already in contact with the people at the organizations I named above, please step forward and tell us you exist.
To do so, feel free to comment on this post or email me directly (lukeprog@gmail.com), and I’ll put you in touch with the right person.
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)