Recently, the study of potential risks from advanced artificial intelligence has attracted substantial new funding, prompting new job openings at e.g. Oxford University and (in the near future) at Cambridge University, Imperial College London, and UC Berkeley.
This is the dawn of a new field. It’s important to fill these roles with strong candidates. The trouble is, it’s hard to find strong candidates at the dawn of a new field, because universities haven’t yet begun to train a steady flow of new experts on the topic. There is no “long-term AI safety” program for graduate students anywhere in the world.
Right now the field is pretty small, and the people I’ve spoken to (including e.g. at Oxford) seem to agree that it will be a challenge to fill these roles with candidates they already know about. Oxford has already re-posted one position, because no suitable candidates were found via the original posting.
So if you’ve developed some informal expertise on the topic — e.g. by reading books, papers, and online discussions — but you are not already known to the folks at Oxford, Cambridge, FLI, or MIRI, now would be an especially good time to de-lurk and say “I don’t know whether I’m qualified to help, and I’m not sure there’s a package of salary, benefits, and reasons that would tempt me away from what I’m doing now, but I want to at least let you know that I exist, I care about this issue, and I have at least some relevant skills and knowledge.”
Maybe you’ll turn out not to be a good candidate for any of these roles. Maybe you’ll learn the details and decide you’re not interested. But if you don’t let us know you exist, neither of those things can even begin to happen, and these important roles at the dawn of a new field will be less likely to be filled with strong candidates.
I’m especially passionate about de-lurking of this sort because when I first learned about MIRI, I just assumed I wasn’t qualified to help out, and wouldn’t want to, anyway. But after speaking to some folks at MIRI, it turned out I really could help out, and I’m glad I did. (I was MIRI’s Executive Director for ~3.5 years.)
So if you’ve been reading and thinking about long-term AI safety issues for a while now, and you have some expertise in computer science, AI, analytic/formal philosophy, mathematics, statistics, policy, risk analysis, forecasting, or economics, and you’re not already in contact with the people at the organizations I named above, please step forward and tell us you exist.
UPDATE Jan. 2, 2016: At this point in the original post, I recommended that people de-lurk by emailing me or by commenting below. However, I was contacted by far more people than I expected (100+), so I had to reply to everyone (on Dec. 19th) with a form email instead. In that email I thanked everyone for contacting me as I had requested, apologized for not being able to respond individually, and made the following request:
If you think you might be interested in a job related to long-term AI safety either now or in the next couple years, please fill out this 3-question Google form, which is a lot easier than filling out any of the posted job applications. This will make it much easier for the groups that are hiring to skim through your information and decide which people they want to contact and learn more about.
Everyone who contacted/contacts me after Dec. 19th will instead receive a link to this section of this blog post. If I’ve linked you here, please consider filling out the 3-question Google form above.
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)
Dear Luke, great post – I would like to officially announce my de-lurking and declare “Hey, I exist!” as an AI-safety-concerned citizen of the world 🙂
I have a BA in Nanotechnology from the interdisciplinary nanoscience centre at Aarhus University (Denmark) and an MA in Chemistry from the PUMPkin centre for membrane pumps in cells and diseases, also at Aarhus University. Besides that, I took a minor in Spanish Language and Literature (just because I love languages), and I make rap music in English/Danish whenever time allows. Communication is a main skill and passion of mine.
My policy-affecting achievements include being responsible for collecting over 10,000 signatures in support of the new airport for my hometown of Aarhus (the old one is miserable), a cause I took up as an engaged citizen two years ago and which the city council has now voted in favour of, backed by over 170 local companies and 15,000 people on Facebook, even though no one believed it could happen just a few years ago. So I have good experience working towards long-term goals and maintaining focus on a greater cause.
Today I was attending the only conference related to the topic of AI that I have been able to find at Aarhus University (http://conferences.au.dk/posthuman2015/), where we discussed the aesthetics, ethics, and bio-politics of the ‘Posthuman’ – and after coming home I was reading through Nick Bostrom’s AMA on Reddit and suddenly wound up on the FHI Twitter, where I saw your post.
I am currently finishing a detailed reading of ‘Superintelligence’ by Bostrom and have also contacted Oxford University Press to propose a translation of the book into Danish. I realized I had to read this book thoroughly after my first real eye-opener on the subject, reading Tim Urban’s article on the prospects of strong AI here: waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html. Since then, the last couple of months have really been all about reading up on AI, familiarizing myself with the existence of institutes like FHI, FLI, CSER and MIRI, and planning how to reach out to some of these institutes and offer my help. My background in Nanotechnology has made it painfully clear to me what kind of consequences we could be facing if we are not careful enough with a strong AI. It can, however, be hard to study this topic without having peers (or even family or friends) who are informed on the details to the same level as oneself. I am therefore also planning to contact key people at my local university to encourage them to focus on this field as a subject of research. So if you find my profile useful from any perspective, I would be thrilled to get to work on something along those lines.
To sum up: I am very interested in this topic, I most definitely care about the situation, I think I could contribute in one way or another – and I would really like to help.
Best regards,
Erik B. Jacobsen,
M.Sc. and cultural entrepreneur
Aarhus, Denmark
Note to people reading this post: only a few hours in, it’s already the case that a dozen people have “de-lurked” to me via email.
Yay!
I’m a math major about to (hopefully) graduate in one week. I tried to take as many probability-related courses as I could and I’m very interested in AI risk. Unfortunately, I don’t know much about all the known failure modes, but I’m hoping to work on that now that I’ll have a bit more free time. I want to help however I can, with the standard caveats about travel, money, inexperience, and niche-filling of course. Just point me in the right direction or let me know what math roles need filling and I’ll start focusing on that.
I am an interdisciplinary researcher whose main focus is deep learning and neuroscience.
The main purpose of my life is to prevent the creation of unsafe AGI.
The other main purpose is to ensure indefinite life extension for everyone who wants it, but our chances of dying from unsafe AGI are much higher than our chances of dying from other, more natural causes.
Certainly, I’ve read “Superintelligence” and a hundred other safety-related articles.
What mainly sets me apart from the people around me:
(1) I read a really great number of articles. I have made more than one thousand summaries of the best modern articles in neuroscience and artificial intelligence. As my colleagues say, those articles are much easier to comprehend with my comments and visual materials. I use these summaries to get a quick overview of the whole field, to combine ideas and “see the forest and not only the trees”.
(2) I have state-of-the-art knowledge in both neuroscience and artificial intelligence.
I am very confident that human-level AGI will be created very soon.
In discussions with more sceptical people, I usually find that they are just not aware of all the relevant factors from neuroscience, hardware development, and recent AI progress.
My prediction for human-level AGI creation is the end of 2017, sigma = 1 year.
I put 75% probability on this prediction being correct.
If it isn’t, then the remaining 25% I put on something like [mean = 2020, sigma = 2 years].
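For concreteness, a minimal Python sketch of this two-component mixture forecast might look like the following (purely illustrative; encoding “end of 2017” as the year value 2018.0 and the function name are assumptions made just for this example):

```python
import numpy as np

# Illustrative sketch only: sampling the two-component timeline mixture
# stated above. Treating "end of 2017" as the year value 2018.0 is an
# assumption made for this example.
rng = np.random.default_rng(0)

def sample_agi_year(n=100_000):
    # 75% weight: Gaussian centred at end of 2017, sigma = 1 year.
    # 25% weight: Gaussian centred at 2020, sigma = 2 years.
    use_main = rng.random(n) < 0.75
    main = rng.normal(loc=2018.0, scale=1.0, size=n)
    fallback = rng.normal(loc=2020.0, scale=2.0, size=n)
    return np.where(use_main, main, fallback)

samples = sample_agi_year()
print("median predicted year:", round(float(np.median(samples)), 2))
print("P(before 2020):", round(float((samples < 2020.0).mean()), 3))
```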
I see my main contribution to the AGI safety community as formulating my arguments for why AGI will be created really soon. My other contribution might be illustrating safety issues in some GitHub deep learning code.
Also, I dream about creating a website that illustrates the most amazing modern progress in deep learning. For example, there is an amazing deep learning chatbot from “A Neural Conversational Model” by DeepMind, but they haven’t published their code. It’s not that hard to write that code myself based on their article and put it on a website so everyone can chat there. Right now this chatbot might still be unconvincing, but in the near future it will be much better. Some of my thoughts are also formulated here: https://docs.google.com/document/d/1IDFAFTyeiMHzsygvEf8s00JaflXqIjCzmbLHu6UqNIs/edit and here https://www.facebook.com/sergej.shegurin and here http://stop-skynet.com (Sergej Shegurin is my web pseudonym).
As for my professional research, this recent article by me and my colleagues, http://arxiv.org/abs/1511.07076, is about the implementation of a potentially very efficient deep learning technique using memristors. If it succeeds, it’s like having Tianhe-2 (50 petaflops) for something like $2k.
Currently, I spend one third of my time reading the best deep learning articles from arXiv.
One third of my time I spend on the AGI safety issue.
And one third of my time I spend coding deep learning AI that tries to really understand text.
(Yes, it’s somewhat dangerous, but I think negligibly so compared to DeepMind, and I need to code in order to really understand the field of deep learning.)
Also, I would be very glad to connect with people with similar views… or with dissimilar views 🙂
It’s boring and sad to be alone here in Moscow, where almost nobody is interested in either AGI or AGI safety (but I’m working on that 🙂 ). My contacts are: vladimir_shakirov@phystech.edu , schw90@mail.ru , https://www.facebook.com/sergej.shegurin
Sure, why not.
I am about to graduate from my undergrad with a double major in computer science and statistics. I don’t know how applicable that is, and I’m based in Australia working as a software engineer for Amazon, which I’m unlikely to leave. But I’m interested in AI, have been following LessWrong and Eliezer (and Slate Star Codex) for years, and have qualifications in more than one of the fields you listed, so consider this a de-lurk.
I believe AI research should be encouraged. However, certain restrictions regarding ‘Ethics & Morality’ issues have to be carefully considered. I strongly oppose research on ‘Intelligent Weapons using AI’. I am an ardent advocate of AI research for the benefit of mankind. I have worked on certain applications of AI during my professional tenure.
I am a PhD economist specializing in public policy and cost/benefit analysis. I wrote the cost/benefit analysis for FDA’s trans fat ban, gluten-free labeling rule, and a rule to make it harder for terrorists to poison the food supply. My undergraduate degree is in computers, but my practical coding ability is mostly limited to statistics packages, database management, and Monte Carlo simulations. I have been following the AI safety debate for about six years now, mostly through LessWrong, and I have understood for years that the future of the human race will depend on what happens with AI.
It is unlikely that anyone would offer me a position better than my current GS-14 job, but I am open to collaborations and doing pro bono work to help with this issue.
Working for any one of those organizations would be a dream job of mine; however, I have no formal training.
I have an obsession with art, mathematics and linguistics. It is the inspiration for my artwork. I have actually considered myself a transhumanist for years now.
I’m shortly going to be a college graduate in Environmental Engineering, and I’ve been interested in the area for a while, read almost all of MIRI’s papers, and have a moderate ability to self-teach math. It feels like there’s already a glut of… how to say this… 99th percentile folks in the LW diaspora, which has kind of put me off from seriously considering that I can contribute, due to being pretty average relative to the group of people interested in this, and not being a programmer or mathematician. I may delurk in the future, after I have become stronger at the relevant skills, but at the moment, I don’t really think there’s anything I’d be able to help with in the field.
I am a software developer with interest (but no formal education) in AI, and the definition of “AI safety lurker” I read above fits me 100%.
I have been reading material and watching talks online for a while as I find the topic both deeply fascinating and extremely important to be investigated if we care about ensuring humankind will have a future at all.
I am finishing ‘Superintelligence’ by Nick Bostrom, and I have attended several London Futurists events (e.g. http://anticipating2025.com) where AI topics were discussed. I contributed to MIRI’s 2015 Summer Fundraiser.
While delurking isn’t a problem for me – I’m pretty open about this stuff – I’m unfortunately not really the right kind of computer scientist. My BSc took me into games, and I spend more of my time fooling people into thinking systems are clever than in actually making them clever. Nearest I’ve ever gotten is a Kohonen network I built to perform fast quantizing in 4d colour spaces (collapsing 32 bit alpha colours to 8 bit indexed for PS2 games).
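For anyone unfamiliar with the technique, a minimal sketch of that kind of Kohonen-network colour quantization might look like the following (a toy illustration under my own assumptions, not the original PS2 code): a 1-D self-organizing map of 256 nodes is trained on sampled RGBA pixels, and each pixel is then mapped to the index of its nearest node.

```python
import numpy as np

# Minimal sketch of Kohonen-style (self-organizing map) colour quantization,
# reducing 32-bit RGBA colours to a 256-entry indexed palette. Names and
# hyperparameters are assumptions made for this illustration.
rng = np.random.default_rng(0)

def train_palette(pixels, n_colors=256, iters=20_000, lr0=0.5, radius0=32.0):
    """pixels: (N, 4) array of RGBA values in [0, 255]."""
    # Initialise the 1-D map with randomly chosen training pixels.
    palette = pixels[rng.integers(len(pixels), size=n_colors)].astype(float)
    positions = np.arange(n_colors)
    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        # Best-matching unit: nearest palette entry in colour space.
        bmu = np.argmin(((palette - x) ** 2).sum(axis=1))
        # Decay the learning rate and neighbourhood radius over time.
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        radius = max(radius0 * (1.0 - frac), 1.0)
        # Pull the BMU and its neighbours on the map towards the sample.
        influence = np.exp(-((positions - bmu) ** 2) / (2 * radius ** 2))
        palette += lr * influence[:, None] * (x - palette)
    return palette

def quantize(pixels, palette):
    """Return, for each pixel, the index of its nearest palette entry."""
    d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1).astype(np.uint8)

# Example usage: indices = quantize(pixels, train_palette(pixels))
```

The decaying neighbourhood influence is what distinguishes this from plain k-means-style quantization: palette entries near the winner on the 1-D map get dragged along with it, which tends to order the palette smoothly.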
I recently published a book with an AI safety orientation:
http://www.amazon.com/Decision-Basis-Evaluation-Machine-Intelligence/dp/0996697101
This book reflects my engineering background and focus – I like to build things and see them work.
I have created a website with additional information about the book here:
http://stephenphershey.com/dbe-book/
The additional information includes questions and criticism I have received so far. I plan to update the website as new questions and criticism are received.
I would be happy to send a free copy of the book to anyone planning to do R&D in AI safety (especially work in decision theory). I do have a limited budget, so I may not be able to honor all requests. In any case, feel free to contact me. My email address can be obtained from:
http://stephenphershey.com/contact/
Steve Hershey
I feel that I should provide links to the Facebook AI safety study group (https://www.facebook.com/groups/1549931781889055/) and the AI safety remote collaboration spreadsheet (https://docs.google.com/spreadsheets/d/1YVfK4lqYkluUK5_jzUlxooyWEZ_SavR_5EHTk66ynbQ/edit#gid=0). These can be used to find help with studying and research. It’s worth saying that, beyond the experts who have come out of the woodwork today, there has existed and remains a small community of ‘kids who want to be AI safety researchers when they grow up’. For my part, I’ve never attended college, and I’m almost certainly incapable of making contributions at this point, but I’m studying the relevant math under the tutelage of a former math major and it’s my current primary long-term goal to contribute to this research agenda.
I looked at some of this AI safety “research” and none of it is useful. I think I’ll just keep working on exactly the kind of AI development that you consider dangerous, along with my friends. I really couldn’t care less about neo Torah scholars like Eliezer Yudkowsky.
I’m finishing my MSc at MILA (University of Montreal), and starting a PhD after.
I haven’t exactly been lurking, but taking every opportunity to discuss this issue with colleagues and organizing a group to discuss ethical considerations surrounding AI.
I’m seriously considering changing course to work directly on AI safety at some point soon, since I think it’s the most important thing in the world.
I don’t think technical solutions will solve the problem, however.
A few of my ideas for AI safety research:
https://medium.com/@capybaralet/a-few-research-ideas-for-ai-safety-9dce4452631e#.ixsv7lgr6
I’m applying to intern at FHI, DeepMind, GiveWell/OPP, and Google Brain for this summer.
> …please step forward and tell us you exist.
Well, it’s hard to say no to a request like that. 😉
In terms of formal training, I have a bachelor’s degree in mathematics with a bit of graduate-level knowledge (maybe about the equivalent of a probability/statistics master’s degree student going into their second year) and I’ll be getting my master’s degree in computer science later this month. I’ve done some work in more “traditional” AI – trying to adapt AI planning techniques for symbolic regression (which didn’t work that well), and genetic programming to evolve herd behavior (which worked alright, but wasn’t particularly groundbreaking). I’m currently a PhD student working on causal reasoning and inference.
Independently, I’ve taught myself a moderate amount of economics, a little bit of programming language theory and formal methods, and kind of a weird smattering of philosophy (philosophy of mathematics, philosophy of mind, normative ethics and a few other things), but I’m certainly not an expert in these areas. It’s hard for me to gauge how well I really understand these topics, but I have had interesting conversations with people who *are* experts and I seem to keep up alright without them having to dumb down the conversation for me.
Perhaps unlike most people here, my instincts say that long-term AI safety probably is *not* the problem MIRI thinks it is. I lean more towards Robin Hanson’s view that the chances of a single “hand-coded” AI suddenly becoming overwhelmingly powerful are low – I’d say in the neighborhood of 1%. I think the future intelligence explosion is more likely to be emulated human intelligence. On the other hand, a 1% chance of an existential risk is more than high enough for me to pay close attention.
I guess this is kind of a weird “de-lurking”, in that I want to say I’ve been paying attention and really hope we can get some smart people working on this problem, although I still think it probably isn’t a problem. Also, I like what I’m working on now, so it would be pretty hard to tempt me away from it.
So, after that thoroughly mixed message, I’ll just conclude by saying that if you want to talk to me about AI safety, I’m very interested in what you have to say, although I might not believe it as strongly as you do.
Could you explain what you want economists for, please? MIRI’s website doesn’t mention economics in the careers section (as far as I have seen). I guess you’re looking for specialists in game theory, but I’d like more info to guide my further study choices.
See e.g. here and here.
Hi,
I’m quite interested in this area. I saw the roles recently advertised at FHI and spent a long time deliberating over whether I should apply. I decided not to.
Nevertheless, let me de-lurk. I have an MA in International and Development Economics, as well as Bachelor’s degrees in business, economics (first class honours) and mathematics. I’m fairly early into my career, but have built up some good momentum, and I would be concerned about losing it if I took a detour into a new field. I’ve worked in governments in three different countries, in roles that required economic analysis and policy advice. I am now employed as a short-term consultant, doing research & analysis for an international development organisation.
I have a good amount of tech savviness, but I don’t think I have developed strong skills compared to, say, a seasoned programmer or CS student. I am very interested in AI issues and new technology in this space. Once upon a time, I used to wish that I’d gone down a pure CS path so I could work/research in the field of AI.
One of the minors in my business degree was information technology, which allowed me to get a few formal courses under my belt. In the past, I have done a bunch of coding in R, and I have also used MATLAB, Python, Java, SQL, and VB.NET. I enjoy this stuff a lot – but I have yet to find a compelling enough case to pull me out of the economics/public policy career track that I am on.
My contacts and further details are here: http://www.linkedin.com/in/obeyesekere
This seems interesting.
I’m an industrial engineer who has worked with Intel Corp. manufacturing semiconductors. I ran my own computer company for 18 years, wrote bespoke programs for many areas of industry & commerce, and recently conceptualised and implemented an Engineering Technology degree programme for a university.
I’ve worked in several countries and with a variety of cultures and religions.
I think I can make a positive contribution to this programme based on my experience.
Well, since you asked for people to post even if we’re unlikely to be able or willing to help…
I have a BS in Comp Sci with a minor in mathematics, not too long out of college and currently employed at Microsoft.
My reading on AI safety so far consists primarily of online articles, largely Eliezer’s stuff, but Bostrom’s Superintelligence is coming up on my reading list one of these days.
My interest in the subject is high and I have a decent technical skillset and learn quickly.
I have worked in AI and Human-Computer Interaction for the last four decades. Most recently, I worked in “Cognitive Computing” at IBM Watson Research. I started and then ran the NYNEX AI lab from 1986 to 1998. Currently, I am writing a blog series about possible consequences of “The Singularity” called “Turing’s Nightmares.” https://petersironwood.wordpress.com
BS in IT. Expertise is in understanding human problems and solving them with computer-based tools. Er, I’m just a programmer.
Have read quite a lot in the areas of AI, philosophy, neuroscience, psychology, meta-ethics, math, physics, stats. I was thinking about AI risk well before Bostrom and Superintelligence, but all of this was mostly from a curious bystander’s viewpoint. I am interested in possibly getting involved, but completely unclear on how I could help.
Areas I want to learn more about and/or work in:
– Group decision making, especially planning and coordination problems
– Understanding failure modes of systems, especially common patterns
– Improving tools for mental investigation and introspection, for individuals and small groups
– Improving tools for dealing with and understanding existential risks
I currently work full time as a software developer and part time as an undergrad math student. I’ve read Superintelligence and follow MIRI’s work. While I’m unlikely to be qualified for AI research at present, I aim to become qualified over time.
I’ve been lurking since the ’90s, when I programmed neural nets in college. Now I’m a US diplomat with a technology fetish who also writes fiction in his spare time (my current series is on afterlives in a simulated universe, and my next novel focuses on the post-singularity consequences for humanity).
But more importantly… starting next summer I will join the faculty of the George Washington University as a Science, Technology, and Foreign Policy fellow. I’ll research international implications of AI – lethal autonomous drones and impacts on international relations from knowledge-worker displacement, shifting balances of trade, etc.
I’m policy-heavy, with a decent economics background and enough technical knowledge to bridge the gap between DC and Silicon Valley. Happy to help where I can, and looking forward to joining the community more formally next summer.
More on me here… https://www.linkedin.com/in/mattchessen
Delurking for future AI Safety people.
Brief bio:
Computational Neuroscience B.S. from USC
2 years experience in Dr. Bartlett Mel’s group
Been working as a software/data engineer in SF for the past 4 years.
Submitted 3 PRs to OpenAI’s public repos in the past month. Continuing to help out there with documentation, bug fixes.
Skilling up – Python best practices, Deep Learning AWS infrastructure, fast.ai.
I’m trying to unite a peer group of driven ‘amateur’ AI safety contributors. There is so much work to be done that I don’t believe you have to be a math competition winner to make a difference. Like a metal chain, AI alignment is only as strong as its weakest link. If we can make a single alignment link a bit stronger, it could save humanity. A single line of code could save your loved ones. What can we do? I have a few ideas.
If you are TAKING ACTION, say hi to me at andrew.schreiber1@gmail.com. I’ll be hunting down the people who already replied here.