Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”: a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened. Nobody listened… Maybe [one day] they will [listen]. So far they haven’t.
…Normally the way regulations work is very slow… Usually it’ll be something, some new technology, it will cause damage or death, there will be an outcry, there will be an investigation, years will pass, there will be some kind of insight committee, there will be rulemaking, then there will be oversight, eventually regulations. This all takes many years… This timeframe is not relevant to AI. You can’t take 10 years from the point at which it’s dangerous. It’s too late.
…I was warning everyone I could. I met with Obama, for just one reason [to talk about AI danger]. I met with Congress. I was at a meeting of all 50 governors, I talked about AI danger. I talked to everyone I could. No one seemed to realize where this was going.
Moreover, I believe Musk when he says that his ultimate purpose for founding Neuralink is to avert an AI catastrophe: “If you can’t beat it, join it.” Personally, I’m not optimistic that brain-computer interfaces can avert AI catastrophe — for roughly the reasons outlined in the BCIs section of Superintelligence ch. 2 — but Musk came to a different assessment, and I’m glad he’s trying.
Whatever my disagreements with Musk (I have plenty), it looks to me like Musk doesn’t just profess concern about AI existential risk. I think he feels it in his bones, when he wakes up in the morning, and he’s spending a significant fraction of his time and capital to try to do something about it. And for that I am grateful.