Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. 1 However, I often detect what Bryan Caplan has called a “missing mood”: a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened. Nobody listened… Maybe [one day] they will [listen]. So far they haven’t.
…Normally the way regulations work is very slow… Usually it’ll be something, some new technology, it will cause damage or death, there will be an outcry, there will be an investigation, years will pass, there will be some kind of insight committee, there will be rulemaking, then there will be oversight, eventually regulations. This all takes many years… This timeframe is not relevant to AI. You can’t take 10 years from the point at which it’s dangerous. It’s too late.
…I was warning everyone I could. I met with Obama, for just one reason [to talk about AI danger]. I met with Congress. I was at a meeting of all 50 governors, I talked about AI danger. I talked to everyone I could. No one seemed to realize where this was going.
Moreover, I believe Musk when he says that his ultimate purpose for founding Neuralink is to avert an AI catastrophe: “If you can’t beat it, join it.” Personally, I’m not optimistic that brain-computer interfaces can avert AI catastrophe — for roughly the reasons outlined in the BCIs section of Superintelligence ch. 2 — but Musk came to a different assessment, and I’m glad he’s trying.
Whatever my disagreements with Musk (I have plenty), it looks to me like Musk doesn’t just profess concern about AI existential risk. 2 I think he feels it in his bones, when he wakes up in the morning, and he’s spending a significant fraction of his time and capital to try to do something about it. And for that I am grateful.
Comments:
My working theory is that people have an innate, biochemically induced, and near-static level of anxiety and concern, which can be briefly elevated or reduced in exceptional circumstances but tends back to the norm over months and years. And with the exception of a few high-achieving workaholics and other generally dissatisfied and stressed-out types (high in trait neuroticism + conscientiousness), like Musk, that average level of anxiety is very low. People can be confronted with a new and serious existential risk – even to the point of a death sentence – and within a few weeks or months their level of anxiety has ebbed back to normal. We are not rational beings, we are rationalising ones, and fatalism is a very comfortable pair of shoes for adults who have already embraced their mortality.
I too (like probably most here) had my period of rage against the machines, haranguing all my acquaintances, who just took it with bemused tolerance and then basically ignored it. Their and the general populace’s slow adjustment to the necessities of Covid in Jan–Feb 2020 – while I was prepping for lockdown in the week after the first reports from Wuhan, and again haranguing everyone to push for quarantining – showed me once and for all how passively disinterested and ovine the population is. With AI, my wife got angry at what she considered a negative, depressing attitude – so I just shut up about the bleak existential dangers for the sake of my relationships. I have young kids and see a high, going on overwhelming, probability that their lives will be ended by AI, but lacking a sufficiently large pool of equally worried (and politically powerful) believers, or the personal economic and personality toolkit to influence the masses, there is nothing I can do about it (Elon, with all his reach, demonstrates that clearly). So I am putting on those comfortable shoes and relaxing into hedonistic fatalism.
AI is coming; I don’t think there is any realistic means of stopping that, given that China and the USA are in a cold war. And given the PRC’s priorities and focus, I judge it will most likely end up in the hands of China – a further existential risk, given their demonstrated hegemonic Han supremacism.
The one thing that might help humanity’s chances of survival is to prevent large numbers of independent AIs – any one of which might wipe us out with some level of probability, where just one or two might not (the summation of those anthropocidal probabilities being lowest with the smallest number of independent AIs). A rough sketch of that arithmetic follows below.
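A minimal sketch of the commenter’s arithmetic, assuming a toy model in which each of n independent AIs destroys humanity with an independent per-AI probability p (the value p = 0.05 below is purely illustrative, not taken from the comment):

```python
# Toy model (assumed, for illustration only): each of n independent AIs is
# "anthropocidal" with independent probability p, so humanity survives
# only if none of them wipes us out.

def survival_probability(p: float, n: int) -> float:
    """P(humanity survives) = (1 - p) ** n under the independence assumption."""
    return (1.0 - p) ** n

# Survival probability falls quickly as the number of independent AIs grows.
for n in (1, 2, 10, 100):
    print(f"n={n:3d}: P(survival) = {survival_probability(0.05, n):.3f}")
```

Under these (assumed) numbers, survival probability drops from 0.95 with one AI to under 0.01 with a hundred, which is the commenter’s point about keeping the number of independent AIs small.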
As an oblique aside: the recent reports on UFOs, with an apparent non-zero probability that they are here watching us (explaining the Fermi paradox via the Zoo hypothesis), and seemingly making increasingly careful efforts to avoid being seen over several generations (the historic reports are some of the most compelling), paint an interesting and hopeful picture of the AI alignment problem. UFOs would most likely be von Neumann AIs, built to bridge the gulfs of space and time, and they clearly demonstrate an interest in our development and welfare. If they are here, then I posit that a human superintelligent AI would likely quickly see their revelation (and maybe that is the planned end point of passive alien observation) – or maybe they are here to prevent bad AI neighbours from being developed; they could have created the means to wipe out dangers eons ago.
Re: Robert
“they clearly demonstrate an interest in our development and welfare”
Clearly not the case – *humans* eradicated smallpox, after ~2000 years. It wouldn’t have destroyed lives for hundreds of years in the presence of benevolent minds capable of interstellar travel.
As for the recent UAP craze: it certainly would be interesting if there were multi-modality, multi-sensor detections of UFOs performing maneuvers considered impossible with current tech, but *are there indeed* such observations? I doubt it, and therefore feel reluctant to invent explanations for a phenomenon that (with subjectively overwhelming probability) does not exist.
https://www.lesswrong.com/s/zpCiuR4T343j9WkcK/p/5JDkW4MYXit2CquLs
Reading this post and the comments made me realize I’m not alone in feeling frustrated and isolated by the zombie-like disinterest people have in AI alignment.
I haven’t thought deeply about UFOs, but I have acquired a more definite sense of optimism that it’s possible for the field to solve alignment and build friendly AGI before China and others do. The fact that there are so many fundamental questions yet to be answered about alignment and agency, in all their flavors, gives me hope that the land of ideas is still fertile; it just needs more farmers.
Let’s get to work and build a world of wonders for your kids and their kids also!
Where did you get the Han supremacism from? Was it from my Asian master race joke from years ago? lol
There is no missing mood, because the destruction of mankind by AI is not especially dire compared to the other dangers everyone faces. The probability of getting cancer in the next few decades, or of a loved one getting cancer, is about the same magnitude as AI extinction, or even higher. Think about the long months or even years of suffering: pain, sickness, hopelessness, and humiliating bodily decay. Compared to that, getting killed in an instant along with everyone else actually looks pretty good.
Of course, most people, when prompted, *profess* to care about the fate of mankind, but in the vast majority of cases that is a lie.
I guess most people only really start caring about the fate of mankind if the hero-module in their mind gets activated, and for this to happen they would need to believe that they could really make a difference.
Elon Musk cares about the fate of mankind while most people don’t, because he has really good reasons to believe he can make a difference.
The quotes are missing the Musk trichotomy (Apr 10, 2019): symbiosis, irrelevance, or doom.
https://twitter.com/elonmusk/status/1116092380753436672