Richard Clarke and R.P. Eddy recently published Warnings, a book in which they try to identify “those rare people who… have accurate visions of looming disasters.” The opening chapter explains the aims of the book:
…this book will seek to answer these questions: How can we detect a real Cassandra among the myriad of pundits? What methods, if any, can be employed to better identify and listen to these prophetic warnings? Is there perhaps a way to distill the direst predictions from the surrounding noise and focus our attention on them?
…As we proceeded through these Cassandra Event case studies in a variety of different fields, we began to notice common threads: characteristics of the Cassandras, of their audiences, and of the issues that, when applied to a modern controversial prediction of disaster, might suggest that we are seeing someone warning of a future Cassandra Event. By identifying those common elements and synthesizing them into a methodology, we create what we call our Cassandra Coefficient, a score that suggests to us the likelihood that an individual is indeed a Cassandra whose warning is likely accurate, but is at risk of being ignored.
Having established this process for developing a Cassandra Coefficient based on past Cassandra Events, we next listen for today’s Cassandras. Who now among us may be accurately warning us of something we are ignoring, perhaps at our own peril?
Of the risks covered in the book, Clarke says he’s most worried about sea level rise, and Eddy says he’s most worried about superintelligence.
Below is a sampling of what they say in the chapter on risks from advanced AI systems. Note that I’m merely quoting from their take, not necessarily agreeing with it. (Indeed, there are significant parts I disagree with.)
Superintelligence is an artificial intelligence that will be “smarter” than its human creators in all the metrics we define as intelligence. Superintelligence does not yet exist, but when it does, some believe it could solve every major problem of humankind: aging and disease, energy and food shortages, climate change. Self-perpetuating and untiring, this type of AI will improve at a remarkably fast rate, and eventually surpass the level of complexity humans can understand. This is the promise of AI, and possibly its peril.
Experts have given a name to this era of the hyperintelligent computer: the “intelligence explosion.” Nearly every computer and neural scientist with expertise in the field believes that the intelligence explosion will happen in the next seventy years; most predict it will happen by 2040… As science fiction writer and computer scientist Vernor Vinge wrote, “The best answer to the question, ‘Will computers ever be as smart as humans?’ is probably ‘Yes, but only briefly.’”
As the excitement grows, so too does fear. The astrophysicist Dr. Stephen Hawking warns that AI is “likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right.” Hawking is not alone in his concern about superintelligence. Icons of the tech revolution, including former Microsoft chairman Bill Gates, Amazon founder Jeff Bezos, and Tesla and SpaceX CEO Elon Musk, echo his concern. And it terrifies Eliezer Yudkowsky.
Eliezer has dedicated his life to preventing artificial intelligence from destroying humankind… His work focuses on foundational mathematical research to ensure (he hopes) that artificial intelligence ultimately has only a positive impact on humanity. The ultimate problem: how to keep humanity from losing control of a machine of its own creation, to prevent artificial intelligence from becoming, in the words of James Barrat in the title of his 2013 book, Our Final Invention.
A divisive figure, Yudkowsky is well known in academic circles and the Silicon Valley scene as the coiner of the term “friendly AI.” His thesis is simple, though his solution is not: if we are to have any hope against superintelligence, we need to code it properly from the beginning. The answer, Eliezer believes, is one of morality. AI must be programmed with a set of ethical codes that align with humanity’s. Though it is his life’s only work, Yudkowsky is pretty sure he will fail. Humanity, he tells us, is likely doomed.
…
From Yudkowsky’s whimsical graph above, we can get a hint of the predicted power of superintelligence (here called “recursively self-improved AI”)…
Elon Musk calls creating artificial intelligence “summoning the demon” and thinks it’s humanity’s “biggest existential threat.” When we asked Eliezer what was at stake, his answer was simple: everything. Superintelligence gone wrong is a species-level threat, a human extinction event.
Humans are neither the fastest nor the strongest creatures on the planet but dominate for one reason: humans are the smartest. How might the balance of power shift if AI becomes superintelligence? Yudkowsky told us, “By the time it’s starting to look like [an AI system] might be smarter than you, the stuff that is way smarter than you is not very far away.” He believes “this is crunch time for the whole human species, and not just for us but for the [future] intergalactic civilization whose existence depends on us. This is the hour before the final exam and we’re trying to get as much studying done as possible.” It is not personal. “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”
…
Yudkowsky believes superintelligence must be designed from the start with something approximating ethics. He envisions this as a system of checks and balances so that advanced AI growth is auditable and controllable, so that even as it continues to learn, advance, and reprogram itself, it will not evolve out of its own benign coding. Such preprogrammed measures will ensure that superintelligence will “behave as we intend even in the absence of immediate human supervision.” Eliezer calls this “friendly AI.”
According to Yudkowsky, once AI gains the ability to broadly reprogram itself, it will be far too late to implement safeguards, so society needs to prepare now for the intelligence explosion. Yet this preparation is complicated by the sporadic and unpredictable nature of scientific advancement and the numerous secret efforts to create superintelligence around the world. No supranational organization can track all of the efforts, much less predict when or which one of them will succeed.
Eli and his supporters believe a “wait and see” approach (a form of satisficing) is a Kevorkian prescription. “[The birth of superintelligence] could be five years out; it could be forty years out; it could be sixty years out,” Yudkowsky told us. “You don’t know. I don’t know. Nobody on the planet knows. And by the time you actually know, it’s going to be [too late] to do anything about it.”
…
Yudkowsky rejects the idea that a superintelligence should, or could, be tailored to parochial national security interests, believing instead that any solution must be considered at the human species level. “This stuff does not stop being lethal because it’s in American hands, or Australian hands, or even in Finland’s hands,” he told us, mildly annoyed…
…
How concerned should we really be about this threat?
Yudkowsky’s warning is certainly affected by Initial Occurrence Syndrome. Never before has our species encountered something of greater intelligence, so we are ill equipped to believe, let alone foresee and plan for, the complete scope of the problem. At the same time, Complexity Mismatch suggests that decision makers may not be able or willing to digest and distill the issue and its possible solutions. Moreover, the issue certainly seems outlandish to many, the stuff of sci-fi movies (but likely won’t in even ten short years). Killer robots? Machines taking over the world? Does the seemingly fantastical nature of superintelligence result in a dangerous and dismissive bias against the issue? Certainly it has.
The audience for Eli’s warning suffers from diffusion of responsibility. There is no person or office in the U.S. government or any international organization who is responsible for saving the world from superintelligence gone rogue. Yudkowsky’s concern is so novel and perhaps so widely dismissed that there is not even an accepted scientific convention on AI safety protocols. A legitimate attempt at regulation would have to be done multilaterally, at the highest levels of every industrialized government (likely via UN treaty) and would require intrusive and likely surprise inspections of both government and commercial labs.
Yudkowsky is a data-driven expert who has devoted his life to preventing this disaster. He exhibits all of the Cassandra characteristics and is missing the Cassandra weakness of low social power. Even so, his singular focus and his adherence to alternative social norms have alienated others who might serve as advocates, even those who might play a role in a solution. Is our Cassandra’s off-putting personality an insurmountable obstacle?
Finally, many of Eliezer Yudkowsky’s critics have vested interests against regulations or other efforts to slow the development of AI. From those in industry to those in government, superintelligence holds the promise of an unmatched competitive edge. Can they continue to be objective parties to the discussion, or have they grown biased against the threat?
Yudkowsky’s suggested solution, a global Manhattan Project to develop safe AI, would be one of the most incredibly complicated multilateral bureaucratic solutions we could imagine. An alliance of the United States and select allies may be a more reasonable solution, but nothing of this sort has even been considered. We have been warned that we have one chance, and one chance only, to get it right. Perhaps we should pay attention to Eliezer Yudkowsky before we open a door we can never again close.
Hi Luke,
We have never met, yet I have been researching AI and The Future of Artificial Intelligence.
I came across your name when looking into the Machine Intelligence Research Institute.
I liked that you’re involved with the Open Philanthropy Project and AI.
I was wondering if you wrote https://www.openphilanthropy.org/reasoning-transparency#Open_with_a_linked_summary_of_key_takeaways ?
I also wanted to ask if you would be willing to read and review a research paper that I wrote on The Future of Artificial Intelligence. I have collected data from many sources, with quotes from Stephen Hawking, His Holiness the Dalai Lama, Dmitry Itskov, and many leaders in the field.
It starts as follows:
While attending the 33rd Kalachakra Initiation by His Holiness the XIV Dalai Lama in Ladakh, India (July 3-14, 2014), I had a dream that the Dalai Lama made an announcement that he would be reincarnated as Artificial Intelligence. So I started to research the dream further and to put forth the idea to our global community.
In the book Gentle Bridges: Conversations with the Dalai Lama on the Science of Mind (by Jeremy Hayward and Francisco Varela, 2001), the Dalai Lama states that “there is a possibility that a scientist in the next life… could be reborn in a computer half-human and half-machine reincarnation.”
I believe this paper to hold some keys to what is needed to create the beautiful world that we all want to see.
You can email me if you are interested.
Thanks in Advance,
Glenn-Philip
I have known Eliezer for a long time and Eric Drexler even longer, and I saw Vernor Vinge a couple of weeks ago.
It is so hard to write about the future as outlined by these people that I resorted to fiction. The story, called “The Clinic Seed,” was first posted on Eliezer’s Shock Level 4 list; it is located at http://www.terasemjournals.org/GNJournal/GN0202/henson1.html if you want to read it.
It is my opinion that the side effects of the singularity will eliminate biological humanity entirely, but a story must have characters to be a story. So some of it is written from the viewpoint of individuals in a small remnant group.
The point of the story (to the extent it has one) is that even friendly AI can be expected to have effects we might not want, but can’t avoid.
Hi Luke,
It has been a while.
I just ran across this piece again.
I feel like I never got the debrief from you on what you disagree with in our AI chapter. I certainly learned a ton from you about AI, so I’d love to hear more.
Let me know next time you are around NYC.
Take care and thanks for everything,
R.P.