Richard Clarke and R.P. Eddy recently published *Warnings*, a book in which they try to identify “those rare people who… have accurate visions of looming disasters.” The opening chapter explains the aims of the book:
…this book will seek to answer these questions: How can we detect a real Cassandra among the myriad of pundits? What methods, if any, can be employed to better identify and listen to these prophetic warnings? Is there perhaps a way to distill the direst predictions from the surrounding noise and focus our attention on them?
…As we proceeded through these Cassandra Event case studies in a variety of different fields, we began to notice common threads: characteristics of the Cassandras, of their audiences, and of the issues that, when applied to a modern controversial prediction of disaster, might suggest that we are seeing someone warning of a future Cassandra Event. By identifying those common elements and synthesizing them into a methodology, we create what we call our Cassandra Coefficient, a score that suggests to us the likelihood that an individual is indeed a Cassandra whose warning is likely accurate, but is at risk of being ignored.
Having established this process for developing a Cassandra Coefficient based on past Cassandra Events, we next listen for today’s Cassandras. Who now among us may be accurately warning us of something we are ignoring, perhaps at our own peril?
Below is a sampling of what they say in the chapter on risks from advanced AI systems. Note that I’m merely quoting their take, not necessarily endorsing it. (Indeed, there are significant parts I disagree with.)