- MIRI’s first technical report in a long time on the value-loading problem in Friendly AI.
- Wilson Center & MIT, “Creating a Research Agenda for the Ecological Implications of Synthetic Biology.”
- Ward & Metternich argue in Foreign Policy that “predicting the future is easier than it looks.”
- GiveWell lays out their reasoning on why some U.S. policy areas look more promising for philanthropic intervention than others.
- More evidence that disagreement is less resolvable when people are thinking about their positions in far mode.
- On the unusual effectiveness of logic in computer science.
- Mandiant’s 2014 cybersecurity trends report.
“The stabilization of environments” is a paper about AIs that reshape their environments to make it easier to achieve their goals. This is typically called enforcement, but they prefer the term stabilization because it “sounds less hostile.”
“I’ll open the pod bay doors, Dave, but then I’m going to stabilize the ship… ”
I suppose that’s more likely to provoke public discussion, but… will much good come of that public discussion? The public had a needless freak-out about in vitro fertilization back in the 60s and 70s and then, as soon as the first IVF baby was born in 1978, decided they were in favor of it.
Someone recently suggested I use an “onion strategy” for the discussion of novel technological risks. The outermost layer of the communication onion would be aimed at the general public, and would focus on benefits rather than risks, so as not to provoke an unproductive panic. A second layer, for a specialist audience, could include a more detailed elaboration of the risks. The most complete discussion of risks and mitigation options would be reserved for technical publications that are read only by professionals.
Eric Drexler seems to wish he had more successfully used an onion strategy when writing about nanotechnology. Engines of Creation included frank discussions of both the benefits and risks of nanotechnology, including the “grey goo” scenario that was discussed widely in the media and used as the premise for the bestselling novel Prey.
Ray Kurzweil may be using an onion strategy, or at least keeping his writing in the outermost layer. If you look carefully, chapter 8 of The Singularity is Near takes technological risks pretty seriously, and yet it’s written in such a way that most people who read the book seem to come away with an overwhelmingly optimistic perspective on technological change.
George Church may also be following an onion strategy. Regenesis likewise contains a chapter on the risks of advanced bioengineering, but it’s presented as an “epilogue” that many readers will skip.
Perhaps those of us writing about AGI for the general public should try to discuss:
- astronomical stakes rather than existential risk
- Friendly AI rather than AGI risk or the superintelligence control problem
- the orthogonality thesis and convergent instrumental values and complexity of values rather than “doom by default”
MIRI doesn’t have any official recommendations on the matter, but these days I find myself leaning toward an onion strategy.
Immediately before launching this new site I was posting regular assorted links to Facebook. I’ve collected those links below.
May 31st links
- Scott Aaronson’s reply to Giulio Tononi’s reply to Scott Aaronson on the integrated information theory of consciousness.
- Equity market guru accuracy ratings, based on 6500+ forecasts from 68 gurus.
- Machines vs. lawyers.
- Robin Goldstein pranks Wine Spectator with a fake restaurant.
- Steven Pinker’s next book is available for pre-order.
May 29th links
- FLI’s inaugural talks and panel: “The future of technology, benefits and risks” (video).
- How much do Y Combinator founders earn?
- Megaprojects (> $1B budget) almost never finish on time, on budget, and with the promised benefits.
- Google has a working prototype of a fully self-driving car with no steering wheel (video).
- SIGGRAPH 2014 technical papers preview (video).
May 27th links
- Sandberg, “Smarter policymaking through improved collective cognition.”
- Johann Schumann on high-assurance systems.
- “Communicating values to autonomous agents.”
- Robotic arm catches everything tossed in its direction (video).
Nano (p. 113) quotes Eric Drexler describing the time he first met Richard Feynman at a party:
We talked about the PNAS article [on nanotechnology], and generally he indicated that, yeah, this was a sensible thing… at one point I was talking about the need for institutions to handle some of the problems [nanotechnology] raised, and [Feynman] remarked [that] institutions were made up of people and therefore of fools.
Feynman sounds downright Yudkowskian on this point, if you ask me. 🙂