Nicely put, FHI

Re-reading Ross Andersen’s piece on Nick Bostrom and FHI for Aeon magazine, I was struck by several nicely succinct explanations given by FHI researchers — ones I’ll be borrowing for my own conversations with people about these topics:

“There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,” Bostrom told me. “By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilisation startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.”

“Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears,” Dewey told me. “But we’re terrible at [many other things]… Think about how long it took humans to arrive at the idea of natural selection. The ancient Greeks had everything they needed to figure it out. They had heritability, limited resources, reproduction and death. But it took thousands of years for someone to put it together. If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.”

“The difference in intelligence between humans and chimpanzees is tiny,” [Armstrong] said. “But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.”

“The basic problem is that the strong realisation of most motivations is incompatible with human existence,” Dewey told me. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”

[Bostrom] told me that when he was younger, he was more interested in the traditional philosophical questions… “But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,” he said. “Some of them have been with us for thousands of years. It’s unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.”


  1. zac says

    I really like ‘building giant computers’ as a possible path from unfriendly AI to human extinction. People seem to get hung up on the ‘paperclip’ part of paperclip-maximizers, while ‘building giant computers’ easily segues into terminal vs instrumental values/goals, Omohundro’s universal AI drives, and other more productive conversations.
