Luke Muehlhauser

New Stephen Hawking talk on the future of AI

May 12, 2015 by Luke · 3 Comments

At Google Zeitgeist, Hawking said:

Computers are likely to overtake humans in intelligence at some point in the next hundred years. When that happens, we will need to ensure that the computers have goals aligned with ours.

It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

Artificial intelligence research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In the near term, world militaries are considering starting an arms race in autonomous-weapon systems that can choose and eliminate their own targets, while the U.N. is debating a treaty banning such weapons. Autonomous-weapon proponents usually forget to ask the most important question: What is the likely endpoint of an arms race, and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about long-term controllability of ever-more-advanced AI systems, should we arm them, and turn over our defense to them? In 2010, computerized trading systems created the stock market “flash crash.” What would a computer-triggered crash look like in the defense arena? The best time to stop the autonomous-weapons arms race is now.

In the medium term, AI may automate our jobs, to bring both great prosperity and equality.

Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

An explosive transition is possible, although it may play out differently than in the movies. As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity. One can imagine such technology out-smarting financial markets, out-inventing human researchers, out-manipulating human leaders, and potentially subduing us with weapons we cannot even understand.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

In short, the advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity, so we should plan ahead. If a superior alien civilization sent us a text message saying “We’ll arrive in a few decades,” would we just reply, “OK. Call us when you get here. We’ll leave the lights on”? Probably not, but this is more or less what has happened with AI.

Little serious research has been devoted to these issues, outside a few small nonprofit institutes. Fortunately, this is now changing. Technology pioneers Elon Musk, Bill Gates, and Steve Wozniak have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. Many of the world’s leading AI researchers recently signed an open letter calling for the goal of AI to be redefined from simply creating raw, undirected intelligence to creating intelligence directed at benefiting humanity.

The Future of Life Institute, where I serve on the scientific advisory board, has just launched a global research program aimed at keeping AI beneficial.

When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technology such as nuclear weapons, synthetic biology, and strong artificial intelligence, we should instead plan ahead, and aim to get things right the first time, because it may be the only chance we will get.

I’m an optimist, and don’t believe in boundaries, neither for what we can do in our personal lives, nor for what life and intelligence can accomplish in our universe. This means that the brief history of intelligence that I have told you about is not the end of the story, but just the beginning of what I hope will be billions of years of life flourishing in the cosmos.

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.

Filed Under: Quotes

Comments

  1. Joe says

    May 13, 2015 at 5:58 pm

    Isn’t all this AI talk just conceptually impossible? I understand that computer research is important, but AI just seems silly.

    http://edwardfeser.blogspot.com/2015/02/accept-no-imitations.html?m=1

    http://edwardfeser.blogspot.com/2013/12/zombies-shoppers-guide.html?m=1

    • Sam says

      May 14, 2015 at 1:33 pm

      No, it really is alarmingly reasonable and believable. Greater minds than mine will have to decide whether “a few decades” is a reasonable estimate, but superintelligent AI is possible. Think about it this way: your brain came into existence through random changes over eons. It is completely believable that an intelligent species deliberately trying to create the same thing, in the least random way possible, could do it in the blink of an eye on an evolutionary timescale.

  2. atleta says

    May 16, 2015 at 10:00 am

    There isn’t anything conceptually impossible about AI. There is no natural constraint or law that we know of that makes AI impossible. Nobody has proven it impossible, unlike, e.g., a perpetual motion machine. Nobody has proven it possible either, but if we drop the ‘A’ from AI, we’re left with intelligence, and we have proof that human-level intelligence exists. It’s us.

    And if it could be created once (by evolution), then it can at least be copied. So in the worst case, we should be able to create human-level intelligence by modeling (copying) the human brain. But that very probably won’t be needed: advances in AI research show that (not very surprisingly) we don’t have to make an exact copy.

    So even if it seems ‘silly’, it isn’t. It’s very real and close. Flying probably seemed silly to most people 2,000 years ago, even though they could see that birds fly. Then, around a hundred years ago, it didn’t seem so silly anymore to a lot of people. Still, for most people it took someone actually building an airplane, and thus proving it possible, before they stopped believing it wasn’t.


