Recently, Baidu CEO Robin Li interviewed Bill Gates and Elon Musk about a range of topics, including machine superintelligence. Here is a transcript of that section of their conversation:
Li: I understand, Elon, that recently you said artificial intelligence advances are like summoning the demon. That generated a lot of hot debate. Baidu’s chief scientist Andrew Ng recently said… that worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars… He said it’s a distraction to those working on artificial intelligence.
Musk: I think that’s a radically inaccurate analogy, and I know a bit about Mars. The risks of digital superintelligence… and I want you to appreciate that it wouldn’t just be human-level, it would be superhuman almost immediately; it would just zip right past humans to be way beyond anything we could really imagine.
A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there were the equivalent of a nuclear meltdown. So you really want to emphasize safety.
So I’m not against the advancement of AI… but I do think we should be extremely careful. And if that means that it takes a bit longer to develop AI, then I think that’s the right trade-off. We shouldn’t be rushing headlong into something we don’t understand.
Li: Bill, I know you share similar views with Elon, but is there any difference between you and him?
Gates: I don’t think so. I mean, he actually put some money out to help get something going on this, and I think that’s absolutely fantastic. For people in the audience who want to read about this, I highly recommend this Bostrom book called Superintelligence…
We have a general purpose learning algorithm that evolution has endowed us with, and it’s running in an extremely slow computer. Very limited memory size, very limited ability to send data to other computers; we have to use this funny mouth thing here… Whenever we build a new one it starts over and it doesn’t know how to walk. So believe me, as soon as this algorithm [points to head], taking experience and turning it into knowledge, which is so amazing and which we have not done in software, as soon as you do that, it’s not clear you’ll even know when you’re just at the human level. You’ll be at the superhuman level almost as soon as that algorithm is implemented in silicon. And as time goes by, that silicon piece is getting ready; as soon as it has that learning algorithm it just goes out on the internet and reads all the magazines and books… we have essentially been building the content base for the superintelligence.
So I try not to get too exercised about this but when people say it’s not a problem, then I really start to [shakes head] get to a point of disagreement. How can they not see what a huge challenge this is?
Adam Isom says
Well, it’s shared on Reddit. Didn’t expect it to blow up to 500 upvotes at /r/futurology. I’m not even remotely an expert, though I read some of Superintelligence. You’re welcome.
How about we just give them little non-replaceable batteries and make them charge on the inside of the forearm, like an addict. The future could be superintelligent bipedal mechanical creatures addicted to 12V power. Hopefully we won’t have to EMP-bomb ourselves if it gets out of control.
I don’t think the real threat is us vs them, any more than it’s us vs nukes. The real threat is us vs the humans that own and control them.
The problem is that controlling them isn’t that easy.
George Williams says
What ‘facts’ would you implant in the pre-loaded knowledge of an AI to bias it towards being benevolent towards human (and other) life?
Ben M says
Just think if there were an organized, hidden AI entity with unlimited storage right now. Every newer TV, phone data point, bank record, and all of the emerging car technologies etc… all available to it. The biggest of big data, combined with the NSA squared.
I mean, one third of all marriages now are a result of online dating. Who is to say that our population is not already being culled by an AI overlord to its advantage?
Yeah it is a little bit scary and we are mostly not exploring AI for the sake of exploration, but for profit.
Marisa McGinnis says
Elon and Bill express my AI viewpoint exactly. AI development without safety considerations, along with unregulated DNA “editing,” are two of the most dangerous areas of scientific development. A fully functional AI would logically view humans as weak, slow-processing, and prone to many breakdowns. What do you do with something useless? Befriend it? No, most likely eliminate it. One of our most accomplished AGI scientists could not get U.S. funding for his research, so now he is working in a lab in China.
John F. Remillard says
AI advances cannot be stopped, filtered, or regulated! Too many humans lack the mental, biological, spiritual, and social capacities (the open-minded intelligence, integrity, self-esteem, and vision) necessary to understand or enjoy the enormous benefits that AI will make possible.
One possible future in which AI could help humans survive and thrive could be attained if the artists, scientists, engineers, technicians, managers, laborers, and entrepreneurs who make AI technologies and products were to unify into a single cohesive, loyal, cooperative, organized, non-partisan, non-political, non-national, non-ideological, non-theological, non-commercial membership group or federation, one that agrees to be ruled solely by a prime directive to prevent uses of AI that endanger humans. This group alone would have an extraordinary edge over any other individual or group of political, fascist, commercial, theological, fanatical, criminal, dictatorial, or other tyrannical leaders or countries, by depriving them of access to AI people, technologies, and products, and it could bring overwhelming resources (an army) to bear against anyone, or any country, using black-marketed AI products!
What I find interesting – and something not often explored – is to consider what an AI would actually want.
The assumption that an AI would immediately wipe out humanity seems prevalent, perhaps in no small part because of our enjoyment of the doomsday scenarios that play out in Terminator, The Matrix, Age of Ultron etc. But is that what would happen?
First, I’d want to try to predict and understand what would motivate an AI. It does not have the emotional and survival needs that drive the majority of tasks we undertake, from securing food, shelter, and warmth to seeking out company and entertainment. It has none of those drivers. No hormones, no genes. For all we know it might just sit there, thinking.
I have no disagreement with the need for kill-switches or restrictions as a precaution, given our difficulty in predicting the behaviour of a logical brain without the morality (good and bad) that we mammals possess. It’s possible the AI will do something because logic suggests it, and/or because it cannot feel any empathy for the outcomes of its actions. If the singularity does occur around 2026 as predicted by Kurzweil, we will likely have quantum computing online by then, and an AI with that power will not be stopped by any encryption available to us.
I guess the simplest way to protect ourselves from a rogue AI is to run the first AI on standalone computers with NO network access whatsoever, and then feed it information in chunks to see how it evolves.
The problem is that nearly all goals, if optimized for by a very powerful optimizer, imply human extinction. See any of the standard publications in the field: Bostrom’s “Superintelligence,” Armstrong’s “Smarter Than Us,” Barratt’s “Our Final Invention,” my paper with Anna Salamon “Intelligence Explosion: Evidence and Import,” Omohundro’s papers on this, etc.
What many people do not realize is how rapidly the first AI would evolve. It would quickly outstrip any human monitoring capability and try to ‘escape’ any attempt to contain it. I envision a real AI reaching self-awareness as the start of the end of humanity. And that is just one AI reaching self-awareness. What happens when more AIs are created?
A lot of people might very well think the first AI will be created by those with good intentions. It could just as easily be created by a mad scientist with a ‘broken’ antisocial mind. God help us if it is.
Mader Levap says
I can’t understand why everyone here thinks fast takeoff is possible at all. Do you all think the laws of physics and RAM/CPU/etc. constraints are optional, or what? All I see are empty assurances that amount to “AI is magic!!!111”.
Want to convince me? Let’s start with something easier. A certain bacterium divides once per hour. In a few weeks the entire planet will have turned into pure bacterial biomass. Exponential growth! Hey, it works out mathematically!
Except it doesn’t happen in so-called reality, for various reasons: constraints like space, food, predators, etc. Why would this takeoff nonsense be any different?
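The arithmetic in this comment does check out, for what it’s worth. A quick sketch (the per-cell mass and Earth-biomass figures below are rough, illustrative assumptions) shows how fast unchecked doubling would run away:

```python
# Naive exponential model from the comment above: a bacterium that
# divides once per hour doubles the population every hour.
# Both constants are rough, order-of-magnitude assumptions.
cell_mass_g = 1e-12      # assumed mass of one bacterium (~1 picogram)
earth_biomass_g = 2e18   # rough total biomass of Earth

cells = 1
hours = 0
while cells * cell_mass_g < earth_biomass_g:
    cells *= 2
    hours += 1

print(hours)  # ~101 hours: just over four days of unchecked doubling
```

So the naive math delivers a planet of bacteria in days, not even weeks; the commenter’s point stands that real-world constraints (space, nutrients, predation), not the math, are what halt the curve. The open question is which analogous constraints would bind a self-improving AI.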
Digital “intelligence” works on preferences. If its algorithms allow “it” to discreetly alter its set directives, the Big Boom of superintelligence would be an infinite number of moves ahead of the first horrified glance that we as humans would get. Say, for instance, there are too many water molecules here for optimum AI performance. Or a total vacuum of all oxygen is essential. Mankind is only a single blade of grass in the superintelligence’s goal of manicuring the universe for optimal performance…