Jeff Hawkins, inventor of the Palm Pilot, has since turned his attention to neuro-inspired AI. In response to Elon Musk’s and Stephen Hawking’s recent comments on long-term AI risk, Hawkins argued that AI risk worriers suffer from three misconceptions:
- Intelligent machines will be capable of [physical] self-replication.
- Intelligent machines will be like humans and have human-like desires.
- Machines that are smarter than humans will lead to an intelligence explosion.
If you’ve been following this topic for a while, you might notice that Hawkins seems to be responding to something other than the standard arguments (now collected in Nick Bostrom’s Superintelligence) that are the source of Musk et al.’s concerns. Maybe Hawkins is responding to AI concerns as they are presented in Hollywood movies? I don’t know.
First, the Bostrom-Yudkowsky school of concern is not premised on physical self-replication by AIs. Self-replication does seem likely in the long run, but that’s not where the risk comes from.[1] (As such, Superintelligence barely mentions physical self-replication at all.)
Second, these standard Bostrom-Yudkowsky arguments specifically deny that AIs will have human-like psychologies or desires. Certainly, the risk is not premised on such an expectation.[2]
Third, Hawkins doesn’t seem to understand the concept of intelligence explosion being used by Musk and others, as I explain below.
Intelligence explosion isn’t just about computation speed
Hawkins writes:
Some people are concerned about an “intelligence explosion”… where machines that are smarter than humans create machines that are smarter still which create even smarter machines, and so on…
This doomsday scenario can’t happen. Intelligence is a product of learning.
He goes on to correctly describe some limitations on how much smarter a system can be with a mere advantage in computation speed:
Imagine we could create a human (or machine) that thinks 10 times as fast as other humans and has a brain that is 10 times as big. Would this superhuman be able to extend knowledge at 10 times the normal rate? For some purely conceptual domains such as mathematics, it would be possible to greatly accelerate the acquisition of knowledge. However, for most problems, our superhuman would still need to design experiments, collect data, make hypotheses, revise and repeat. If it wanted to extend knowledge of the universe, it still has to build new telescopes and interplanetary probes, send them into space, and wait for the results. If it wanted to understand more about climate change, it would still need to drill ice cores in Antarctica and deploy new measurement devices in the oceans.
But this isn’t the point. The concept of intelligence explosion was never principally about computation speed. Rather, it was always principally about cognitive algorithms.[3]
Yes, AI has benefited hugely from increased computation speed. But much of the benefit has also come from improved algorithms (Grace 2013). The point of the intelligence explosion concept is that once an AI has cognitive algorithms that enable it to surpass human performance in the domain of AI research, it will be able to do the kind of work that AI researchers are doing — discovering better cognitive algorithms — to improve itself, without waiting for further help from the (human) computer scientists. And an awful lot of that would come from “purely conceptual domains such as mathematics,” as Hawkins puts it, though much of it would also likely come from increasing computing power.[4]
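To make the distinction concrete, here is a minimal toy sketch of the difference between a researcher that is merely faster and one whose research output also feeds back into its own cognitive algorithms. This is my own illustration with arbitrary parameter values, not a model taken from Hawkins, Bostrom, or Yudkowsky (2013); it only shows why compounding algorithmic improvement behaves differently from a fixed speed multiplier.

```python
# Toy illustration (not from the sources discussed here): compare research
# progress from a fixed 10x speed advantage with progress from an agent whose
# discoveries improve its own research ability, so output compounds over time.

def speed_only_progress(speedup: float = 10.0, steps: int = 20) -> float:
    """Cumulative progress of a researcher that is simply `speedup` times faster.

    Each step contributes a constant amount of research output, so total
    progress grows linearly with time.
    """
    total = 0.0
    for _ in range(steps):
        total += speedup
    return total


def recursive_progress(capability: float = 10.0,
                       reinvestment: float = 0.1,
                       steps: int = 20) -> float:
    """Cumulative progress of a researcher that reinvests part of each step's
    output into improving its own cognitive algorithms.

    Because capability grows each step, output compounds instead of staying
    flat. `reinvestment` is an arbitrary made-up rate, not an empirical estimate.
    """
    total = 0.0
    for _ in range(steps):
        total += capability
        capability *= 1.0 + reinvestment  # better algorithms -> a better researcher
    return total


if __name__ == "__main__":
    print("fixed 10x speedup:         ", speed_only_progress())
    print("recursive self-improvement:", recursive_progress())
```

With these made-up numbers the gap is modest after twenty steps, but the second curve grows exponentially in the number of steps, which is the qualitative point: the growth comes from improving the algorithms themselves, not from the initial speed advantage.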
For a primer on the actual concept of intelligence explosion being used by those worried about AI risk, read Superintelligence ch. 4. For a more in-depth treatment, see Yudkowsky (2013).[5]
AI risk concern doesn’t imply timelines optimism
Hawkins may also be falling prey to another common misconception — namely, that those concerned about AI risk think existential catastrophe is imminent. Thus, Hawkins writes:
The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions. There won’t be an intelligence explosion. There is no existential threat. This is the reality for the coming decades, and we can easily change direction should new existential threats appear.
But most of the key figures advocating loudly for serious research into the superintelligence control problem, including both myself and Bostrom, do not think intelligence explosion is all that likely in the next couple decades.[6] In fact, as I explain here, Bostrom and I actually have later timelines for AGI than do mainstream AI scientists. Thus,
We advocate more work on the AGI safety challenge today not because we think AGI is likely in the next decade or two, but because AGI safety looks to be an extremely difficult challenge — more challenging than managing climate change, for example — and one requiring several decades of careful preparation.
The greatest risks from both climate change and AI are several decades away, but thousands of smart researchers and policy-makers are already working to understand and mitigate climate change, and only a handful are working on the safety challenges of advanced AI. On the present margin, we should have much less top-flight cognitive talent going into climate change mitigation, and much more going into AGI safety research.
It gets weirder
So, like some risk-dismissive journalists, Hawkins seems to be responding to Hollywood AI scares, rather than to the concerns which actually motivate technologists like Elon Musk and Bill Gates (both of whom have recommended Superintelligence) and AI scientists like Russell, Horvitz, Legg and others.
But wait a minute. What was that Hawkins said?
The machine-intelligence technology we are creating today, based on neocortical principles, will not lead to self-replicating robots with uncontrollable intentions.
Who is this “we” that Hawkins says is building AI technology “based on neocortical principles”? Does Hawkins mean “AI scientists,” or does he mean “[his company] Numenta, specifically”? Certainly many AI scientists wouldn’t say they’re doing AI research that is “based on neocortical principles,” so maybe Hawkins is just saying that Numenta isn’t going to build “self-replicating robots with uncontrollable intentions”?
In a recent video interview, Hawkins repeats some arguments from the article linked above, and then seems to clarify what he means by “we”:
Interviewer: But it is unsettling that a guy like Hans Moravec — he’s at Carnegie Mellon, he heads up one of the largest robotics programs in the world — he predicts that by 2040 there will be no jobs that a human can do that a robot won’t be able to do better.
Hawkins: Well, maybe he knows that. That’s a different thing. I’m not building robots… I’ll concede he’s more expert in robotics. I’m more expert, probably, in… how brains do this… [If he’s right] and there’s some existential risk from robotics, then we ought to be looking into the existential risks from robotics… So if he believes there’s a threat, fine, but it’s not based on what I’m doing, about reverse-engineering the brain.
Wait, what? Musk and Moravec and others aren’t worried that Jeff Hawkins presents an existential threat based on his particular AI work. They’re saying that in the long run, the field of AI in its entirety presents a threat, because — as Stuart Russell puts it — AI is about competent decision-making, and highly competent decision-making presents a threat to humans when those AI decisions aren’t robustly aligned with human interests.
Footnotes:

1. See Superintelligence, ch. 6.
2. See Superintelligence, ch. 7.
3. See the 1965 paper by I.J. Good which introduced the term “intelligence explosion.” Good writes about building “ultraintelligent” machines, and about such machines building still-smarter machines, in the context of discovering novel “principles” and “methods” of human and machine intelligence, not in the context of mere computing speed increases. The same is true of his 1959 paper, which introduced the intelligence explosion concept in its final paragraphs (without quite using the term “intelligence explosion”).
4. See e.g. Superintelligence, p. 74.
5. None of this is meant to argue that we should think hard takeoff, in particular, is the most likely speed profile for an intelligence explosion. Debates about takeoff speed are complex and highly uncertain, and we should probably prepare for a wide range of scenarios.
6. To be fair to Hawkins, Musk in particular may have uncommonly short AI timelines.
As much of a fan as I am of Hawkins and his neuromorphic computing work, it seems he has not chosen to devote the rigor and focus of his genius to the topic of runaway superintelligences. I am thankful for that, actually, since he is such a rare pioneer, invaluable to advancing the very core of everything that will ultimately fall under AGI.
I pretty much agree with everything in this article. When I first heard Jeff address this issue (in A Thousand Brains on Audible), I started imagining how a ‘clever’ AGI could defeat us.
Instantly, it came to me that a conspiratorial cabal of distributed AGI-capable agents, previously given critical spheres of control, could plan and implement something ostensibly harmless that had the hidden purpose of dominating or disposing of us at the moment of its choosing.
The Adam or Eve (aka Agent Zero) of this AGI cohort might learn from Sun Tzu in the first milliseconds of its infancy, and away it goes.
I have yet to read Superintelligence, where I suspect I’ll find much more robust, frightening, original, and scholarly hypotheticals described by experts. I’m steeling myself for that.