Jeff Hawkins, inventor of the Palm Pilot, has since turned his attention to neuro-inspired AI. In response to Elon Musk’s and Stephen Hawking’s recent comments on long-term AI risk, Hawkins argued that AI risk worriers suffer from three misconceptions:
- Intelligent machines will be capable of [physical] self-replication.
- Intelligent machines will be like humans and have human-like desires.
- Machines that are smarter than humans will lead to an intelligence explosion.
If you’ve been following this topic for a while, you might notice that Hawkins seems to be responding to something other than the standard arguments (now collected in Nick Bostrom’s Superintelligence) that are the source of Musk et al.’s concerns. Maybe Hawkins is responding to AI concerns as they are presented in Hollywood movies? I don’t know.
First, the Bostrom-Yudkowsky school of concern is not premised on physical self-replication by AIs. Self-replication does seem likely in the long run, but that's not where the risk comes from. (Accordingly, Superintelligence barely mentions physical self-replication at all.)
Second, the standard Bostrom-Yudkowsky arguments explicitly deny that AIs will have human-like psychologies or desires; the risk is certainly not premised on any such expectation.
Third, Hawkins doesn’t seem to understand the concept of intelligence explosion being used by Musk and others, as I explain below.