Economist Robin Hanson is among the most informed critics of the plausibility of what he calls a “local” intelligence explosion. He’s written on the topic many times before (much of that writing is collected here), but here’s one more take, from Age of Em:
…some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its own mental architecture…
In a prototypical local explosion scenario, a single AI system with a supporting small team starts with resources that are tiny on a global scale. This team finds and then applies a big innovation in AI software architecture to its AI system, which allows this team plus AI combination to quickly find several related innovations. Together this innovation set allows this AI to quickly become more effective than the entire rest of the world put together at key tasks of theft or innovation.
That is, even though an entire world economy outside of this team, including other AIs, works to innovate, steal, and protect itself from theft, this one small AI team becomes vastly better at some combination of (1) stealing resources from others, and (2) innovating to make this AI “smarter,” in the sense of being better able to do a wide range of mental tasks given fixed resources. As a result of being better at these things, this AI quickly grows the resources that it controls and becomes more powerful than the entire rest of the world economy put together, and so it takes over the world. And all this happens within a space of days to months.
Advocates of this explosion scenario believe that there exists an as-yet-undiscovered but very powerful architectural innovation set for AI system design, a set that one team could find first and then keep secret from others for long enough. In support of this belief, advocates point out that humans (1) can do many mental tasks, (2) beat out other primates, (3) have a common IQ factor explaining correlated abilities across tasks, and (4) display many reasoning biases. Advocates also often assume that innovation is vastly underfunded today, that most economic progress comes from basic research progress produced by a few key geniuses, and that the modest wage gains that smarter people earn today vastly underestimate their productivity in key tasks of theft and AI innovation. In support, advocates often point to familiar myths of geniuses revolutionizing research areas and weapons.
Honestly, to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa-ha-ha.
Many arguments suggest that this scenario is unlikely (Hanson and Yudkowsky 2013). Specifically, (1) in 60 years of AI research high-level architecture has only mattered modestly for system performance, (2) new AI architecture proposals are increasingly rare, (3) algorithm progress seems driven by hardware progress (Grace 2013), (4) brains seem like ecosystems, bacteria, cities, and economies in being very complex systems where architecture matters less than a mass of capable detail, (5) human and primate brains seem to differ only modestly, (6) the human primate difference initially only allowed faster innovation, not better performance directly, (7) humans seem to have beat other primates mainly via culture sharing, which has a plausible threshold effect and so doesn’t need much brain difference, (8) humans are bad at most mental tasks irrelevant for our ancestors, (9) many human “biases” are useful adaptations to social complexity, (10) human brain structure and task performance suggest that many distinct modules contribute on each task, explaining a common IQ factor (Hampshire et al. 2012), (11) we expect very smart AI to still display many biases, (12) research today may be underfunded, but not vastly so (Alston et al. 2011; Ulku 2004), (13) most economic progress does not come from basic research, (14) most research progress does not come from a few geniuses, and (15) intelligence is not vastly more productive for research than for other tasks.
(And yes, the entire book is roughly this succinct and dense with ideas.)
I think there is a problem with how he uses his citations, though. I understand the need to be succinct rather than unfold the epistemic justification of every piece of evidence, but it grates to see such rapid-fire strong assertions (especially those in the “to be” verb form) when reviewing the underlying citations does not present nearly as strong a picture.
Definitely agree!
I rather like the Tractatean density of this excerpt. When I read papers, I often wind up thinking “this could be condensed a lot”.
Robin’s conclusion in this case, which might be nutshelled as “No AI Foom,” is likely right, but his style of argumentation leads him into what I consider some very shaky conclusions — for example, the entire premise of Age of Em: that AI is unlikely in the coming century but that brain transcription and emulation are likely.
From my forthcoming book:

“A decade ago in Beyond AI, I took up cudgels against those who were predicting a major takeoff in artificial intelligence by virtue of a self-improving super-AI. Long before that happens, I said, we will see something a bit more mundane but perfectly effective: AI will start to work, and people will realize it, and lots of money, talent, and resources will pour into the field:

“‘… it might affect AI like the Wright brothers’ Paris demonstrations of their flying machine did a century ago. After ignoring their successful first flight for years, the scientific community finally acknowledged it; and aviation went from a screwball hobby to the rage of the age, and kept that cachet for decades. In particular, the amount of development effort took off enormously.’

“That will produce an acceleration of results, which will attract more money, and there’s your feedback loop. The amount of money going into aviation before 1910 was essentially nil (Langley’s grants to the contrary notwithstanding). Once people caught on that airplanes really worked, though, there was a sensation and a boom. By the end of the 1920s, Pan American was flying scheduled international flights in the 8-passenger Ford Tri-motor. The ensuing exponential growth in capabilities, continuing unabated right up to the Sixties, was very much part of the zeitgeist of the ‘future we were promised.’
“Something of the kind appears to be happening in AI now.”
See also “The coming AI phase change” at my blog.