Expertise vs. intelligence and rationality

When you’re not sure what to think about something, or what to do in a certain situation, do you instinctively turn to a successful domain expert, or to someone you know who seems generally very smart?

I think most people don’t respect individual differences in intelligence and rationality enough. But some people in my local community tend to exhibit the opposite failure mode. They put too much weight on a person’s signals of explicit rationality (“Are they Bayesian?”), and place too little weight on domain expertise (and the domain-specific tacit rationality that often comes with it).

This comes up pretty often during my work for MIRI. We’re considering how to communicate effectively with academics, or how to win grants, or how to build a team of researchers, and some people (not necessarily MIRI staff) will tend to lean heavily on the opinions of the most generally smart people they know, even though those smart people have no demonstrated expertise or success on the issue being considered. In contrast, I usually collect the opinions of some smart people I know, and then mostly just do what people with a long track record of success on the issue say to do. And that dumb heuristic seems to work pretty well.

Yes, there are nuanced judgment calls I have to make about who has expertise on what, exactly, and whether MIRI’s situation is sufficiently analogous for the expert’s advice to work at MIRI. And I must be careful to distinguish credentials-expertise from success-expertise (aka RSPRT-expertise). And this process doesn’t work for decisions on which there are no success-experts, like long-term AI forecasting. But I think it’s easier for smart people to overestimate their ability to model problems outside their domains of expertise, and easier to underestimate all the subtle things domain experts know, than vice-versa.

Will AGI surprise the world?

Yudkowsky writes:

In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in which you reason, “After event W happens, everyone will see the truth of proposition X, leading them to endorse Y and agree with me about policy decision Z.”

Example 2: “As AI gets more sophisticated, everyone will realize that real AI is on the way and then they’ll start taking Friendly AI development seriously.”

Alternative projection: As AI gets more sophisticated, the rest of society can’t see any difference between the latest breakthrough reported in a press release and that business earlier with Watson beating Ken Jennings or Deep Blue beating Kasparov; it seems like the same sort of press release to them. The same people who were talking about robot overlords earlier continue to talk about robot overlords. The same people who were talking about human irreproducibility continue to talk about human specialness. Concern is expressed over technological unemployment the same as today or Keynes in 1930, and this is used to fuel someone’s previous ideological commitment to a basic income guarantee, inequality reduction, or whatever. The same tiny segment of unusually consequentialist people are concerned about Friendly AI as before. If anyone in the science community does start thinking that superintelligent AI is on the way, they exhibit the same distribution of performance as modern scientists who think it’s on the way, e.g. Hugo de Garis, Ben Goertzel, etc.

My own projection goes more like this:

As AI gets more sophisticated, and as more prestigious AI scientists begin to publicly acknowledge that AI is plausibly only 2-6 decades away, policy-makers and research funders will begin to respond to the AGI safety challenge, just like they began to respond to CFC damages in the late 70s, to global warming in the late 80s, and to synbio developments in the 2010s. As for society at large, I dunno. They’ll think all kinds of random stuff for random reasons, and in some cases this will seriously impede effective policy, as it does in the USA for science education and immigration reform. Because AGI lends itself to arms races and is harder to handle adequately than global warming or nuclear security are, policy-makers and industry leaders will generally know AGI is coming but be unable to fund the needed efforts and coordinate effectively enough to ensure good outcomes.

At least one clear difference between my projection and Yudkowsky’s is that I expect AI-expert performance on the problem to improve substantially as a greater fraction of elite AI scientists begin to think about the issue in Near mode rather than Far mode.

As a friend of mine suggested recently, current elite awareness of the AGI safety challenge is roughly where elite awareness of the global warming challenge was in the early 80s. Except, I expect elite acknowledgement of the AGI safety challenge to spread more slowly than it did for global warming or nuclear security, because AGI is tougher to forecast in general, and involves trickier philosophical nuances. (Nobody was ever tempted to say, “But as the nuclear chain reaction grows in power, it will necessarily become more moral!”)

Still, there is a worryingly non-negligible chance that AGI explodes “out of nowhere.” Sometimes important theorems are proved suddenly after decades of failed attempts by other mathematicians, and sometimes a computational procedure is sped up by 20 orders of magnitude with a single breakthrough.

Some alternatives to “Friendly AI”

What does MIRI’s research program study?

The most established term for this was coined by MIRI founder Eliezer Yudkowsky: “Friendly AI.” The term has some advantages, but it might suggest that MIRI is trying to build C-3PO, and it sounds a bit whimsical for a serious research program.

What about safe AGI or AGI safety? These terms are probably easier to interpret than Friendly AI. Also, people like being safe, and governments like saying they’re funding initiatives to keep the public safe.

A friend of mine worries that these terms could provoke a defensive response (in AI researchers) of “Oh, so you think me and everybody else in AI is working on unsafe AI?” But I’ve never actually heard that response to “AGI safety” in the wild, and AI safety researchers regularly discuss “software system safety” and “AI safety” and “agent safety” and more specific topics like “safe reinforcement learning” without provoking negative reactions from people doing regular AI research.

I’m more worried that a term like “safe AGI” could provoke a response of “So you’re trying to make sure that a system which is smarter than humans, and able to operate in arbitrary real-world environments, and able to invent new technologies to achieve its goals, will be safe? Let me save you some time and tell you right now that’s impossible. Your research program is a pipe dream.”

My reply goes something like “Yeah, it’s way beyond our current capabilities, but lots of things that once looked impossible are now feasible because people worked really hard on them for a long time, and we don’t think we can get the whole world to promise never to build AGI just because it’s hard to make safe, so we’re going to give AGI safety a solid try for a few decades and see what can be discovered.” But that’s probably not all that reassuring.

The Antikythera Mechanism

From Murray’s Human Accomplishment:

The problem with the standard archaeological account of human accomplishment from [the ancient world] is not that the picture is incomplete (which is inevitable), but that the data available to us leave so many puzzles.

The Antikythera Mechanism is a case in point… The Antikythera Mechanism is a bronze device about the size of a brick. It was recovered in 1901 from the wreck of a trading vessel that had sunk near the southern tip of Greece sometime around –65. Upon examination, archaeologists were startled to discover imprints of gears in the corroded metal. So began a half-century of speculation about what purpose the device might have served.

Finally, in 1959, science historian Derek de Solla Price figured it out: the Antikythera Mechanism was a mechanical device for calculating the positions of the sun and moon. A few years later, improvements in archaeological technology led to gamma radiographs of the Mechanism, revealing 22 gears in four layers, capable of simulating several major solar and lunar cycles, including the 19-year Metonic cycle that brings the phases of the moon back to the same calendar date. What made this latter feat especially astonishing was not just that the Mechanism could reproduce the 235 lunations in the Metonic cycle, but that it used a differential gear to do so. Until then, it was thought that the differential gear had been invented in 1575.

See also Wikipedia.
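A quick arithmetic check of the Metonic coincidence the Mechanism exploits (using standard mean values — my figures, not Murray’s — of roughly 365.2422 days per tropical year and 29.5306 days per synodic month):

19 years × 365.2422 days ≈ 6,939.60 days
235 lunations × 29.5306 days ≈ 6,939.69 days

The two cycles agree to within a few hours over 19 years, which is why the moon’s phases return to (nearly) the same calendar dates.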

Stanovich on intelligence enhancement

From Stanovich’s popular book on the distinction between rationality and intelligence (p. 196):

In order to illustrate the oddly dysfunctional ways that rationality is devalued in comparison to intelligence…  Baron asks us to imagine what would happen if we were able to give everyone an otherwise harmless drug that increased their algorithmic-level cognitive capacities (for example, discrimination speed, working memory capacity, decoupling ability) — in short, that increased their intelligence…

Imagine that everyone in North America took the pill before retiring and then woke up the next morning with more memory capacity and processing speed. Both Baron and I believe that there is little likelihood that much would change the next day in terms of human happiness. It is very unlikely that people would be better able to fulfill their wishes and desires the day after taking the pill. In fact, it is quite likely that people would simply go about their usual business — only more efficiently. If given more memory capacity and processing speed, people would, I believe: carry on using the same ineffective medical treatments because of failure to think of alternative causes (Chapter 10); keep making the same poor financial decisions because of overconfidence (Chapter 8); keep misjudging environmental risks because of vividness (Chapter 6); play host to the contaminated mindware of Ponzi and pyramid schemes (Chapter 11); be wrongly influenced in their jury decisions by incorrect testimony about probabilities (Chapter 10); and continue making many other of the suboptimal decisions described in earlier chapters. The only difference would be that they would be able to do all of these things much more quickly!

This is part of why it’s not obvious to me that radical intelligence amplification (e.g. via IES) would increase rather than decrease our odds of surviving future powerful technologies.

Elsewhere (p. 171), Stanovich notes:

Mensa is a club restricted to high-IQ individuals, and one must pass IQ-type tests to be admitted. Yet 44 percent of the members of this club believed in astrology, 51 percent believed in biorhythms, and 56 percent believed in the existence of extraterrestrial visitors — all beliefs for which there is not a shred of evidence.

Nicely put, FHI

Re-reading Ross Andersen’s piece on Nick Bostrom and FHI for Aeon magazine, I was struck by several nicely succinct explanations given by FHI researchers — ones which I’ll be borrowing for my own conversations with people about these topics:

“There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,” Bostrom told me. “By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilisation startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.”

“Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears,” Dewey told me. “But we’re terrible at [many other things]… Think about how long it took humans to arrive at the idea of natural selection. The ancient Greeks had everything they needed to figure it out. They had heritability, limited resources, reproduction and death. But it took thousands of years for someone to put it together. If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.”

“The difference in intelligence between humans and chimpanzees is tiny,” [Armstrong] said. “But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.”

“The basic problem is that the strong realisation of most motivations is incompatible with human existence,” Dewey told me. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”

[Bostrom] told me that when he was younger, he was more interested in the traditional philosophical questions… “But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,” he said. “Some of them have been with us for thousands of years. It’s unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.”

Why Engines before Nanosystems?

After Drexler published his 1981 nanotech paper in PNAS, and after it received some positive followups in Nature and in Science in 1983, why did Drexler next write a popular book like Engines of Creation (1986) instead of a technical account like Nanosystems (1992)? Ed Regis writes in Nano (p. 118):

The logical next step for Drexler… was to produce a full-blown account of his molecular-engineering scheme, a technical document that fleshed out the whole story in chapter and verse, with all the technical details. That was the obvious thing to do, anyway, if he wanted to convince the greater science and engineering world that molecular engineering was a real prospect and not just his own private fantasy.

… Drexler instead did something else, spending the next four years, essentially, writing a popular account of the subject in his book, Engines of Creation.

For a dyed-in-the-wool engineer such as himself, this was somewhat puzzling. Why go public with a scheme as wild and woolly as this one before the technical details were even passably well worked out? Why paint vivid word pictures of “the coming era of nanotechnology” before even so much as one paltry designer protein had been coaxed, tricked, or forced into existence? Why not nail down an ironclad scientific case for the whole thing first, and only then proceed to advertise its benefits?

Of course, there were answers. For one thing, Drexler was convinced that he’d already done enough in his PNAS piece to motivate a full course of research-and-development work in academia and industry. After all, he’d described what was possible at the molecular level and by what means, and he’d said what some of the benefits were. How could a bunch of forward-looking researchers, seeing all this, not go ahead and actually do it?…

The other reason for writing a popular book on the subject was to raise some of the economic and social issues involved. Scientists and engineers, it was commonly observed, did not have an especially good track record when it came to assessing the wider impact of what they’d wrought in the lab. Their attitude seemed to be: “We invent it, you figure out what to do with it.”

To Drexler, that was the height of social irresponsibility, particularly where nanotechnology was concerned, because its impacts would be so broad and sweeping…

If anything was clear to Eric Drexler, it was that if the human race was to survive the transition to the nanotech era, it would have to do a bit of thinking beforehand. He’d have to write the book on this because, all too obviously, nobody else was about to.

But there was yet a third reason for writing Engines of Creation, a reason that was, for Drexler, probably the strongest one of all. This was to announce to the world at large that the issue of “limits” [from Limits to Growth] had been addressed head-on…

It’s hard to contain information hazards

Laurie Garrett’s Foreign Affairs piece on synbio from a while back exaggerates the state of current progress, but it also contains some good commentary on the difficulty of containing hazardous materials when those materials — unlike nuclear fissile materials — are essentially information:

Fouchier and Kawaoka drew the wrath of many national security and public health experts, who demanded to know how the deliberate creation of potential pandemic flu strains could possibly be justified… the National Science Advisory Board for Biosecurity… [ordered] that the methods used to create these new mammalian forms of H5N1 never be published. “It’s not clear that these particular [experiments] have created something that would destroy the world; maybe it’ll be the next set of experiments that will be critical,” [Paul] Keim told reporters. “And that’s what the world discussion needs to be about.”

In the end, however, the December 2011 do-not-publish decision… was reversed… [and] both papers were published in their entirety by Science and Nature in 2012, and [the] temporary moratorium on dual-use research on influenza viruses was eventually lifted… Osterholm, Keim, and most of the vocal opponents of the work retreated, allowing the advisory board to step back into obscurity.

… What stymies the very few national security and law enforcement experts closely following this biological revolution is the realization that the key component is simply information. While virtually all current laws in this field, both local and global, restrict and track organisms of concern (such as, say, the Ebola virus), tracking information is all but impossible. Code can be buried anywhere — al Qaeda operatives have hidden attack instructions inside porn videos, and a seemingly innocent tweet could direct readers to an obscure Internet location containing genomic code ready to be downloaded to a 3-D printer. Suddenly, what started as a biology problem has become a matter of information security.

See also Bostrom, “Information Hazards” (2011).

MIRI’s original environmental policy

Somehow MIRI’s mission comes in at #10 on this list of 10 responses to the technological unemployment problem.

I suppose technically, Friendly AI is a solution for all the things. :)

This reminds me of the first draft of MIRI’s environmental policy, which read:

[MIRI] exists to ensure that the creation of smarter than human intelligence benefits society. Because societies depend on their environment to thrive, one implication of our core mission is a drive to ensure that when advanced intelligence technologies become available, they are used to secure the continued viability and resilience of the environment.

Many advanced artificial intelligences (AIs) will have instrumental goals to capture as many resources as possible for their own use, because resources are useful for a broad range of possible AI goals. To ensure that Earth’s resources are used wisely despite the creation of advanced AIs, we must discover how to design these AIs so that they can be given final goals which accord with humane values.

Though poorly designed AIs may pose a risk to the resources and environment on which humanity depends, more carefully designed AIs may be our best solution to long-term environmental concerns. To whatever extent we have goals for environmental sustainability, they are goals that can be accomplished to greater degrees using sufficiently advanced intelligence.

To prevent environmental disasters caused by poorly designed AIs, and to ensure that we one day have the intelligence needed to solve our current environmental dilemmas, [MIRI] is committed to discovering the principles of safe, beneficial AI that will one day allow us all to safeguard our environment as well as our future.

In the end, though, we decided to go with a more conventional (super-boring) environmental policy, available here.

Assorted links