Stanovich on intelligence enhancement

From Stanovich’s popular book on the distinction between rationality and intelligence (p. 196):

In order to illustrate the oddly dysfunctional ways that rationality is devalued in comparison to intelligence…  Baron asks us to imagine what would happen if we were able to give everyone an otherwise harmless drug that increased their algorithmic-level cognitive capacities (for example, discrimination speed, working memory capacity, decoupling ability) — in short, that increased their intelligence…

Imagine that everyone in North America took the pill before retiring and then woke up the next morning with more memory capacity and processing speed. Both Baron and I believe that there is little likelihood that much would change the next day in terms of human happiness. It is very unlikely that people would be better able to fulfill their wishes and desires the day after taking the pill. In fact, it is quite likely that people would simply go about their usual business, only more efficiently. If given more memory capacity and processing speed, people would, I believe: carry on using the same ineffective medical treatments because of failure to think of alternative causes (Chapter 10); keep making the same poor financial decisions because of overconfidence (Chapter 8); keep misjudging environmental risks because of vividness (Chapter 6); play host to the contaminated mindware of Ponzi and pyramid schemes (Chapter 11); be wrongly influenced in their jury decisions by incorrect testimony about probabilities (Chapter 10); and continue making many other of the suboptimal decisions described in earlier chapters. The only difference would be that they would be able to do all of these things much more quickly!

This is part of why it’s not obvious to me that radical intelligence amplification (e.g. via iterated embryo selection) would increase rather than decrease our odds of surviving future powerful technologies.

Elsewhere (p. 171), Stanovich notes:

Mensa is a club restricted to high-IQ individuals, and one must pass IQ-type tests to be admitted. Yet 44 percent of the members of this club believed in astrology, 51 percent believed in biorhythms, and 56 percent believed in the existence of extraterrestrial visitors, all beliefs for which there is not a shred of evidence.

Nicely put, FHI

Re-reading Ross Andersen’s piece on Nick Bostrom and FHI for Aeon magazine, I was struck by several nicely succinct explanations given by FHI researchers — ones which I’ll be borrowing for my own conversations with people about these topics:

“There is a concern that civilisations might need a certain amount of easily accessible energy to ramp up,” Bostrom told me. “By racing through Earth’s hydrocarbons, we might be depleting our planet’s civilisation startup-kit. But, even if it took us 100,000 years to bounce back, that would be a brief pause on cosmic time scales.”

“Human brains are really good at the kinds of cognition you need to run around the savannah throwing spears,” Dewey told me. “But we’re terrible at [many other things]… Think about how long it took humans to arrive at the idea of natural selection. The ancient Greeks had everything they needed to figure it out. They had heritability, limited resources, reproduction and death. But it took thousands of years for someone to put it together. If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.”

“The difference in intelligence between humans and chimpanzees is tiny,” [Armstrong] said. “But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.”

“The basic problem is that the strong realisation of most motivations is incompatible with human existence,” Dewey told me. “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.”

[Bostrom] told me that when he was younger, he was more interested in the traditional philosophical questions… “But then there was this transition, where it gradually dawned on me that not all philosophical questions are equally urgent,” he said. “Some of them have been with us for thousands of years. It’s unlikely that we are going to make serious progress on them in the next ten. That realisation refocused me on research that can make a difference right now. It helped me to understand that philosophy has a time limit.”

Why Engines before Nanosystems?

After Drexler published his 1981 nanotech paper in PNAS, and after it received some positive follow-ups in Nature and Science in 1983, why did he next write a popular book like Engines of Creation (1986) instead of a technical account like Nanosystems (1992)? Ed Regis writes in Nano (p. 118):

The logical next step for Drexler… was to produce a full-blown account of his molecular-engineering scheme, a technical document that fleshed out the whole story in chapter and verse, with all the technical details. That was the obvious thing to do, anyway, if he wanted to convince the greater science and engineering world that molecular engineering was a real prospect and not just his own private fantasy.

… Drexler instead did something else, spending the next four years, essentially, writing a popular account of the subject in his book, Engines of Creation.

For a dyed-in-the-wool engineer such as himself, this was somewhat puzzling. Why go public with a scheme as wild and woolly as this one before the technical details were even passably well worked out? Why paint vivid word pictures of “the coming era of nanotechnology” before even so much as one paltry designer protein had been coaxed, tricked, or forced into existence? Why not nail down an ironclad scientific case for the whole thing first, and only then proceed to advertise its benefits?

Of course, there were answers. For one thing, Drexler was convinced that he’d already done enough in his PNAS piece to motivate a full course of research-and-development work in academia and industry. After all, he’d described what was possible at the molecular level and by what means, and he’d said what some of the benefits were. How could a bunch of forward-looking researchers, seeing all this, not go ahead and actually do it?…

The other reason for writing a popular book on the subject was to raise some of the economic and social issues involved. Scientists and engineers, it was commonly observed, did not have an especially good track record when it came to assessing the wider impact of what they’d wrought in the lab. Their attitude seemed to be: “We invent it, you figure out what to do with it.”

To Drexler, that was the height of social irresponsibility, particularly where nanotechnology was concerned, because its impacts would be so broad and sweeping…

If anything was clear to Eric Drexler, it was that if the human race was to survive the transition to the nanotech era, it would have to do a bit of thinking beforehand. He’d have to write the book on this because, all too obviously, nobody else was about to.

But there was yet a third reason for writing Engines of Creation, a reason that was, for Drexler, probably the strongest one of all. This was to announce to the world at large that the issue of “limits” [from Limits to Growth] had been addressed head-on…

It’s hard to contain information hazards

Laurie Garrett’s Foreign Affairs piece on synbio from a while back exaggerates the state of current progress, but it also contains some good commentary on the difficulty of containing hazardous materials when those hazardous materials — unlike the case of nuclear fissile materials — are essentially information:

Fouchier and Kawaoka drew the wrath of many national security and public health experts, who demanded to know how the deliberate creation of potential pandemic flu strains could possibly be justified… the National Science Advisory Board for Biosecurity… [ordered] that the methods used to create these new mammalian forms of H5N1 never be published. “It’s not clear that these particular [experiments] have created something that would destroy the world; maybe it’ll be the next set of experiments that will be critical,” [Paul] Keim told reporters. “And that’s what the world discussion needs to be about.”

In the end, however, the December 2011 do-not-publish decision… was reversed… [and] both papers were published in their entirety by Science and Nature in 2012, and [the] temporary moratorium on dual-use research on influenza viruses was eventually lifted… Osterholm, Keim, and most of the vocal opponents of the work retreated, allowing the advisory board to step back into obscurity.

… What stymies the very few national security and law enforcement experts closely following this biological revolution is the realization that the key component is simply information. While virtually all current laws in this field, both local and global, restrict and track organisms of concern (such as, say, the Ebola virus), tracking information is all but impossible. Code can be buried anywhere — al Qaeda operatives have hidden attack instructions inside porn videos, and a seemingly innocent tweet could direct readers to an obscure Internet location containing genomic code ready to be downloaded to a 3-D printer. Suddenly, what started as a biology problem has become a matter of information security.

See also Bostrom, “Information Hazards” (2011).

MIRI’s original environmental policy

Somehow MIRI’s mission comes in at #10 on this list of 10 responses to the technological unemployment problem.

I suppose technically, Friendly AI is a solution for all the things. :)

This reminds me of the first draft of MIRI’s environmental policy, which read:

[MIRI] exists to ensure that the creation of smarter-than-human intelligence benefits society. Because societies depend on their environment to thrive, one implication of our core mission is a drive to ensure that when advanced intelligence technologies become available, they are used to secure the continued viability and resilience of the environment.

Many advanced artificial intelligences (AIs) will have instrumental goals to capture as many resources as possible for their own use, because resources are useful for a broad range of possible AI goals. To ensure that Earth’s resources are used wisely despite the creation of advanced AIs, we must discover how to design these AIs so that they can be given final goals which accord with humane values.

Though poorly designed AIs may pose a risk to the resources and environment on which humanity depends, more carefully designed AIs may be our best solution to long-term environmental concerns. To whatever extent we have goals for environmental sustainability, they are goals that can be accomplished to greater degrees using sufficiently advanced intelligence.

To prevent environmental disasters caused by poorly designed AIs, and to ensure that we one day have the intelligence needed to solve our current environmental dilemmas, [MIRI] is committed to discovering the principles of safe, beneficial AI that will one day allow us all to safeguard our environment as well as our future.

In the end, though, we decided to go with a more conventional (super-boring) environmental policy, available here.

Another Cold War close call

From The Limits of Safety (p. 1):

On the night of October 25, 1962, an air force sentry was patrolling the perimeter of a military base near Duluth, Minnesota. It was the height of the Cuban missile crisis, and nuclear-armed bombers and interceptor aircraft, parked on air base runways and at commercial airports throughout the United States, were alert and ready for war. The sentry spotted someone climbing the fence, shot at the figure, and sounded the sabotage alarm. At airfields throughout the region, alarms went off, and armed guards rushed into the cold night to prevent Soviet agents from sabotaging U.S. nuclear forces.

At Volk Field in Wisconsin, however, the wrong alarm bell rang: the Klaxon signaling that nuclear war had begun went off. Pilots ran to their nuclear-armed interceptors and started the engines. These men had been told that there would be no practice alert drills during the tense crisis, and they fully believed that a nuclear war was starting as they headed down the runway. Fortunately, the base commander contacted Duluth before the planes took off and discovered what had happened. An officer in the command post immediately drove his car onto the runway, flashing his lights and signaling the interceptors. The pilots saw him and stopped their aircraft. The suspected Soviet saboteur that caused the whole incident was, ironically, a bear.

Two innovative strategies in sports

From Gladwell’s David and Goliath:

A regulation basketball court is ninety-four feet long. Most of the time, a team would defend only about twenty-four feet of that, conceding the other seventy feet. Occasionally teams played a full-court press—that is, they contested their opponent’s attempt to advance the ball up the court. But they did it for only a few minutes at a time. It was as if there were a kind of conspiracy in the basketball world about the way the game ought to be played, Ranadivé thought, and that conspiracy had the effect of widening the gap between good teams and weak teams. Good teams, after all, had players who were tall and could dribble and shoot well; they could crisply execute their carefully prepared plays in their opponent’s end. Why, then, did weak teams play in a way that made it easy for good teams to do the very things that they were so good at?

Ranadivé looked at his girls. Morgan and Julia were serious basketball players. But Nicky, Angela, Dani, Holly, Annika, and his own daughter, Anjali, had never played the game before. They weren’t all that tall. They couldn’t shoot. They weren’t particularly adept at dribbling. They were not the sort who played pickup games at the playground every evening. Ranadivé lives in Menlo Park, in the heart of California’s Silicon Valley. His team was made up of, as Ranadivé put it, “little blond girls.” These were the daughters of nerds and computer programmers. They worked on science projects and read long and complicated books and dreamed about growing up to be marine biologists. Ranadivé knew that if they played the conventional way—if they let their opponents dribble the ball up the court without opposition—they would almost certainly lose to the girls for whom basketball was a passion. Ranadivé had come to America as a seventeen-year-old with fifty dollars in his pocket. He was not one to accept losing easily. His second principle, then, was that his team would play a real full-court press—every game, all the time. The team ended up at the national championships. “It was really random,” Anjali Ranadivé said. “I mean, my father had never played basketball before.”

From Brafman’s Sway:

The most flattering way to describe the Gator [football] team upon Spurrier’s arrival in 1990 was as a “fixer-upper.” The team had never won a conference title; in fact, it was on probation because of allegations of rule violations by the team’s former coach.

… Spurrier’s most important move was to identify a weak spot in the strategy employed by his opponents. For years the teams in the conference had adhered to a “war of attrition” game strategy: they called conservative plays and held on to the ball for as long as they could, hoping to win a defensive battle…

… Spurrier came to dominate the conference by… introducing what he called the “Fun-n-Gun” approach…

Spurrier mixed things up with a generous helping of “big chance plays, where you got to give your players a shot.” In other words, Spurrier’s team passed more often, played more aggressively, and tried to score more touchdowns.

… Spurrier gained an advantage because the other coaches were focused on trying to avoid a potential loss. Think of what it’s like to be a college football coach. As you walk around town, passing fans offer themselves up as instant experts on the game — never afraid to give you a piece of their minds on what you did wrong in yesterday’s match-up. You make one bad move and you get skewered by fans and commentators alike. Meanwhile, ticket sale revenues, your school’s alumni fundraising, and your job all depend heavily on the football team’s success. All of that pressure adds up… the losses loom large…

You’d have thought that after losing a few games to a team like [the Gators]… the [other] coaches would have reevaluated their war-of-attrition model. But they didn’t. And so Spurrier and his Gators continued to dominate former powerhouses like Alabama, Tennessee, and Auburn. Over the next six years, the coach and his team went on to win four division titles, culminating in the national championship.

Feynman on dealing with nanotechnology risks

Nano (p. 113) quotes Eric Drexler describing the time he first met Richard Feynman at a party:

We talked about the PNAS article [on nanotechnology], and generally he indicated that, yeah, this was a sensible thing… at one point I was talking about the need for institutions to handle some of the problems [nanotechnology] raised, and [Feynman] remarked [that] institutions were made up of people and therefore of fools.

Feynman sounds downright Yudkowskian on this point, if you ask me. :)