Chomsky on the Peters-Finkelstein affair

More Chomsky, again from Understanding Power (footnotes also reproduced):

Here’s a story which is really tragic… There was this best-seller a few years ago [in 1984], it went through about ten printings, by a woman named Joan Peters… called From Time Immemorial. It was a big scholarly-looking book with lots of footnotes, which purported to show that the Palestinians were all recent immigrants [i.e. to the Jewish-settled areas of the former Palestine, during the British mandate years of 1920 to 1948]. And it was very popular — it got literally hundreds of rave reviews, and no negative reviews: the Washington Post, the New York Times, everybody was just raving about it. Here was this book which proved that there were really no Palestinians! Of course, the implicit message was, if Israel kicks them all out there’s no moral issue, because they’re just recent immigrants who came in because the Jews had built up the country. And there was all kinds of demographic analysis in it, and a big professor of demography at the University of Chicago [Philip M. Hauser] authenticated it. That was the big intellectual hit for that year: Saul Bellow, Barbara Tuchman, everybody was talking about it as the greatest thing since chocolate cake.

[Read more…]

Car-hacking

From Wired:

[Remote-controlling a Jeep] is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element… Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country.
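The key architectural point is that once the head unit is exposed on the carrier's cellular network, an attacker needs nothing more exotic than the car's IP address to start probing it. As a loose illustration (not the researchers' actual tooling; the address and port below are made-up placeholders), checking whether some networked service answers at a given address takes only a few lines of Python:

```python
# Loose illustration only: the IP and port are placeholders, not the real
# Uconnect service details, and this is not the Wired researchers' tooling.
import socket

def service_reachable(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP service accepts a connection at ip:port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # 203.0.113.0/24 is a reserved documentation range, standing in here for
    # "some head unit's address on the carrier network"; 12345 is a dummy port.
    print(service_reachable("203.0.113.42", 12345))
```

Presumably a check like this, repeated across a carrier's address space, is how you get from "one vulnerable car" to "hundreds of thousands of vulnerable cars"; everything scarier follows from what the exposed service lets you do once connected.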

Schlosser on nuclear security

Some quotes from Schlosser’s Command and Control.

#1:

On January 23, 1961, a B-52 bomber took off from Seymour Johnson Air Force Base in Goldsboro, North Carolina, for an airborne alert… [Near] midnight… the boom operator of [a refueling] tanker noticed fuel leaking from the B-52’s right wing. Spray from the leak soon formed a wide plume, and within two minutes about forty thousand gallons of jet fuel had poured from the wing. The command post at Seymour Johnson told the pilot, Major Walter S. Tulloch, to dump the rest of the fuel in the ocean and prepare for an emergency landing. But fuel wouldn’t drain from the tank inside the left wing, creating a weight imbalance. At half past midnight, with the flaps down and the landing gear extended, the B-52 went into an uncontrolled spin…

The B-52 was carrying two Mark 39 hydrogen bombs, each with a yield of 4 megatons. As the aircraft spun downward, centrifugal forces pulled a lanyard in the cockpit. The lanyard was attached to the bomb release mechanism. When the lanyard was pulled, the locking pins were removed from one of the bombs. The Mark 39 fell from the plane. The arming wires were yanked out, and the bomb responded as though it had been deliberately released by the crew above a target. The pulse generator activated the low-voltage thermal batteries. The drogue parachute opened, and then the main chute. The barometric switches closed. The timer ran out, activating the high-voltage thermal batteries. The bomb hit the ground, and the piezoelectric crystals inside the nose crushed. They sent a firing signal. But the weapon didn’t detonate.

Every safety mechanism had failed, except one: the ready/safe switch in the cockpit. The switch was in the SAFE position when the bomb dropped. Had the switch been set to GROUND or AIR, the X-unit would’ve charged, the detonators would’ve triggered, and a thermonuclear weapon would have exploded in a field near Faro, North Carolina…

The other Mark 39 plummeted straight down and landed in a meadow just off Big Daddy’s Road, near the Nahunta Swamp. Its parachutes had failed to open. The high explosives did not detonate, and the primary was largely undamaged…

The Air Force assured the public that the two weapons had been unarmed and that there was never any risk of a nuclear explosion. Those statements were misleading. The T-249 control box and ready/safe switch, installed in every one of SAC’s bombers, had already raised concerns at Sandia. The switch required a low-voltage signal of brief duration to operate — and that kind of signal could easily be provided by a stray wire or a short circuit, as a B-52 full of electronic equipment disintegrated midair.

A year after the North Carolina accident, a SAC ground crew removed four Mark 28 bombs from a B-47 bomber and noticed that all of the weapons were armed. But the seal on the ready/safe switch in the cockpit was intact, and the knob hadn’t been turned to GROUND or AIR. The bombs had not been armed by the crew. A seven-month investigation by Sandia found that a tiny metal nut had come off a screw inside the plane and lodged against an unused radar-heating circuit. The nut had created a new electrical pathway, allowing current to reach an arming line — and bypass the ready/safe switch. A similar glitch on the B-52 that crashed near Goldsboro would have caused a 4-megaton thermonuclear explosion. “It would have been bad news — in spades,” Parker F. Jones, a safety engineer at Sandia, wrote in a memo about the accident. “One simple, dynamo-technology, low-voltage switch stood between the United States and a major catastrophe!”

[Read more…]

Chomsky on overthrowing third world governments

Noam Chomsky is worth reading because he’s an articulate, well-informed, sources-citing defender of unconventional views rarely encountered in mainstream venues. It’s hard for me to evaluate his views because he isn’t very systematic in his presentations of evidence for his core political theses — but then, hardly anybody is. But whether his views are fair or not, I think it’s good to stick my head outside the echo chamber regularly.

Personally, I’m most interested in his perspectives on plutocracy, international relations, and state violence. On those topics, Understanding Power (+450 pages of footnotes) is a pretty good introduction to his views.

To give you a feel for the book, I’ll excerpt a passage from chapter 1 of Understanding Power about overthrowing third world governments. I’ve also reproduced the (renumbered) footnotes for this passage.

[Read more…]

Feinstein on the global arms trade

Some quotes from Feinstein’s The Shadow World: Inside the Global Arms Trade.

#1:

The £75m Airbus, painted in the colours of the [Saudi] Prince’s beloved Dallas Cowboys, was a gift from the British arms company BAE Systems. It was a token of gratitude for the Prince’s role, as son of the country’s Defence Minister, in the biggest arms deal the world has seen. The Al Yamamah – ‘the dove’ – deal signed between the United Kingdom and Saudi Arabia in 1985 was worth over £40bn. It was also arguably the most corrupt transaction in trading history. Over £1bn was paid into accounts controlled by Bandar. The Airbus – maintained and operated by BAE at least until 2007 – was a little extra, presented to Bandar on his birthday in 1988.

A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.

[Read more…]

Some books I’m looking forward to, July 2015 edition

Audio music explainers

If I had a lot more time, and the licenses to reproduce extended excerpts from tons of recorded music, the ideal versions of my beginner’s guides to modern classical music and art jazz would actually be audiobooks, with me talking for a bit, and then playing 30 seconds of some piece, and then explaining how it’s different from some other piece, and then playing that piece, and so on.

Such audiobooks do exist, and I’m going to call them audio music explainers — as opposed to e.g. text-based music explainers, like these, or interactive music explainers, like these (sorta).

Below are some examples, with Spotify links when available:

Do you know of others?

July links

Karnofsky, Has violence declined, when large-scale atrocities are systematically included?

Winners of the PROSE awards look fascinating.

Five big myths about techies and philanthropy.

Debate on effective altruism at Boston Review.

The /r/AskHistorians master book list.

How Near-Miss Events Amplify or Attenuate Risky Decision Making.

How do types affect (programming) productivity and correctness? A review of the empirical evidence.

What is your software project’s truck factor? How does it compare to those of popular GitHub applications?

Hacker can send fatal doses to hospital drug pumps. Because by default, everything you connect to the internet is hackable.

Lessons from the crypto wars of the 1990s.

 

AI stuff

Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems.

New MIRI-relevant paper from Hutter’s lab: Sequential Extensions of Causal and Evidential Decision Theory.

An introduction to autonomy in weapons systems.

The winners of FLI’s grants competition for research on robust and beneficial AI have been announced.

Joshua Greene (Harvard) is seeking students who want to study AGI with him (presumably, AGI safety/values in particular, given Greene’s presence at FLI’s Puerto Rico conference).

New FLI open letter, this time on autonomous weapons.

New FLI FAQ on the AI open letter and the future of AI.

Deepmind runs their Atari player on a massively distributed computing architecture.

Books, music, etc. from June 2015

Books

I’m not Murray’s intended audience for By the People, but I found it pretty interesting after the first couple chapters, even though I probably agree with Murray about very little in the policy space.

I read the first 1/4 of Barzun’s From Dawn to Decadence. It’s fairly good for what it is, but it’s the “bunch of random cool stories about history” kind of macrohistory, not the data-driven kind of macrohistory I prefer, so I gave up on it.

Music

This month I listened to dozens of jazz albums while working on my in-progress jazz guide. I think I had heard most of them before, but it’s hard to remember which ones. My favorite listens I don’t think I’d heard before were:

Music I particularly enjoyed from other genres:

Movies/TV

I totally loved Inside Out (2015). It’s one of the contenders for best Pixar film ever.

TV’s Wayward Pines is badly written in some ways, but its 5th episode is one of the most satisfying mystery/puzzle resolutions I’ve ever seen. The first four episodes build up a bunch of bizarre mysteries, and then the 5th episode answers most of them in a way that is surprising and rule-constrained and non-arbitrary (e.g. not magic), which is something I see so rarely I can’t even remember the last time I saw it on film/TV.

June 2015 links, round 2

Authorea actually looks pretty awesome for collaborative research paper writing. (So far I’ve been using Overleaf and sometimes… shudder… Google Docs.)

Abstract of a satirical paper from SIGBOVIK 2014:

Besides myriad philosophical disputes, neither [frequentism nor Bayesianism] accurately describes how ordinary humans make inferences… To remedy this problem, we propose belief-sustaining (BS) inference, which makes no use of the data whatsoever, in order to satisfy what we call “the principle of least embarrassment.” This is a much more accurate description of human behavior. We believe this method should replace Bayesian and frequentist inference for economic and public health reasons.

Understanding statistics through interactive visualizations.

GiveWell shallow investigation of risks from atomically precise manufacturing.

My beginner’s guide to modern classical music is basically finished now, and won’t be changing much in the future.

Effective altruist philosophers.

Peter Singer’s Coursera course on effective altruism.

The top 10 mathematical achievements of the last 5ish years, maybe.

Unfortunate statistical terms.

 

AI stuff

Open letter on the digital economy, about tech unemployment etc. Carl Shulman comments.

Robot swordsman.

Robots falling down during the latest DARPA Robotics Challenge.

AI Impacts collected all known public predictions of AGI timing, both individual predictions and survey medians. Conclusions here.

Reply to Ng on AI risk

On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):

Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.

And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…

Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.

In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.

That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).
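To see why I say the distributions barely overlap, here is a crude back-of-envelope sketch. Purely for illustration, it caricatures the survey's aggregate opinion as a normal distribution over AGI arrival years, matched to the quoted 50% (2050) and 90% (2070) answers, and then asks how much probability that caricature leaves for "at least a hundred years away," which is roughly where Ng seems to put a third or more of his probability mass:

```python
# Back-of-envelope only: caricature the expert survey as a normal distribution
# over AGI arrival year, matched to the quoted quantiles, and check how much
# mass it puts on "a hundred or more years from now" (i.e. after 2115).
from statistics import NormalDist

median_year = 2050                      # survey's 50%-confidence year
p90_year = 2070                         # survey's 90%-confidence year

z90 = NormalDist().inv_cdf(0.90)        # ~1.28 standard deviations
sigma = (p90_year - median_year) / z90  # ~15.6 years
survey = NormalDist(mu=median_year, sigma=sigma)

p_after_2115 = 1 - survey.cdf(2115)
print(f"sigma = {sigma:.1f} years")
print(f"P(AGI after 2115) = {p_after_2115:.5f}")  # a few thousandths of a percent
```

Under that admittedly cartoonish model, the surveyed experts collectively assign only a few thousandths of a percent to AGI being a hundred or more years away, versus the >35% Ng seems to place on "hundreds of years." The real distributions are messier than a single normal curve, but the gap is of that rough magnitude.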

Of course, if I were pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.

That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.

I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Etzioni on AI risk

Back in December 2014, AI scientist Oren Etzioni wrote an article called “AI Won’t Exterminate Us — it Will Empower Us.” He opens by quoting the fears of Musk and Hawking, and then says he’s not worried. Why not?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and… beat humans at their own game.

But of course, the people talking about AI as a potential existential risk aren’t worried about AIs creating their own goals, either. Instead, the problem is that an AI optimizing very competently for the goals we gave it presents a threat to our survival. (For details, read just about anything on the topic that isn’t a news story, from Superintelligence to Wait But Why to Wikipedia, or watch this talk by Stuart Russell.)

Etzioni continues:

…the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

First, most of the people concerned about AI as a potential extinction risk don’t think “full artificial intelligence” (aka AGI) will arrive in the next 25 years, either.

Second, I think most of Etzioni’s colleagues in AI would disagree with his claim that the arrival of AGI within 25 years is “far less likely than an asteroid striking the earth and annihilating us” (in the same 25-year time horizon).

Step one: what do AI scientists think about the timing of AGI? In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 10% confident AGI will be built” was 2024. The median response for 50% confidence of AGI was 2050. So, top-of-the-field AI researchers tend to be somewhere between 10% and 50% confident that AGI will be built within Etzioni’s 25-year timeframe.

Step two: how likely is it that an asteroid will strike Earth and annihilate us in the next 25 years? The nice thing about this prediction is that we actually know quite a lot about how frequently large asteroids strike Earth. We have hundreds of millions of years’ worth of data. And even without looking at that data, we know that an asteroid large enough to “annihilate us” hasn’t struck Earth throughout all of primate history — because if it had, we wouldn’t be here! Also, NASA conducted a pretty thorough search for nearby asteroids a while back, and — long story short — they’re pretty confident they’ve identified all the civilization-ending asteroids nearby, and none of them are going to hit Earth. The probability of an asteroid annihilating us in the next 25 years is much, much smaller than 1%.
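For readers who want the arithmetic spelled out, here is a rough version of that comparison. The impact rate is my own ballpark assumption (civilization-ending impacts are commonly estimated at something like one per hundred million years), not a figure from Etzioni, the survey, or NASA:

```python
# Back-of-envelope comparison. The asteroid rate below is my own ballpark
# assumption (roughly one extinction-class impact per 100 million years),
# and it overstates the near-term risk, since NASA's survey suggests no
# known large asteroid is on a collision course.
horizon_years = 25

impacts_per_year = 1 / 100_000_000
p_asteroid = 1 - (1 - impacts_per_year) ** horizon_years   # ~2.5e-7

# The survey quoted above: ~10% confidence in AGI by 2024, ~50% by 2050,
# so something like 0.1 to 0.5 over a 25-year horizon starting from 2015.
p_agi_low, p_agi_high = 0.10, 0.50

print(f"P(extinction-class impact within {horizon_years} yr) = {p_asteroid:.1e}")
print(f"Survey's low-end AGI estimate is ~{p_agi_low / p_asteroid:,.0f}x larger")
```

Even granting that generous impact rate, the survey's low-end 10% figure for AGI within roughly that window comes out hundreds of thousands of times larger than the asteroid risk, which is the opposite of Etzioni's "far less likely" claim.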

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Some books I’m looking forward to, June 2015 edition

Reply to Tabarrok on AI risk

At Marginal Revolution, economist Alex Tabarrok writes:

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed.

First, remember that Naam quoted only the prestigious AI scientists who agree with him, and conspicuously failed to mention that many prestigious AI scientists past and present have taken AI risk seriously.

Second, the common disagreement is not, primarily, about the timing of AGI. As I’ve explained many times before, the AI timelines of those talking about the long-term risk are not noticeably different from those of the mainstream AI community. (Indeed, both Nick Bostrom and I, and many others in the risk-worrying camp, have later timelines than the mainstream AI community does.)

But the main argument of Tabarrok’s post is this:

Why should we be worried about the end of the human race? Oh sure, there are some Terminator like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it.”  I see nothing objectionable in this scenario.

The people who write about existential risk at FHI, MIRI, CSER, FLI, etc. tend not to be worried about Tabarrok’s “glide” scenario. Speaking for myself, at least, that scenario seems pretty desirable. I just don’t think it’s very likely, for reasons partially explained in books like Superintelligence, Global Catastrophic Risks, and others.

(Note that although I work as a GiveWell research analyst, I do not study global catastrophic risks or AI for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Buchanan on AI risk

Back in February, The Washington Post posted an opinion article by David Buchanan of the IBM Watson team: “No, the robots are not going to rise up and kill you.”

From the title, you might assume “Okay, I guess this isn’t about the AI risk concerns raised by MIRI, FHI, Elon Musk, etc.” But in the opening paragraph, Buchanan makes clear he is trying to respond to those concerns, by linking here and here.

I am often suspicious that many people in the “nothing to worry about” camp think they are replying to MIRI & company but are actually replying to Hollywood.

And lo, when Buchanan explains the supposed concern about AI, he doesn’t link to anything by MIRI & company, but instead he literally links to IMDB pages for movies/TV about AI:

Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy…

The entire rest of the article is about the consciousness fallacy. But of course, everyone at MIRI and FHI, and probably Musk as well, agrees that intelligence doesn’t automatically create consciousness, and that has never been what MIRI & company are worried about.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

June 2015 links

Men have more hand-grip strength than women (on average), to an even greater degree than I thought.

I now have a Goodreads profile. I’m not going to bother making it exhaustive.

100+ interesting data sets for statistics.

Five silly fonts inspired by mathematical theorems or open problems.

Pre-registration prizes!

Critique of trim-and-fill as a technique for correcting for publication bias.

Interesting recent interview with Ioannidis.

I have begun work on A beginner’s guide to modern art jazz.

Data analysis subcultures.

Scraping for Journalism, a guide by ProPublica.

 

AI stuff

Scott Alexander, “AI researchers on AI risk” and “No time like the present for AI safety work.”

Stuart Russell & others in Nature on autonomous weapons.

MIT’s Cheetah robot now autonomously jumps over obstacles (video), and an injured robot learns how to limp.

Scherer’s “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” discusses AI regulation possibilities in the context of both medium-term and long-term challenges, including superintelligence. I remain agnostic about whether regulation would be helpful at this stage.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Morris’ thesis in Foragers, Farmers, and Fossil Fuels

From the opening of chapter 5:

I suggested that modern human values initially emerged somewhere around 100,000 years ago (±50,000 years) as a consequence of the biological evolution of our big, fast brains, and that once we had our big, fast brains, cultural evolution became a possibility too. Because of cultural evolution, human values have mutated rapidly in the last twenty thousand years, and the pace of change has accelerated in the last two hundred years.

I identified three major stages in human values, which I linked to foraging, farming, and fossil-fuel societies. My main point was that in each case, modes of energy capture determined population size and density, which in turn largely determined which forms of social organization worked best, which went on to make certain sets of values more successful and attractive than others.

Foragers, I observed, overwhelmingly lived in small, low-density groups, and generally saw political and wealth hierarchies as bad things. They were more tolerant of gender hierarchy, and (by modern lights) surprisingly tolerant of violence. Farmers lived in bigger, denser communities, and generally saw steep political, wealth, and gender hierarchies as fine. They had much less patience than foragers, though, for interpersonal violence, and restricted its range of legitimate uses more narrowly. Fossil-fuel folk live in bigger, denser communities still. They tend to see political and gender hierarchy as bad things, and violence as particularly evil, but they are generally more tolerant of wealth hierarchies than foragers, although not so tolerant as farmers.

Books, music, etc. from May 2015

Books

Learn or Die has some good chapters on Bridgewater, the rest is meh.

I read about half of Foragers, Farmers, and Fossil Fuels. The parts I read were good, but I lost interest because the book confirmed for me that we don’t have good evidence about the values of people over most of history, and so even the most well-argued book possible on the subject couldn’t be all that compelling. We have much better evidence for the historical measures used in Morris’ earlier book, The Measure of Civilization.

Consciousness and the Brain has several good chapters, and some chapters that are a bit too excited about the author’s personal favorite theory of consciousness.

The Sixth Extinction was an enjoyable read, but don’t go in expecting any argument.

Favorite tracks or albums discovered this month

Favorite movies discovered this month

Other

Of course the big news for me this month was that I took a new job at GiveWell, leaving MIRI in the capable hands of Nate Soares.

Videogames as art

I still basically agree with this 4-minute video essay I produced way back in 2008:

Transcript

When the motion picture was invented, critics considered it an amusing toy. They didn’t see its potential to be an art form like painting or music. But only a few decades later, film was in some ways the ultimate art — capable of passion, lyricism, symbolism, subtlety, and beauty. Film could combine the elements of all other arts — music, literature, poetry, dance, staging, fashion, and even architecture — into a single, awesome work. Of course, film will always be used for silly amusements, but it can also express the highest of art. Film has come of age.

In the 1960s, computer programmers invented another amusing toy: the videogame. Nobody thought it could be a serious art form, and who could blame them? Super Mario Brothers didn’t have much in common with Citizen Kane. And nobody was even trying to make artistic games. Companies just wanted to make fun playthings that would sell lots of copies.

But recently, games have started to look a lot more like the movies, and people wondered: “Could this become a serious art form, like film?” In fact, some games basically were films with a little gameplay snuck in.

Of course, there is one major difference between films and games. Film critic Roger Ebert thinks games can never be an art form because

Videogames by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control.

But wait a minute. Aren’t there already serious art forms that allow for flexibility, improvisation, and player choices? Bach and Mozart and other composers famously left room for improvisation in their classical compositions. And of course jazz music is an art form based almost entirely on improvisation within a set of scales or modes or ideas. Avant-garde composers Christian Wolff and John Zorn write “game pieces” in which there are no prearranged notes at all. Performers play according to an unfolding set of rules exactly as in baseball or Mario. So gameplay can be art.

Maybe the real reason some people don’t think games are an art form is that they don’t know any artistic video games. Even the games with impressive graphic design and good music have pretty hokey stories and unoriginal drive-jump-shoot gameplay. And for the most part they’re right: there aren’t many artistic games. Games are only just becoming an art form. It took film a while to become art, too.

But maybe the skeptics haven’t played the right games, either. Have they played Shadow of the Colossus, a minimalist epic of beauty and philosophy? Have they played Façade, a one-act play in which the player tries to keep a couple together by listening to their dialogue, reading their facial expressions, and responding in natural language? Have they seen The Night Journey, by respected video artist Bill Viola, which intends to symbolize a mystic’s path towards enlightenment?

It’s an exciting time for video games. They will continue to deliver simple fun and blockbuster entertainment, but there is also an avant-garde movement of serious artists who are about to launch the medium to new heights of expression, and I for one can’t wait to see what they come up with.

F.A.Q. about my transition to GiveWell

Lots of people are asking for more details about my decision to take a job at GiveWell, so I figured I should publish answers to the most common questions I’ve gotten, though I’m happy to also talk about it in person or by email.

Why did you take a job at GiveWell?

Apparently some people think I must have changed my mind about what I think Earth’s most urgent priorities are. So let me be clear: Nothing has changed about what I think Earth’s most urgent priorities are.

I still buy the basic argument in Friendly AI research as effective altruism.

I still think that growing a field of technical AI alignment research, one which takes the future seriously, is plausibly the most urgent task for those seeking a desirable long-term future for Earth-originating life.

And I still think that MIRI has an incredibly important role to play in growing that field of technical AI alignment research.

I decided to take a research position at GiveWell mostly for personal reasons.

I have always preferred research over management. As many of you who know me in person already know, I’ve been looking for my replacement at MIRI since the day I took the Executive Director role, so that I could return to research. When doing research I very easily get into a flow state; I basically never get into a flow state doing management. I’m pretty proud of what the MIRI team accomplished during my tenure, and I could see myself being an executive somewhere again some day, but I want to do something else for a while.

Why not switch to a research role at MIRI? First, I continue to think MIRI should specialize in computer science research that I don’t have the training to do myself. Second, I look forward to upgrading my research skills while working in domains where I don’t already have lots of pre-existing bias.

[Read more…]