Books, music, etc. from July 2015

Books

Minger’s Death by Food Pyramid has some good warnings about the missteps of the nutrition profession, government nutrition recommendations, and fad diets. Minger is mostly excited by Weston Price’s ideas about nutrition. I haven’t examined that evidence base, but I’d be surprised if e.g. we actually had decent measures of the rates of cancer, etc. in the populations Price visited. His work might elevate some hypotheses to the level of “Okay, we should test this,” in which case my question is “Have we done those RCTs yet?”

Ansari & Klinenberg’s Modern Romance was mildly amusing but not very good.

Music

This month I again listened to dozens of jazz albums while working on my in-progress jazz guide. I finally got to the stage where I hadn’t heard many of the albums before, so I had lots of new encounters with albums I enjoyed a lot:

Albums I liked a lot, from other genres:

Movies/TV

Ones I really liked, or loved:

  • Andrey Zvyagintsev, Leviathan (2014)
  • Noah Baumbach, While We’re Young (2014)
  • Abderrahmane Sissako, Timbuktu (2014)
  • Judd Apatow, Trainwreck (2015)

Wiener on the AI control problem in 1960

Norbert Wiener in Science in 1960:

Similarly, when a machine constructed by us is capable of operating on its incoming data at a pace which we cannot keep, we may not know, until too late, when to turn it off. We all know the fable of the sorcerer’s apprentice, in which the boy makes the broom carry water in his master’s absence, so that it is on the point of drowning him when his master reappears. If the boy had had to seek a charm to stop the mischief in the grimoires of his master’s library, he might have been drowned before he had discovered the relevant incantation. Similarly, if a bottle factory is programmed on the basis of maximum productivity, the owner may be made bankrupt by the enormous inventory of unsalable bottles manufactured before he learns he should have stopped production six months earlier.

The “Sorcerer’s Apprentice” is only one of many tales based on the assumption that the agencies of magic are literal-minded. There is the story of the genie and the fisherman in the Arabian Nights, in which the fisherman breaks the seal of Solomon which has imprisoned the genie and finds the genie vowed to his own destruction; there is the tale of the “Monkey’s Paw,” by W. W. Jacobs, in which the sergeant major brings back from India a talisman which has the power to grant each of three people three wishes. Of the first recipient of this talisman we are told only that his third wish is for death. The sergeant major, the second person whose wishes are granted, finds his experiences too terrible to relate. His friend, who receives the talisman, wishes first for £200. Shortly thereafter, an official of the factory in which his son works comes to tell him that his son has been killed in the machinery and that, without any admission of responsibility, the company is sending him as consolation the sum of £200. His next wish is that his son should come back, and the ghost knocks at the door. His third wish is that the ghost should go away.

Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory. If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Arthur Samuel replied in the same issue with “a refutation”:

A machine is not a genie, it does not work by magic, it does not possess a will… The “intentions” which the machine seems to manifest are the intentions of the human programmer, as specified in advance, or they are subsidiary intentions derived from these, following rules specified by the programmer… There is (and logically there must always remain) a complete hiatus between (i) any ultimate extension and elaboration in this process of carrying out man’s wishes and (ii) the development within the machine of a will of its own. To believe otherwise is either to believe in magic or to believe that the existence of man’s will is an illusion and that man’s actions are as mechanical as the machine’s. Perhaps Wiener’s article and my rebuttal have both been mechanistically determined, but this I refuse to believe.

An apparent exception to these conclusions might be claimed for projected machines of the so-called “neural net” type… Since the internal connections would be unknown, the precise behavior of the nets would be unpredictable and, therefore, potentially dangerous… If practical machines of this type become a reality we will have to take a much closer look at their implications than either Wiener or I have been able to do.

Powerful musical contrasts

For a couple months I’d been listening almost exclusively to jazz music, while working on my jazz guide. Then on a whim I decided to listen to a song I hadn’t heard in a long time, Smashing Pumpkins’ brutal rocker “Geek USA,” and it absolutely blew my mind, as if I was listening to the first piece of rock music invented, in a world that had only previously known folk, classical, and jazz.

The experience reminded me of my favorite scene from Back to the Future, where Marty — who has traveled back in time to 1955 — ends a 1950s rock-and-roll classic with an increasingly intense guitar solo that completely bewilders the 1950s crowd that has never heard hard rock virtuoso guitar before:


Rhodes on nuclear security

Some quotes from Rhodes’ Arsenals of Folly.

#1:

In the 1950s, when the RBMK [nuclear power reactor] design was developed and approved, Soviet industry had not yet mastered the technology necessary to manufacture steel pressure vessels capacious enough to surround such large reactor cores. For that reason, among others, scientists, engineers, and managers in the Soviet nuclear-power industry had pretended for years that a loss-of-coolant accident was unlikely to the point of impossibility in an RBMK. They knew better. The industry had been plagued with disasters and near-disasters since its earliest days. All of them had been covered up, treated as state secrets; information about them was denied not only to the Soviet public but even to the industry’s managers and operators. Engineering is based on experience, including operating experience; treating design flaws and accidents as state secrets meant that every other similar nuclear-power station remained vulnerable and unprepared.

Unknown to the Soviet public and the world, at least thirteen serious power-reactor accidents had occurred in the Soviet Union before the one at Chernobyl. Between 1964 and 1979, for example, repeated fuel-assembly fires plagued Reactor Number One at the Beloyarsk nuclear-power plant east of the Urals near Novosibirsk. In 1975, the core of an RBMK reactor at the Leningrad plant partly melted down; cooling the core by flooding it with liquid nitrogen led to a discharge of radiation into the environment equivalent to about one-twentieth the amount that was released at Chernobyl in 1986. In 1982, a rupture of the central fuel assembly of Chernobyl Reactor Number One released radioactivity over the nearby bedroom community of Pripyat, now in 1986 once again exposed and at risk. In 1985, a steam relief valve burst during a shaky startup of Reactor Number One at the Balakovo nuclear-power plant, on the Volga River about 150 miles southwest of Samara, jetting 500-degree steam that scalded to death fourteen members of the start-up staff; despite the accident, the responsible official, Balakovo’s plant director, Viktor Bryukhanov, was promoted to supervise construction at Chernobyl and direct its operation.


Chomsky on the Peters-Finkelstein affair

More Chomsky, again from Understanding Power (footnotes also reproduced):

Here’s a story which is really tragic… There was this best-seller a few years ago [in 1984], it went through about ten printings, by a woman named Joan Peters… called From Time Immemorial. It was a big scholarly-looking book with lots of footnotes, which purported to show that the Palestinians were all recent immigrants [i.e. to the Jewish-settled areas of the former Palestine, during the British mandate years of 1920 to 1948]. And it was very popular — it got literally hundreds of rave reviews, and no negative reviews: the Washington Post, the New York Times, everybody was just raving about it. Here was this book which proved that there were really no Palestinians! Of course, the implicit message was, if Israel kicks them all out there’s no moral issue, because they’re just recent immigrants who came in because the Jews had built up the country. And there was all kinds of demographic analysis in it, and a big professor of demography at the University of Chicago [Philip M. Hauser] authenticated it. That was the big intellectual hit for that year: Saul Bellow, Barbara Tuchman, everybody was talking about it as the greatest thing since chocolate cake.


Car-hacking

From Wired:

[Remote-controlling a Jeep] is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element… Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country.

Schlosser on nuclear security

Some quotes from Schlosser’s Command and Control.

#1:

On January 23, 1961, a B-52 bomber took off from Seymour Johnson Air Force Base in Goldsboro, North Carolina, for an airborne alert… [Near] midnight… the boom operator of [a refueling] tanker noticed fuel leaking from the B-52’s right wing. Spray from the leak soon formed a wide plume, and within two minutes about forty thousand gallons of jet fuel had poured from the wing. The command post at Seymour Johnson told the pilot, Major Walter S. Tulloch, to dump the rest of the fuel in the ocean and prepare for an emergency landing. But fuel wouldn’t drain from the tank inside the left wing, creating a weight imbalance. At half past midnight, with the flaps down and the landing gear extended, the B-52 went into an uncontrolled spin…

The B-52 was carrying two Mark 39 hydrogen bombs, each with a yield of 4 megatons. As the aircraft spun downward, centrifugal forces pulled a lanyard in the cockpit. The lanyard was attached to the bomb release mechanism. When the lanyard was pulled, the locking pins were removed from one of the bombs. The Mark 39 fell from the plane. The arming wires were yanked out, and the bomb responded as though it had been deliberately released by the crew above a target. The pulse generator activated the low-voltage thermal batteries. The drogue parachute opened, and then the main chute. The barometric switches closed. The timer ran out, activating the high-voltage thermal batteries. The bomb hit the ground, and the piezoelectric crystals inside the nose crushed. They sent a firing signal. But the weapon didn’t detonate.

Every safety mechanism had failed, except one: the ready/safe switch in the cockpit. The switch was in the SAFE position when the bomb dropped. Had the switch been set to GROUND or AIR, the X-unit would’ve charged, the detonators would’ve triggered, and a thermonuclear weapon would have exploded in a field near Faro, North Carolina…

The other Mark 39 plummeted straight down and landed in a meadow just off Big Daddy’s Road, near the Nahunta Swamp. Its parachutes had failed to open. The high explosives did not detonate, and the primary was largely undamaged…

The Air Force assured the public that the two weapons had been unarmed and that there was never any risk of a nuclear explosion. Those statements were misleading. The T-249 control box and ready/safe switch, installed in every one of SAC’s bombers, had already raised concerns at Sandia. The switch required a low-voltage signal of brief duration to operate — and that kind of signal could easily be provided by a stray wire or a short circuit, as a B-52 full of electronic equipment disintegrated midair.

A year after the North Carolina accident, a SAC ground crew removed four Mark 28 bombs from a B-47 bomber and noticed that all of the weapons were armed. But the seal on the ready/safe switch in the cockpit was intact, and the knob hadn’t been turned to GROUND or AIR. The bombs had not been armed by the crew. A seven-month investigation by Sandia found that a tiny metal nut had come off a screw inside the plane and lodged against an unused radar-heating circuit. The nut had created a new electrical pathway, allowing current to reach an arming line—and bypass the ready/safe switch. A similar glitch on the B-52 that crashed near Goldsboro would have caused a 4-megaton thermonuclear explosion. “It would have been bad news—in spades,” Parker F. Jones, a safety engineer at Sandia, wrote in a memo about the accident. “One simple, dynamo-technology, low-voltage switch stood between the United States and a major catastrophe!”


Chomsky on overthrowing third world governments

Noam Chomsky is worth reading because he’s an articulate, well-informed, sources-citing defender of unconventional views rarely encountered in mainstream venues. It’s hard for me to evaluate his views because he isn’t very systematic in his presentations of evidence for his core political theses — but then, hardly anybody is. But whether his views are fair or not, I think it’s good to stick my head outside the echo chamber regularly.

Personally, I’m most interested in his perspectives on plutocracy, international relations, and state violence. On those topics, Understanding Power (+450 pages of footnotes) is a pretty good introduction to his views.

To give you a feel for the book, I’ll excerpt a passage from chapter 1 of Understanding Power about overthrowing third world governments. I’ve also reproduced the (renumbered) footnotes for this passage.


Feinstein on the global arms trade

Some quotes from Feinstein’s The Shadow World: Inside the Global Arms Trade.

#1:

The £75m Airbus, painted in the colours of the [Saudi] Prince’s beloved Dallas Cowboys, was a gift from the British arms company BAE Systems. It was a token of gratitude for the Prince’s role, as son of the country’s Defence Minister, in the biggest arms deal the world has seen. The Al Yamamah – ‘the dove’ – deal signed between the United Kingdom and Saudi Arabia in 1985 was worth over £40bn. It was also arguably the most corrupt transaction in trading history. Over £1bn was paid into accounts controlled by Bandar. The Airbus – maintained and operated by BAE at least until 2007 – was a little extra, presented to Bandar on his birthday in 1988.

A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.


Some books I’m looking forward to, July 2015 edition

Audio music explainers

If I had a lot more time, and the licenses to reproduce extended excerpts from tons of recorded music, the ideal versions of my beginner’s guides to modern classical music and art jazz would actually be audiobooks, with me talking for a bit, and then playing 30 seconds of some piece, and then explaining how it’s different from some other piece, and then playing that piece, and so on.

Such audiobooks do exist, and I’m going to call them audio music explainers — as opposed to e.g. text-based music explainers, like these, or interactive music explainers, like these (sorta).

Below are some examples, with Spotify links when available:

Do you know of others?

July links

Karnofsky, Has violence declined, when large-scale atrocities are systematically included?

Winners of the PROSE awards look fascinating.

Five big myths about techies and philanthropy.

Debate on effective altruism at Boston Review.

The /r/AskHistorians master book list.

How Near-Miss Events Amplify or Attenuate Risky Decision Making.

How do types affect (programming) productivity and correctness? A review of the empirical evidence.

What is your software project’s truck factor? How does it compare to those of popular GitHub applications?

Hacker can send fatal doses to hospital drug pumps. Because by default, everything you connect to the internet is hackable.

Lessons from the crypto wars of the 1990s.


AI stuff

Jacob Steinhardt: Long-Term and Short-Term Challenges to Ensuring the Safety of AI Systems.

New MIRI-relevant paper from Hutter’s lab: Sequential Extensions of Causal and Evidential Decision Theory.

An introduction to autonomy in weapons systems.

The winners of FLI’s grants competition for research on robust and beneficial AI have been announced.

Joshua Greene (Harvard) is seeking students who want to study AGI with him (presumably, AGI safety/values in particular, given Greene’s presence at FLI’s Puerto Rico conference).

New FLI open letter, this time on autonomous weapons.

New FLI FAQ on the AI open letter and the future of AI.

DeepMind runs their Atari player on a massively distributed computing architecture.

Books, music, etc. from June 2015

Books

I’m not Murray’s intended audience for By the People, but I found it pretty interesting after the first couple chapters, even though I probably agree with Murray about very little in the policy space.

I read the first 1/4 of Barzun’s From Dawn to Decadence. It’s fairly good for what it is, but it’s the “bunch of random cool stories about history” kind of macrohistory, not the data-driven kind of macrohistory I prefer, so I gave up on it.

Music

This month I listened to dozens of jazz albums while working on my in-progress jazz guide. I think I had heard most of them before, but it’s hard to remember which ones. My favorite listens I don’t think I’d heard before were:

Music I particularly enjoyed from other genres:

Movies/TV

I totally loved Inside Out (2015). It’s one of the contenders for best Pixar film ever.

TV’s Wayward Pines is badly written in some ways, but its 5th episode is one of the most satisfying mystery/puzzle resolutions I’ve ever seen. The first four episodes build up a bunch of bizarre mysteries, and then the 5th episode answers most of them in a way that is surprising and rule-constrained and non-arbitrary (e.g. not magic), which is something I see so rarely I can’t even remember the last time I saw it on film/TV.

June 2015 links, round 2

Authorea actually looks pretty awesome for collaborative research paper writing. (So far I’ve been using Overleaf and sometimes… shudder… Google Docs.)

Abstract of a satirical paper from SIGBOVIK 2014:

Besides myriad philosophical disputes, neither [frequentism nor Bayesianism] accurately describes how ordinary humans make inferences… To remedy this problem, we propose belief-sustaining (BS) inference, which makes no use of the data whatsoever, in order to satisfy what we call “the principle of least embarrassment.” This is a much more accurate description of human behavior. We believe this method should replace Bayesian and frequentist inference for economic and public health reasons.

Understanding statistics through interactive visualizations.

GiveWell shallow investigation of risks from atomically precise manufacturing.

My beginner’s guide to modern classical music is basically finished now, and won’t be changing much in the future.

Effective altruist philosophers.

Peter Singer’s Coursera course on effective altruism.

The top 10 mathematical achievements of the last 5ish years, maybe.

Unfortunate statistical terms.


AI stuff

Open letter on the digital economy, about tech unemployment etc. Carl Shulman comments.

Robot swordsman.

Robots falling down during the latest DARPA Robotics Challenge.

AI Impacts collected all known public predictions of AGI timing, both individual predictions and survey medians. Conclusions here.

Reply to Ng on AI risk

On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):

Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.

And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…

Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.

In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.

That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).
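To see just how little they overlap, here’s a toy sketch. This is my own illustration, not anything from the survey: the lognormal shape, and the reading of “hundreds of years” as “more than 100 years,” are both assumptions I’m supplying. The idea is simply to fit a distribution to the survey’s two quoted quantiles and see how much probability mass that leaves beyond a century.

```python
from math import erf, log, sqrt

# Toy illustration only: fit a lognormal over "years until AGI, counted from 2015"
# to the survey medians quoted above (50% confidence by 2050 = 35 years out,
# 90% confidence by 2070 = 55 years out). The lognormal shape is my assumption.

def normal_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z90 = 1.2816                    # 90th-percentile z-score of a standard normal
mu = log(35)                    # median at 35 years => mu = ln(35)
sigma = (log(55) - mu) / z90    # chosen so the 90th percentile lands at 55 years

# Probability the fitted distribution puts on "more than 100 years away"
p_over_100y = 1.0 - normal_cdf((log(100) - mu) / sigma)
print(f"Implied P(AGI more than 100 years away): {p_over_100y:.2%}")  # roughly 0.1-0.2%
```

On that (admittedly crude) reading, the median survey respondent puts something like a tenth of a percent on “more than a century away,” where Ng seems to put a third or more.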

Of course if I was pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.

That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.

I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Etzioni on AI risk

Back in December 2014, AI scientist Oren Etzioni wrote an article called “AI Won’t Exterminate Us — it Will Empower Us.” He opens by quoting the fears of Musk and Hawking, and then says he’s not worried. Why not?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and… beat humans at their own game.

But of course, the people talking about AI as a potential existential risk aren’t worried about AIs creating their own goals, either. Instead, the problem is that an AI optimizing very competently for the goals we gave it presents a threat to our survival. (For details, read just about anything on the topic that isn’t a news story, from Superintelligence to Wait But Why to Wikipedia, or watch this talk by Stuart Russell.)

Etzioni continues:

…the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

First, most of the people concerned about AI as a potential extinction risk don’t think “full artificial intelligence” (aka AGI) will arrive in the next 25 years, either.

Second, I think most of Etzioni’s colleagues in AI would disagree with his claim that the arrival of AGI within 25 years is “far less likely than an asteroid striking the earth and annihilating us” (in the same 25-year time horizon).

Step one: what do AI scientists think about the timing of AGI? In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 10% confident AGI will be built” was 2024. The median response for 50% confidence of AGI was 2050. So, top-of-the-field AI researchers tend to be somewhere between 10% and 50% confident that AGI will be built within Etzioni’s 25-year timeframe.

Step two: how likely is it that an asteroid will strike Earth and annihilate us in the next 25 years? The nice thing about this prediction is that we actually know quite a lot about how frequently large asteroids strike Earth. We have hundreds of millions of years’ worth of data. And even without looking at that data, we know that an asteroid large enough to “annihilate us” hasn’t struck Earth throughout all of primate history — because if it had, we wouldn’t be here! Also, NASA conducted a pretty thorough search for nearby asteroids a while back, and — long story short — they’re pretty confident they’ve identified all the civilization-ending asteroids nearby, and none of them are going to hit Earth. The probability of an asteroid annihilating us in the next 25 years is much, much smaller than 1%.
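For concreteness, here’s the back-of-envelope version of that comparison. The impact rate below is my own rough assumption (on the order of one annihilation-scale impact per hundred million years), so treat the output as an order-of-magnitude illustration rather than a careful estimate:

```python
# Rough comparison of Etzioni's two probabilities over a 25-year horizon.
# The asteroid rate is an assumption supplied for illustration, not a figure
# from the post: roughly one annihilation-scale impact per 100 million years.

YEARS_PER_IMPACT = 100_000_000   # assumed average gap between civilization-ending impacts
HORIZON = 25                     # Etzioni's 25-year timeframe

p_asteroid = HORIZON / YEARS_PER_IMPACT    # ~2.5e-7, i.e. about 0.000025%
p_agi_low, p_agi_high = 0.10, 0.50         # survey range for AGI within roughly 25 years

print(f"P(annihilating asteroid within {HORIZON} years) ~= {p_asteroid:.1e}")
print(f"Survey-implied P(AGI within ~{HORIZON} years): {p_agi_low:.0%} to {p_agi_high:.0%}")
print(f"AGI is ~{p_agi_low/p_asteroid:,.0f}x to {p_agi_high/p_asteroid:,.0f}x more likely on these numbers")
```

Even if my assumed impact rate is off by an order of magnitude in either direction, the two probabilities stay several orders of magnitude apart, which is the point of the comparison.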

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Some books I’m looking forward to, June 2015 edition

Reply to Tabarrok on AI risk

At Marginal Revolution, economist Alex Tabarrok writes:

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed.

First, remember that Naam quoted only the prestigious AI scientists who agree with him, and conspicuously failed to mention that many prestigious AI scientists past and present have taken AI risk seriously.

Second, the common disagreement is not, primarily, about the timing of AGI. As I’ve explained many times before, the AI timelines of those talking about the long-term risk are not noticeably different from those of the mainstream AI community. (Indeed, both Nick Bostrom and I, and many others in the risk-worrying camp, have later timelines than the mainstream AI community does.)

But the main argument of Tabarrok’s post is this:

Why should we be worried about the end of the human race? Oh sure, there are some Terminator like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it.”  I see nothing objectionable in this scenario.

The people who write about existential risk at FHI, MIRI, CSER, FLI, etc. tend not to be worried about Tabarrok’s “glide” scenario. Speaking for myself, at least, that scenario seems pretty desirable. I just don’t think it’s very likely, for reasons partially explained in books like Superintelligence, Global Catastrophic Risks, and others.

(Note that although I work as a GiveWell research analyst, I do not study global catastrophic risks or AI for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Buchanan on AI risk

Back in February, The Washington Post posted an opinion article by David Buchanan of the IBM Watson team: “No, the robots are not going to rise up and kill you.”

From the title, you might assume “Okay, I guess this isn’t about the AI risk concerns raised by MIRI, FHI, Elon Musk, etc.” But in the opening paragraph, Buchanan makes clear he is trying to respond to those concerns, by linking here and here.

I am often suspicious that many people in the “nothing to worry about” camp think they are replying to MIRI & company but are actually replying to Hollywood.

And lo, when Buchanan explains the supposed concern about AI, he doesn’t link to anything by MIRI & company, but instead he literally links to IMDB pages for movies/TV about AI:

Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy…

The entire rest of the article is about the consciousness fallacy. But of course, everyone at MIRI and FHI, and probably Musk as well, agrees that intelligence doesn’t automatically create consciousness, and that has never been what MIRI & company are worried about.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)