Some books I’m looking forward to, October 2015 edition

* = added this round

Books, music, etc. from September 2015


I thoroughly enjoyed MacFarquhar’s Strangers Drowning. It does contain at least one error:

[Stephanie] didn’t know what [“bigger” thing she should be doing]… [maybe] preventing malevolent computers from attacking mankind, like the people at the Machine Intelligence Research Institute?


Music I most enjoyed discovering this month:


Ones I really liked, or loved:

Dietterich and Horvitz on AI risk

Tom Dietterich and Eric Horvitz have a new opinion piece in Communications of the ACM: Rise of Concerns about AI. Below, I comment on a few passages from the article.


Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”

I suppose you could “charge” an advanced AI with the task of undergoing an intelligence explosion, but that seems like an incredibly reckless thing for someone to do. More often, the concern is about intelligence explosion as a logical byproduct of the convergent instrumental goal of self-improvement. Nearly all possible goals are more likely to be achieved if the AI can first improve its capabilities, whether the goal is calculating digits of pi or optimizing a manufacturing process. This is the argument given in the book Dietterich and Horvitz cite for these concerns: Nick Bostrom’s Superintelligence.


[Intelligence explosion] runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning…

I follow this literature pretty closely, and I haven’t heard of this result. No citation is provided, so I don’t know what they’re talking about. I doubt this is the kind of thing you can show using computational complexity theory, given how under-specified the concept of intelligence explosion is.

Fortunately, Dietterich and Horvitz do advocate several lines of research to make AI systems safer and more secure, and they also say:

we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Pedro Domingos on AI risk

Pedro Domingos, an AI researcher at the University of Washington and the author of The Master Algorithm, on the podcast Talking Machines:

There are these fears that computers are going to get very smart and then suddenly they’ll become conscious and they’ll take over the world and enslave us or destroy us like the Terminator. This is completely absurd. But even though it’s completely absurd, you see a lot of it in the media these days…

Domingos doesn’t identify which articles he’s talking about, but nearly all the articles like this that I’ve seen lately are inspired by comments on AI risk from Stephen Hawking, Elon Musk, and Bill Gates, which in turn are (as far as I know) informed by Nick Bostrom’s Superintelligence.

None of these people, as far as I know, have expressed a concern that machines will suddenly become conscious and then take over the world. Rather, these people are concerned with the risks posed by extreme AI competence, as AI scientist Stuart Russell explains.

I don’t know what source Domingos read that talked about machines suddenly becoming conscious and taking over the world. I don’t think I’ve seen such scenarios described outside of fiction.

Anyway, in his book, Domingos does seem to be familiar with the competence concern:

The point where we could turn off all our computers without causing the collapse of modern civilization has long passed. Machine learning is the last straw: if computers can start programming themselves, all hope of controlling them is surely lost. Distinguished scientists like Stephen Hawking have called for urgent research on this issue before it’s too late.

Relax. The chances that an AI equipped with the [ultimate machine learning algorithm] will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear…

[AI systems] can vary what they do, even come up with surprising plans, but only in service of the goals we set them. A robot whose programmed goal is “make a good dinner” may decide to cook a steak, a bouillabaisse, or even a delicious new dish of its own creation, but it can’t decide to murder its owner any more than a car can decide to fly away.

…[The] biggest worry is that, like the proverbial genie, the machines will give us what we ask for instead of what we want. This is not a hypothetical scenario; learning algorithms do it all the time. We train a neural network to recognize horses, but it learns instead to recognize brown patches, because all the horses in its training set happened to be brown.
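The “brown horses” failure Domingos describes in that last paragraph is easy to make concrete. Below is a toy sketch (my own illustration, not from the book): a deliberately naive learner picks whichever single binary feature best matches the label, and because every horse in its training set happens to be brown, it learns color rather than anything horse-like.

```python
def pick_best_feature(examples):
    """Return the binary feature whose value best matches the label
    on the training data (a deliberately naive one-feature learner)."""
    features = examples[0][0].keys()
    def accuracy(f):
        return sum(int(x[f] == y) for x, y in examples) / len(examples)
    return max(features, key=accuracy)

# Every horse (label 1) in the training set happens to be brown.
train = [
    ({"is_brown": 1, "four_legs": 1}, 1),  # brown horse
    ({"is_brown": 1, "four_legs": 1}, 1),  # brown horse
    ({"is_brown": 0, "four_legs": 1}, 0),  # gray dog
    ({"is_brown": 0, "four_legs": 0}, 0),  # bird
]

rule = pick_best_feature(train)   # picks "is_brown": 100% accurate on train
gray_horse = {"is_brown": 0, "four_legs": 1}
prediction = gray_horse[rule]     # 0: the learned rule misses a non-brown horse
```

The learned rule is perfectly accurate on the training data, yet a gray horse is classified as a non-horse: the machine gave us what we asked for (fit the training set), not what we wanted (recognize horses).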

I was curious to see what his rebuttal to the competence concern (“machines will give us what we ask for instead of what we want”) was, but this section just ends with:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

Which isn’t very clarifying, especially since elsewhere he writes that “any sufficiently advanced AI is indistinguishable from God.”

The next section is about Kurzweil’s law of accelerating returns, and doesn’t seem to address the competence concern.

So… I guess I can’t tell why Domingos thinks the chances of a global AI catastrophe are “zero,” and I can’t tell what he thinks of the basic competence concern expressed by Hawking, Musk, Gates, Russell, Bostrom, etc.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Unabashedly emotional or catchy avant-garde music

Holden wrote me a fictional conversation to illustrate his experience of trying to find music that is (1) complex, (2) structurally interesting, and yet (3) listenable / emotional / catchy (at least in some parts):

Holden: I’m bored by pop music. Got anything interesting?

Person: Here, try this 7-second riff played repeatedly for 26 minutes.

Holden: Umm … but what about … something a little more varied?

Person: Check out 38 minutes of somebody screaming incoherently while 5 incompatible instruments play random notes and a monotone voice recites surreal poetry.

Holden: But like … uh … more listenable maybe?

Person: I thought you didn’t want pop bullshit. Well, here’s something middlebrow: a guy playing 3 chords on a guitar who sounds kind of sarcastic.

Holden’s three criteria describe a great deal of my favorite music, much of which is scattered throughout my guides to modern classical music and modern art jazz. So if those criteria sound good to you, too, then I’ve listed below a few musical passages you might like.

Osvaldo Golijov, The Dreams and Prayers of Isaac the Blind, I. Agitato, 4:53-7:45

A string quartet + clarinet piece with a distinctly Jewish sound which, in this passage, sounds to me like a scene of building tension and frantic activity until all falls away (6:23) and the clarinet screams prayers of desperation to God (6:59).

Carla Bley, Escalator Over the Hill, Hotel Overture, 6:26-10:30

A circus-music refrain ambles along until things slow down (7:40) and a sax begins to solo (7:45) over repeating fatalistic-sounding chords in a way that, like the clarinet in the passage above, sounds to me like a cry of desperation, one with a cracking voice (e.g. at 8:03 & 8:09), and, at times, non-tonal gargled screaming (8:32), finally fading back into earlier themes from the overture (9:45).

Arvo Pärt, Tabula Rasa, Ludus, 4:10-9:52

Violins swirl around chords that seem to endlessly descend until a period of relative quiet (4:50) accented by bells. The earlier pattern returns (5:26), eventually picking up pace (5:44), until a momentary return to the calm of the bells (6:14). Then another return to the swirling violins (6:55), which again pick up their pace but this time with a thundering crash (7:15) that foreshadows the destruction that lies ahead. The violins ascend to a peak (7:55), and then quiver as they fall — farther and farther — until booming chords (8:44) announce the final desperate race (8:49) to the shattering end (9:36). If this doesn’t move you, you might be dead.

Sergey Kuryokhin, The Sparrow Oratorium, Summer, 0:55-4:36

Squeaky strings wander aimlessly until the piece suddenly jumps into a rollicking riff (1:11) that will be repeated throughout the piece. Variations on this riff continue as a high-gain guitar plays a free jazz solo. The solo ends (2:30), the noise builds, and then suddenly transitions (2:46) to a silly refrain of “zee zee zee zee…” and other vocalizations and then (3:17) a female pop singer with a soaring chorus that bleeds into (4:05) a variation on the original riff with sparse instrumentation which then launches into a louder, fuller-sounding version of the riff (4:20). (To me, this track is more catchy than emotional.)

John Adams, Harmonielehre, 1st movement, 12:19-16:18

Melancholy strings descend, but there is tension in the mood, announced by an ominous trill (12:45), and then another (12:51). But then, the mood lifts with piano and woodwinds (13:03) repeating an optimistic chord. The music accelerates, and takes another tonal shift toward a tense alert (13:22). Booming brass and drums enter (13:41) as things continue to accelerate, and the drums and brass strike again (14:29) and drag the whole piece down with them, in pitch and pace. The strings and horns struggle to rise again until the horns soar free (15:11). The instruments rise and accelerate again until they break through to the upper atmosphere (15:32). Then they pull back, as if they see something up ahead, and… BOOM (16:04) there are the thundering E minor chords that opened the piece, here again to close the movement.

Artistic greatness, according to my brain: a first pass

Why do some pieces of music, or film, or visual art strike me as “great works of art,” independent of how much I enjoy them? When I introspect on this, and when I test my brain’s mostly subconscious “artistic greatness function” against a variety of test cases, it seems to me that the following features play a major role in my artistic greatness judgments:

  1. Innovation
  2. Cohesion
  3. Execution
  4. Emotionality

Below is a brief sketch of what I mean by each of these. In the future, I’ll elaborate on various points, and run additional thought experiments as I try to work toward reflective equilibrium for my aesthetic judgments.


Anthony Braxton albums

I recently listened to 70 albums by Anthony Braxton, several of them very long (between 2 and 12 CDs each). I thought a few of his albums were pretty great as works of art, but I enjoyed exactly one of them (Eugene), and I thought zero of them were accessible enough to include in my guide.

I should also mention that I basically only listened to the albums with original compositions on them. And if you know Braxton, you know these compositions are complicated — e.g. Composition 82, scored for simultaneous performance by four different orchestras. The dude is prolific.

To see which Braxton albums I listened to, Ctrl+F on my (in-progress) jazz guide for “Anthony Braxton’s” (it’s in a footnote).


Effective altruism definitions

Everyone has their own, it seems:

…the basic tenet of Effective Altruism: leading an ethical life involves using a portion of personal assets and resources to effectively alleviate the consequences of extreme poverty.

From the latest Life You Can Save newsletter.

Tracks or albums pushing musical boundaries

Here’s a playlist of tracks or albums pushing musical boundaries, released in 2012 or later:

This list is exclusive to rock-descended music. My knowledge of jazz and contemporary classical is less comprehensive than my knowledge of rock and its descendants, so I’m less able to tell what is genuinely new for jazz and contemporary classical.

And no, I don’t know why the artists named above all begin with a letter in the first half of the alphabet.

Elon Musk on saving the world

On The Late Show, Stephen Colbert asked Elon Musk if he was trying to save the world. The obvious, transparent answer is “yes.” But Elon’s reply was “Well, I’m trying to do useful things.” Perhaps Elon’s PR person told him that trying to save the world comes off as arrogant, even if you’re a billionaire.


“Fewer Data, Please”

The BMJ has some neat features, such as paper-specific “rapid responses” and published peer-review correspondence.

The latter feature allowed me to discover that in their initial “revise and resubmit” comments on a recent meta-analysis on sugar-sweetened beverages and type 2 diabetes, the BMJ manuscript committee asked the study’s authors to provide fewer data:

There is a very large number of supplementary files, tables and diagrams. It would be helpful if these could be reduced to the most important and essential supplementary items.

What? Why? Are they going to run out of server space? Give me ALL THE DATA! Finding the data I want in a huge 30 MB supplementary data file is still much easier than asking the corresponding author for it 3 years later.

Is this kind of reviewer feedback common? I thought the top journals were generally on board with the open science trend.

Some books I’m looking forward to, September 2015 edition

* = added this round

Books, music, etc. from August 2015


López’s Dog Whistle Politics was rarely persuasive. A lot of stuff like “Reagan said these two race-baiting things, and then people voted for him and didn’t mind his regressive tax policies, because they were racist and fell for his dog whistle statements.” I assume lots of Americans are fairly racist, and I assume politicians use racist dog whistles from time to time, but I don’t know how important those dog whistles are for American politics, and López didn’t put much effort into supporting his claims on that question.


Music I most enjoyed discovering this month:


Ones I really liked, or loved:

Yudkowsky on science and programming

There are serious disanalogies, too, but I still like this, from Eliezer Yudkowsky:

Lots of superforecasters [link] are programmers, it turns out, presumably for the same reason lots of programmers are correct contrarians of any other stripe. (My hypothesis is a mix of a lawful thinking gear, real intellectual difficulty of daily practice, and the fact that the practice of debugging is the only profession that has a fast loop for hypothesis formulation, testing, and admission of error. Programming is vastly more scientific than academic science.)

And later:

You’d need to have lived the lives of Newton, Lavoisier, Einstein, Fermi, and Kahneman all put together to be proven wrong about as many facts as a programmer unlearns in one year of debugging, though admittedly they’d be deeper and more emotionally significant facts.

Operators sleeping at the wrong time

An interesting sentence from Monk (2012):

Perhaps the two most dramatic examples of operators being asleep when they should have been awake are the airliner full of passengers that over-flew its U.S. West Coast airport and headed out over the Pacific with all of the flight crew asleep, and the Peach Bottom nuclear power plant in Pennsylvania, where regulators made an unexpected visit one night only to find everyone in the control room asleep (Lauber & Kayten, 1988).

Films I’m looking forward to

  • Swanberg, Digging for Fire (Aug 21, 2015)
  • Perry, Queen of Earth (Aug 26, 2015)
  • Villeneuve, Sicario (Sep 25, 2015)
  • Mendes, Spectre (Nov 6, 2015)
  • Haynes, Carol (Nov 20, 2015)
  • Sohn, The Good Dinosaur (Nov 25, 2015)
  • Abrams, The Force Awakens (Dec 18, 2015)
  • Tarantino, The Hateful Eight (Dec 25, 2015)
  • Russell, Joy (Dec 25, 2015)
  • Iñárritu, The Revenant (Dec 25, 2015)
  • Coen brothers, Hail, Caesar! (Feb 5, 2016)
  • Nichols, Midnight Special (Mar 18, 2016)
  • Stanton, Finding Dory (Jun 17, 2016)
  • Edwards, Rogue One (Dec 16, 2016)
  • Scorsese, Silence (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 14, 2017)
  • Cameron, Avatar 2 (Dec 2017)
  • Audiard, Dheepan (TBD)
  • Haneke, Flashmob (TBD)
  • Dardenne brothers, The Unknown Girl (TBD)

Replies to people who argue against worrying about long-term AI safety risks today

More replies will be added here as I remember or discover them. To focus on the “modern” discussion, I’ll somewhat arbitrarily limit this to replies to comments or articles that were published after the release of Bostrom’s Superintelligence on Sep. 3rd, 2014. Please remind me which ones I’m forgetting.

By me

  • My reply to critics in Edge.org’s “Myth of AI” discussion. (Timelines, malevolence confusion, convergent instrumental goals.)
  • My reply to AI researcher Andrew Ng. (Timelines, malevolence confusion.)
  • My reply to AI researcher Oren Etzioni. (Timelines, convergent instrumental goals.)
  • My reply to economist Alex Tabarrok. (Timelines, glide scenario.)
  • My reply to AI researcher David Buchanan. (Consciousness confusion.)
  • My reply to physicist Lawrence Krauss. (Power requirements.)
  • My reply to AI researcher Jeff Hawkins. (Self-replication, anthropomorphic AI, intelligence explosion, timelines.)
  • My reply to AI researcher Pedro Domingos. (Consciousness confusion? Not sure.)

By others

  • Stuart Russell replies to critics in Edge.org’s “Myth of AI” discussion. (Convergent instrumental goals.)
  • Rob Bensinger replies to computer scientist Ernest Davis. (Intelligence explosion, AGI capability, value learning.)
  • Rob Bensinger replies to roboticist Rodney Brooks and philosopher John Searle. (Narrow AI, timelines, malevolence confusion.)
  • Scott Alexander replies to technologist and novelist Ramez Naam and others. (Mainstream acceptance of AI risks.)
  • Olle Häggström replies to nuclear security specialist Edward Moore Geist. (Plausibility of superhuman AI, goal content integrity.)


Some books I’m looking forward to, August 2015 edition

* = added this round

Books, music, etc. from July 2015


Minger’s Death by Food Pyramid has some good warnings against the missteps of the nutrition profession, government nutrition recommendations, and fad diets. Minger is mostly excited by Weston Price’s ideas about nutrition. I haven’t examined that evidence base, but I’d be surprised if e.g. we actually had decent measures of the rates of cancer, etc. in the populations Price visited. His work might elevate some hypotheses to the level of “Okay, we should test this,” in which case my question is “Have we done those RCTs yet?”

Ansari & Klinenberg’s Modern Romance was mildly amusing but not very good.


This month I again listened to dozens of jazz albums while working on my in-progress jazz guide. I finally got to the stage where I hadn’t heard many of the albums before, so I had lots of new encounters with albums I enjoyed a lot:

Albums I liked a lot, from other genres:


Ones I really liked, or loved:

  • Andrey Zvyagintsev, Leviathan (2014)
  • Noah Baumbach, While We’re Young (2014)
  • Abderrahmane Sissako, Timbuktu (2014)
  • Judd Apatow, Trainwreck (2015)

Wiener on the AI control problem in 1960

Norbert Wiener in Science in 1960:

Similarly, when a machine constructed by us is capable of operating on its incoming data at a pace which we cannot keep, we may not know, until too late, when to turn it off. We all know the fable of the sorcerer’s apprentice, in which the boy makes the broom carry water in his master’s absence, so that it is on the point of drowning him when his master reappears. If the boy had had to seek a charm to stop the mischief in the grimoires of his master’s library, he might have been drowned before he had discovered the relevant incantation. Similarly, if a bottle factory is programmed on the basis of maximum productivity, the owner may be made bankrupt by the enormous inventory of unsalable bottles manufactured before he learns he should have stopped production six months earlier.

The “Sorcerer’s Apprentice” is only one of many tales based on the assumption that the agencies of magic are literal-minded. There is the story of the genie and the fisherman in the Arabian Nights, in which the fisherman breaks the seal of Solomon which has imprisoned the genie and finds the genie vowed to his own destruction; there is the tale of the “Monkey’s Paw,” by W. W. Jacobs, in which the sergeant major brings back from India a talisman which has the power to grant each of three people three wishes. Of the first recipient of this talisman we are told only that his third wish is for death. The sergeant major, the second person whose wishes are granted, finds his experiences too terrible to relate. His friend, who receives the talisman, wishes first for £200. Shortly thereafter, an official of the factory in which his son works comes to tell him that his son has been killed in the machinery and that, without any admission of responsibility, the company is sending him as consolation the sum of £200. His next wish is that his son should come back, and the ghost knocks at the door. His third wish is that the ghost should go away.

Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory. If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Arthur Samuel replied in the same issue with “a refutation”:

A machine is not a genie, it does not work by magic, it does not possess a will… The “intentions” which the machine seems to manifest are the intentions of the human programmer, as specified in advance, or they are subsidiary intentions derived from these, following rules specified by the programmer… There is (and logically there must always remain) a complete hiatus between (i) any ultimate extension and elaboration in this process of carrying out man’s wishes and (ii) the development within the machine of a will of its own. To believe otherwise is either to believe in magic or to believe that the existence of man’s will is an illusion and that man’s actions are as mechanical as the machine’s. Perhaps Wiener’s article and my rebuttal have both been mechanistically determined, but this I refuse to believe.

An apparent exception to these conclusions might be claimed for projected machines of the so-called “neural net” type… Since the internal connections would be unknown, the precise behavior of the nets would be unpredictable and, therefore, potentially dangerous… If practical machines of this type become a reality we will have to take a much closer look at their implications than either Wiener or I have been able to do.