15 classical music traditions, compared

Other Classical Musics argues that there are at least 15 musical traditions around the world worthy of the title “classical music”:

According to our rule-of-thumb, a classical music will have evolved… where a wealthy class of connoisseurs has stimulated its creation by a quasi-priesthood of professionals; it will have enjoyed high social esteem. It will also have had the time and space to develop rules of composition and performance, and to allow the evolution of a canon of works, or forms… our definition does imply acceptance of a ‘classical/folk-popular’ divide. That distinction is made on the assumption that these categories simply occupy opposite ends of a spectrum, because almost all classical music has vernacular roots, and periodically renews itself from them…

In one of the earliest known [Western] definitions, classique is translated as ‘classical, formall, orderlie, in due or fit ranke; also, approved, authenticall, chiefe, principall’. The implication there was: authority, formal discipline, models of excellence. A century later ‘classical’ came to stand also for a canon of works in performance. Yet almost every non-Western culture has its own concept of ‘classical’ and many employ criteria similar to the European ones, though usually with the additional function of symbolizing national culture…

By definition, the conditions required for the evolution of a classical music don’t exist in newly-formed societies: hence the absence of a representative tradition from South America.

I don’t understand the book’s criteria. E.g. jazz is included despite not having been created by “a quasi-priesthood of professionals” funded by “a wealthy class of connoisseurs,” and despite having been invented relatively recently, in the early 20th century.

[Read more…]

Technology forecasts from The Year 2000

In The Age of Em, Robin Hanson is pretty optimistic about our ability to forecast the long-term future:

Some say that there is little point in trying to foresee the non-immediate future. But in fact there have been many successful forecasts of this sort.

In the rest of this section, Hanson cites eight examples of forecasting success. Two of his examples of “success” are forecasts of technologies that haven’t arrived yet: atomically precise manufacturing and advanced starships. Another of his examples is The Year 2000:

A particularly accurate book in predicting the future was The Year 2000, a 1967 book by Herman Kahn and Anthony Wiener (Kahn and Wiener 1967). It accurately predicted population, was 80% correct for computer and communication technology, and 50% correct for other technology (Albright 2002).

As it happens, when I first read this paragraph I had already begun to evaluate the technology forecasts from The Year 2000 for the Open Philanthropy Project, relying on the same source Hanson did for determining which forecasts came true and which did not (Albright 2002).

However, my assessment of Kahn & Wiener’s forecasting performance is much less rosy than Hanson’s. For details, see here.

Philosophical habits of mind

In an interesting short paper from 1993, Bernard Baars and Katharine McGovern list several philosophical “habits of mind” and contrast them with typical scientific habits of mind. The philosophical habits of mind they list, somewhat paraphrased, are:

  1. A great preference for problems that have survived centuries of debate, largely intact.
  2. A tendency to set the most demanding criteria for success, rather than more achievable ones.
  3. Frequent appeal to thought experiments (rather than non-intuitional evidence) to carry the major burden of argument.
  4. More focus on rhetorical brilliance than testability.
  5. A delight in paradoxes and “impossibility proofs.”
  6. Shifting, slippery definitions.
  7. A tendency to legislate the empirical sciences.

I partially agree with this list, and would add several items of my own.

Obviously this list does not describe all of philosophy. Also, I think (English-language) philosophy as a whole has become more scientific since 1993.

Rapoport’s First Rule and Efficient Reading

Philosopher Daniel Dennett advocates following “Rapoport’s Rules” when writing critical commentary. He summarizes the first of Rapoport’s Rules this way:

You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

If you’ve read many scientific and philosophical debates, you’re aware that this rule is almost never followed. And in many cases it may be inappropriate, or not worth the cost, to follow it. But for someone like me, who spends a lot of time trying to quickly form initial impressions about the state of various scientific or philosophical debates, it can be incredibly valuable and time-saving to find a writer who follows Rapoport’s First Rule, even if I end up disagreeing with that writer’s conclusions.

One writer who, in my opinion, seems to follow Rapoport’s First Rule unusually well is Dennett’s “arch-nemesis” on the topic of consciousness, the philosopher David Chalmers. Amazingly, even Dennett seems to think that Chalmers embodies Rapoport’s First Rule. Dennett writes:

Chalmers manifestly understands the arguments [for and against type-A materialism, which is Dennett’s view]; he has put them as well and as carefully as anybody ever has… he has presented excellent versions of [the arguments for type-A materialism] himself, and failed to convince himself. I do not mind conceding that I could not have done as good a job, let alone a better job, of marshaling the grounds for type-A materialism. So why does he cling like a limpet to his property dualism?

As far as I can tell, Dennett is saying “Thanks, Chalmers, I wish I’d thought of putting the arguments for my view that way.”

And because of Chalmers’ clarity and fairness, I have found Chalmers’ writings on consciousness to be more efficiently informative than Dennett’s, even though my own current best-guesses about the nature of consciousness are much closer to Dennett’s than to Chalmers’.

Contrast this with what I find to be more typical in the consciousness literature (and in many other literatures), which is for an article’s author(s) to present as many arguments as they can think of for their own view, and downplay or mischaracterize or not-even-mention the arguments against their view.

I’ll describe one example, without naming names. I recently read two papers, each of which had a section discussing the evidence for or against the “cortex-required view,” which is the view that a cortex is required for phenomenal consciousness. (I’ll abbreviate it as “CRV.”)

The pro-CRV paper is written as though it’s a closed case that a cortex is required for consciousness, and it doesn’t cite any of the literature suggesting the opposite. Meanwhile, the anti-CRV paper is written as though it’s a closed case that a cortex isn’t required for consciousness, and it doesn’t cite any literature suggesting that it is required. Their differing passages on CRV cite literally zero of the same sources. Each paper pretends as though the entire body of literature cited by the other paper just doesn’t exist.

If you happened to read only one of these papers, you’d come away with a very skewed view of the likelihood of the cortex-required view. You might later realize how skewed that view is, but if you’re reading only a few papers on the topic in order to form an impression quickly, you might not.

So here’s one tip for digging through some literature quickly: try to find out which expert(s) on that topic, if any, seem to follow Rapoport’s First Rule — even if you don’t find their conclusions compelling.

Seeking case studies in scientific reduction and conceptual evolution

Tim Minchin once said “Every mystery ever solved has turned out to be not magic.” One thing I want to understand better is “How, exactly, has that happened in history? In particular, how have our naive pre-scientific concepts evolved in response to, or been eliminated by, scientific progress?”

Examples: What is the detailed story of how “water” came to be identified with H2O? How did our concept of “heat” evolve over time, including e.g. when we split it off from our concept of “temperature”? What is the detailed story of how “life” came to be identified with a large set of interacting processes, with unclear edge cases (such as viruses) decided only by convention? What is the detailed story of how “soul” was eliminated from our scientific ontology, rather than being remapped onto something “conceptually close” to our earlier conception of it, but which actually exists?

I wish there were a handbook of detailed case studies in scientific reductionism from a variety of scientific disciplines, but I haven’t found any such book yet. The documents I’ve found that are closest to what I want are perhaps:

Some semi-detailed case studies also show up in Kuhn, Feyerabend, etc. but they are typically buried in a mass of more theoretical discussion. I’d prefer to read histories that focus on the historical developments.

Got any such case studies, or collections of case studies, to recommend?

The Big Picture

Sean Carroll’s The Big Picture is a pretty decent “worldview naturalism 101” book.

In case there’s a 2nd edition in the future, and in case Carroll cares about the opinions of a professional dilettante (aka a generalist research analyst without even a bachelor’s degree), here are my requests for the 2nd edition:

  • I think Carroll is too quick to say which physicalist approach to phenomenal consciousness is correct, and doesn’t present alternate approaches as compellingly as he could (before explaining why he rejects them). (See especially chs. 41-42.)
  • In the chapter on death, I wish Carroll had acknowledged that neither physics nor naturalism requires that we live lives as short as we now do, and that there are speculative future technological capabilities that might allow future humans (or perhaps some now living) to live very long lives (albeit not infinitely long lives).
  • I wish Carroll had mentioned Tegmark levels, maybe in chs. 25 or 36.

Tetlock wants suggestions for strong AI signposts

In my 2013 article on strong AI forecasting, I made several suggestions for how to do better at forecasting strong AI, including this suggestion quoted from Phil Tetlock, arguably the leading forecasting researcher in the world:

Signposting the future: Thinking through specific scenarios can be useful if those scenarios “come with clear diagnostic signposts that policymakers can use to gauge whether they are moving toward or away from one scenario or another… Falsifiable hypotheses bring high-flying scenario abstractions back to Earth.”

Tetlock hadn’t mentioned strong AI at the time, but now it turns out he wants suggestions for strong AI signposts that could be forecast on GJOpen, the forecasting tournament platform.

Specifying crisply formulated signpost questions is not easy. If you come up with some candidates, consider posting them in the comments below. After a while, I will collect them all together and send them to Tetlock. (I figure that’s probably better than a bunch of different people sending Tetlock individual emails with overlapping suggestions.)

Tetlock’s framework for thinking about such signposts is described in Superforecasting:

In the spring of 2013 I met with Paul Saffo, a Silicon Valley futurist and scenario consultant. Another unnerving crisis was brewing on the Korean peninsula, so when I sketched the forecasting tournament for Saffo, I mentioned a question IARPA had asked: Will North Korea “attempt to launch a multistage rocket between 7 January 2013 and 1 September 2013?” Saffo thought it was trivial. A few colonels in the Pentagon might be interested, he said, but it’s not the question most people would ask. “The more fundamental question is ‘How does this all turn out?’ ” he said. “That’s a much more challenging question.”

So we confront a dilemma. What matters is the big question, but the big question can’t be scored. The little question doesn’t matter but it can be scored, so the IARPA tournament went with it. You could say we were so hell-bent on looking scientific that we counted what doesn’t count.

That is unfair. The questions in the tournament had been screened by experts to be both difficult and relevant to active problems on the desks of intelligence analysts. But it is fair to say these questions are more narrowly focused than the big questions we would all love to answer, like “How does this all turn out?” Do we really have to choose between posing big and important questions that can’t be scored or small and less important questions that can be? That’s unsatisfying. But there is a way out of the box.

Implicit within Paul Saffo’s “How does this all turn out?” question were the recent events that had worsened the conflict on the Korean peninsula. North Korea launched a rocket, in violation of a UN Security Council resolution. It conducted a new nuclear test. It renounced the 1953 armistice with South Korea. It launched a cyber attack on South Korea, severed the hotline between the two governments, and threatened a nuclear attack on the United States. Seen that way, it’s obvious that the big question is composed of many small questions. One is “Will North Korea test a rocket?” If it does, it will escalate the conflict a little. If it doesn’t, it could cool things down a little. That one tiny question doesn’t nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question. Will North Korea conduct another nuclear test? Will it rebuff diplomatic talks on its nuclear program? Will it fire artillery at South Korea? Will a North Korean ship fire on a South Korean ship? The answers are cumulative. The more yeses, the likelier the answer to the big question is “This is going to end badly.”

I call this Bayesian question clustering because of its family resemblance to the Bayesian updating discussed in chapter 7. Another way to think of it is to imagine a painter using the technique called pointillism. It consists of dabbing tiny dots on the canvas, nothing more. Each dot alone adds little. But as the dots collect, patterns emerge. With enough dots, an artist can produce anything from a vivid portrait to a sweeping landscape.

There were question clusters in the IARPA tournament, but they arose more as a consequence of events than a diagnostic strategy. In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.
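Tetlock’s “Bayesian question clustering” can be sketched as simple odds-form Bayesian updating, with each small yes/no question contributing one likelihood ratio toward the big question. The specific numbers below are my own made-up illustration, not Tetlock’s:

```python
# Toy sketch of Bayesian question clustering: each small question's
# answer carries a likelihood ratio favoring "this ends badly", and
# the answers cumulatively update the odds on the big question.

def update_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by one likelihood ratio per observed answer."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# Hypothetical numbers: 1:1 prior odds; three "yes" answers (rocket
# test, nuclear test, rebuffed talks) each favor "ends badly" 2:1,
# and one "no" answer (no artillery fire) favors it 1:2.
prior = 1.0
answers = [2.0, 2.0, 2.0, 0.5]
posterior_odds = update_odds(prior, answers)
print(odds_to_prob(posterior_odds))   # 4:1 odds -> probability 0.8
```

Each “dot” moves the estimate only a little, but the product of many small updates can move the big-question probability a lot — the pointillism Tetlock describes.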

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Time to proof for well-specified problems

How much time usually elapses between when a technical problem is posed and when it is solved? How much effort is usually required? Which variables most predict how much time and effort will be required to solve a technical problem?

The main paper I’ve seen on this is Hisano & Sornette (2013). Their method was to start with Wikipedia’s List of conjectures and then track down the year each conjecture was first stated and the year it was solved (or, whether it remains unsolved). They were unable to determine exact-year values for 16 conjectures, leaving them with a dataset of 144 conjectures, of which 60 were solved as of January 2012, with 84 still unsolved. The time between first conjecture statement and first solution is called “time to proof.”

For the purposes of finding possible data-generating models that fit the data described above, they assume the average productivity per mathematician is constant throughout their career (they didn’t try to collect more specific data), and they assume the number of active mathematicians tracks with total human population — i.e., roughly exponential growth over the time period covered by these conjectures and proofs (because again, they didn’t try to collect more specific data).

I didn’t try to understand in detail how their model works or how reasonable it is, but as far as I understand it, here’s what they found:

  • Since 1850, the number of new conjectures (that ended up being listed on Wikipedia) has tripled every 55 years. This is close to the average growth rate of total human population over the same time period.
  • Given the incompleteness of the data and the (assumed) approximate exponential growth of the mathematician population, they can’t say anything confident about the data-generating model, and therefore basically fall back on Occam: “we could not reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset (translating into an average waiting time to proof of 100 years).”
  • They expect the Wikipedia dataset severely undersamples “the many conjectures whose time-to-proof is in the range of years to a few decades.”
  • They use their model to answer the question that prompted the paper, which was about the probability that “P vs. NP” will be solved by 2024. Their model says there’s a 41.3% chance of that, which intuitively seems high to me.
  • They make some obvious caveats to all this: (1) the content of the conjecture matters for how many mathematician-hours are devoted to solving it, and how quickly they are devoted; (2) to at least a small degree, the notion of “proof” has shifted over time, e.g. the first proof of the four-color theorem still has not been checked from start to finish by humans, and is mostly just assumed to be correct; (3) some famous conjectures might be undecidable, leaving some probability mass for time-to-proof at infinity.
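The arithmetic behind the first two bullets is easy to reproduce. A minimal sketch of the constant-rate model (my own illustration; note that this simple memoryless model by itself does not reproduce their 41.3% P-vs-NP figure, which comes from their fuller model incorporating mathematician-population growth):

```python
import math

# Growth rate implied by "the number of new conjectures has tripled
# every 55 years": solve g**55 == 3 for the annual factor g.
annual_growth = 3 ** (1 / 55)          # ~1.020, i.e. ~2% per year

# Their simplest surviving model: a constant exponential proof rate
# of 0.01/year, i.e. time to proof ~ Exponential(rate=0.01).
RATE = 0.01                            # proofs per year
mean_time_to_proof = 1 / RATE          # 100 years, as quoted

def p_proved_within(years, rate=RATE):
    """P(time to proof <= years) under the exponential model."""
    return 1 - math.exp(-rate * years)

print(f"annual conjecture growth factor: {annual_growth:.4f}")
print(f"mean time to proof: {mean_time_to_proof:.0f} years")
print(f"P(proved within 10 years):  {p_proved_within(10):.3f}")
print(f"P(proved within 100 years): {p_proved_within(100):.3f}")
```

Under this model a century-old open conjecture is unremarkable: even after 100 years there is still a ~37% chance it remains unsolved.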

What can we conclude from this?

Not much. Sometimes crisply-posed technical problems are solved quickly, sometimes they take many years or decades to solve, sometimes they take more than a century to solve, and sometimes they are never solved, even with substantial effort being targeted at the problem.

And unfortunately, it looks like we can’t say much more than that from this study alone. As they say, their observed distribution of time to proof must be considered with major caveats. Personally, I would emphasize the likely severe undersampling of conjectures with short times-to-proof, the fact that they didn’t try to weight data points by how important the conjectures were perceived to be or how many resources went into solving them (because doing so would be very hard!), and the fact that they didn’t have enough data points (especially given the non-stationary number of mathematicians) to confirm or reject ~any of the intuitively / a priori plausible data-generating models.

Are there other good articles on “time to proof” or “time to solution” for relatively well-specified research problems, in mathematics or other fields? If you know of any, please let me know!

Reply to LeCun on AI safety

On Facebook, AI scientist Yann LeCun recently posted the following:

<not_being_really_serious>
I have said publicly on several occasions that the purported AI Apocalypse that some people seem to be worried about is extremely unlikely to happen, and if there were any risk of it happening, it wouldn’t be for another few decades in the future. Making robots that “take over the world”, Terminator style, even if we had the technology. would require a conjunction of many stupid engineering mistakes and ridiculously bad design, combined with zero regards for safety. Sort of like building a car, not just without safety belts, but also a 1000 HP engine that you can’t turn off and no brakes.

But since some people seem to be worried about it, here is an idea to reassure them: We are, even today, pretty good at building machines that have super-human intelligence for very narrow domains. You can buy a $30 toy that will beat you at chess. We have systems that can recognize obscure species of plants or breeds of dogs, systems that can answer Joepardy questions and play Go better than most humans, we can build systems that can recognize a face among millions, and your car will soon drive itself better than you can drive it. What we don’t know how to build is an artificial general intelligence (AGI). To take over the world, you would need an AGI that was specifically designed to be malevolent and unstoppable. In the unlikely event that someone builds such a malevolent AGI, what we merely need to do is build a “Narrow” AI (a specialized AI) whose only expertise and purpose is to destroy the nasty AGI. It will be much better at this than the AGI will be at defending itself against it, assuming they both have access to the same computational resources. The narrow AI will devote all its power to this one goal, while the evil AGI will have to spend some of its resources on taking over the world, or whatever it is that evil AGIs are supposed to do. Checkmate.
</not_being_really_serious>

Since LeCun has stated his skepticism about potential risks from advanced artificial intelligence in the past, I assume his “not being really serious” is meant to refer to his proposed narrow AI vs. AGI “solution,” not to his comments about risks from AGI. So, I’ll reply to his comments on risks from AGI and ignore his “not being really serious” comments about narrow AI vs. AGI.

First, LeCun says:

if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future

Yes, that’s probably right, and that’s what people like myself (former Executive Director of MIRI) and Nick Bostrom (author of Superintelligence, director of FHI) have been saying all along, as I explained here. But LeCun phrases this as though he’s disagreeing with someone.

Second, LeCun writes as though the thing people are concerned about is a malevolent AGI, even though I don’t know of anyone who is concerned about malevolent AI. The concern expressed in Superintelligence and elsewhere isn’t about AI malevolence, it’s about convergent instrumental goals that are incidentally harmful to human society. Or as AI scientist Stuart Russell put it:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k&lt;n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
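Russell’s point can be illustrated with a toy optimization (my own construction, not Russell’s code): the objective depends almost entirely on one variable, but a weak incidental dependence on the other variables — the ones “we actually care about” — is enough to drive them to the extreme ends of their allowed range.

```python
import itertools

def objective(x0, x1, x2):
    # The designer "cares" only about x0; x1 and x2 (stand-ins for
    # things we actually care about) leak into the objective weakly.
    return x0 + 1e-6 * (x1 + x2)

# Brute-force search over every variable's allowed range.
grid = [-10, -5, 0, 5, 10]
best = max(itertools.product(grid, repeat=3),
           key=lambda x: objective(*x))
print(best)   # (10, 10, 10): the incidental variables hit extremes
```

Even a vanishingly small coupling (here 1e-6) is enough: the optimizer has no reason to leave the “unconstrained” variables anywhere but at a boundary.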

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

What are the best wireless earbuds?

I listen to music >10 hrs per day, and I love the convenience of wireless earbuds. They are tiny and portable, and I can do all kinds of stuff — work on something with my hands, take on/off my jacket or my messenger bag, etc. — without getting tangled up in a cord.

So which wireless earbuds are the best? For this kind of thing I always turn first to The Wirecutter, which publishes detailed investigations of consumer products, like Consumer Reports but free and often more up-to-date.

I bought their recommended wireless earbuds a while back, when their recommendation was the Jaybird Bluebuds X. After several months I lost that pair and bought the new Wirecutter recommendation, the JLab Epic Bluetooth. Those were terrible so I returned them and bought the now-available Jaybird X2, which has been awesome so far.

So long as a pair of wireless earbuds have decent sound quality and >6 hrs battery life, the most important thing to me is low frequency of audio cutting.

See, Bluetooth is a very weak kind of signal. It can’t really pass through your body, for example. That’s why it uses so little battery power, which is important for tiny things like wireless earbuds. As a result, I got fairly frequent audio cutting when trying to play music from my phone in my pants pocket to my Jaybird Bluebuds X. After some experimentation, I learned that audio cutting was less frequent if my phone was in my rear pocket, on the same side of my body as the earbuds’ Bluetooth receiver. But it still cut out maybe an average of 200 times an hour (mostly concentrated in particularly frustrating 10-minute periods with lots of cutting).

When I lost that pair and got the JLab Epic Bluetooth, I hoped that with the newer pair they’d have figured out some extra tricks to reduce audio cutting. Instead, the audio cutting was terrible. Even with my phone in the optimal pants pocket, there was usually near-constant audio cutting, maybe about 2000 times an hour on average. Moreover, when I used them while reclining in bed, I would get lots of audio cutting whenever my neck was pressed up against my pillow! So, pretty useless. I returned them to Amazon for a refund.

I replaced this pair with The Wirecutter’s 2nd choice, the Jaybird X2. So far these have been fantastic. In my first ~15 hours of using them I’ve gotten exactly two split-second audio cuts.

So if you want to make the leap to wireless earbuds, I recommend the Jaybird X2. Though if you don’t mind waiting, the Jaybird X3 and Jaybird Freedom are both coming out this spring, and they might be even better.

One final note: I got my last two pairs of wireless earbuds in white so that others can see I’m wearing them. With my original black Bluebuds X, people would sometimes talk at me for >30 seconds without realizing I couldn’t hear them because I had music in my ears.

MarginNote: the only iPhone app that lets you annotate both PDFs and epub files

As far as I can tell, MarginNote is the only iPhone app that lets you annotate &amp; highlight both PDFs and epub files, and sync those annotations to your computer. And by “PDFs and epub files” I basically mean “all text files,” since Calibre and other apps can convert almost any text file into an epub (PDFs with tables and images being the main exception). (The Kindle iPhone app can annotate text files, but can’t sync those annotations anywhere unless you bought the text directly from Amazon.)

This is important for people who like to read nonfiction “on the go,” like me — and plausibly some of my readers, so I figured I’d share my discovery.

When will videogame writing improve?

The best plays and films have had great writing for a long time. The best TV shows have had great writing for about a decade now. But the writing in the best videogames is still cringe-inducingly awful. This is despite the fact that videogame blockbusters regularly have production budgets of $50M or more. When will videogames hit their “golden age” (at least, for writing)?

My favorite kind of music

I think I’ve finally realized that I have a favorite kind of music, though unfortunately it doesn’t have a genre name, and it cuts across many major musical traditions — Western classical, jazz, rock, electronica, and possibly others.

I tend to love music that:

  1. Is primarily tonal but uses dissonance for effective contrast. (The Beatles are too tonal; Arnold Schoenberg and Cecil Taylor are too atonal; Igor Stravinsky and Charles Mingus are just right.)
  2. Is obsessively composed, though potentially with substantial improvisation within the obsessively composed structure. (Coleman’s Free Jazz is too free. Amogh Symphony’s Vectorscan is innovative and complex but doesn’t sound like they tried very hard to get the compositional details right. The Rite of Spring and Chiastic Slide and even Karma are great.)
  3. Tries to be as emotionally affecting as possible, though this may include passages of contrastingly less-emotional music. (Anthony Braxton and Brian Ferneyhough are too cold and anti-emotional. Rich Woodson shifts around too quickly to ever build up much emotional “momentum.” Master of Puppets and Escalator Over the Hill and Tabula Rasa are great.)
  4. Is boredom-resistant by being fairly complex or by being long and subtly-evolving enough that I don’t get bored of it quickly. (The Beatles are too short and simple — yes, including their later work. The Soft Machine is satisfyingly complex and varied. The minimalists and Godspeed You! Black Emperor are often simple and repetitive, but their pieces are long enough and subtly-evolving enough that I don’t get bored of them.)

Property #2, I should mention, is pretty similar to Holden Karnofsky’s notion of “awe-inspiring” music. Via email, he explained:

One of the emotions I would like to experience is awe … A piece of music might be great because the artists got lucky and captured a moment, or because it’s just so insane that I can’t find anything else like it, or because I have an understanding that it was the first thing ever to do X, or because it just has that one weird sound that is so cool, but none of those make me go “Wow, this artist is awesome. I am in awe of them. I feel like the best parts of this are things they did on purpose, by thinking of them, by a combination of intelligence and sweat that makes me want to give them a high five. I really respect them for their achievement. I feel like if I had done this I would feel true pride that I had used the full extent of my abilities to do something that really required them.”

It’s no accident that most of the things that do this for me are “epic” in some way and usually took at least a solid year of someone’s life, if not 20 years, to create.

To illustrate further what I mean by each property, here’s how I would rate several musical works on each property:

| Work | Tonal w/ dissonance? | Obsessively composed? | Highly emotional? | Boredom-resistant? |
|---|---|---|---|---|
| Mingus, The Black Saint and the Sinner Lady | Yes | Yes | Yes | Yes, complex |
| Stravinsky, The Rite of Spring | Yes | Yes | Yes | Yes, complex |
| The Soft Machine, Third | Yes | Yes | Yes | Yes, complex |
| Schulze, Irrlicht | Yes | I think so? | Yes | Yes, slowly-evolving |
| Adams, Harmonielehre | Yes | Yes | Yes | Yes, complex |
| The Beatles, Sgt. Pepper | Not enough dissonance | Yes | Yes | No |
| Coleman, Free Jazz | Yes | Not really | Sometimes | Yes, complex |
| Amogh Symphony, Vectorscan | Yes | Not really | Yes | Yes, complex |
| Stockhausen, Licht cycle | Too dissonant | Yes | Not often | Yes, complex |
| Autechre, Chiastic Slide | Yes | Yes | Yes | Yes, complex |
| Anthony Braxton, For Four Orchestras | Too dissonant | Yes | No | Yes, complex |

Musical shiver moments, 2015 edition

Back in 2004, I wrote a list of (what I now call) “musical shiver moments.” A musical shiver moment is a moment in a musical track that hits you with special emotional force (perhaps sending a shiver down your spine). It can be the climax of a pop song, or the beginning of a catchy riff, or a particularly well-conceived mood shift, etc.

A classic example is the moment the drums finally enter in Phil Collins’ “In the Air Tonight.” Another is the chord shift for the final performance of the chorus in Whitney Houston’s “I Will Always Love You.”

(Note that for most of these shiver moments to have their impact, you need to listen to all or most of the track up to that point, first. You can’t just jump right to the shiver moment.)

It’s been over a decade since I made my original list. Here are a few more I’ve discovered since then:

  • “Solo begins” – Carla Bley – Escalator Over the Hill: Hotel Overture – 7:45
  • “The world crumbles” – Arvo Pärt – Tabula Rasa: Ludus – 7:20
  • “I knew nothing of the horses” – Scott Walker – Tilt: Farmer in the City – 5:22
  • “The riff enters” – Justice – Cross: Genesis – 0:38
  • “Desperate cry” – Osvaldo Golijov – The Dreams and Prayers of Isaac the Blind: Agitato – 7:00
  • “Sudden slices” – Klaus Schulze – Irrlicht: Satz Ebene – 9:30
  • “The theme enters” – John Adams – Grand Pianola Music: On the Dominant Divide – 2:20
  • “Swelling” – M83 – Hurry Up We’re Dreaming: My Tears Are Becoming a Sea – 1:11
  • “The sweet” – Anna von Hausswolff – Ceremony: Red Sun – 2:10
  • “Drums enter” – The Shining – In the Kingdom of Kitsch You Will Be a Monster: Goretex Weather Report – 1:05
  • “Entrance” – Ryan Power – Identity Picks: Sweetheart – 0:05
  • “Tone added” – Jon Hopkins – Immunity: We Disappear – 2:20
  • “Verse 2 begins” – The Fiery Furnaces – EP: Here Comes the Summer – 1:30
  • “Tonight” – Frank Ocean – channel ORANGE: Pyramids – 5:22
  • “Electronic instruments solo” – James Blake – James Blake: I Never Learnt to Share – 3:40
  • “Guitar solo peaks” – Janelle Monae – The ArchAndroid: Cold War – 2:11
  • “Surprising transition” – Kanye West – My Beautiful Dark Twisted Fantasy: Lost in the World – 0:59
  • “Soprano rising” – Henryk Górecki – Symphony No. 3: 1st movement – 15:57
  • “New instrument enters” – Fuck Buttons – Tarot Sport: Surf Solar – 5:18
  • “Into the final stretch” – Lindstrøm – Where You Go I Go Too: Where You Go I Go Too – 22:46
  • “New instrument” – Modeselektor – Happy Birthday!: Sucker Pin – 3:10
  • “Rising” – Glasvegas – Glasvegas: Ice Cream Van – 3:30
  • “Quiet after the storm” – Howard Shore – The Fellowship of the Ring: The Bridge of Khazad Dum – 4:57
  • “Finale” – John Adams – Harmonielehre: Part I – 17:01
  • “Chorus” – Phantom Planet – Phantom Planet: Knowitall – 1:06
  • “Suddenly, a groove” – Herbie Hancock – Crossings: Sleeping Giant – 11:09
  • “You thought this track couldn’t get any more epic. You were wrong.” – Godspeed You! Black Emperor – Allelujah! Don’t Bend! Ascend!: We Drift Like Worried Fire – 18:48
  • “One of my favorite melodies, 2nd time” – Jean Sibelius – Symphony No. 5: 1st movement – 1:55

(The time markings for the classical pieces will be off for some performances/recordings, naturally.)

What are some of your musical shiver moments?

Three types of nonfiction books I read

I realized recently that when I want to learn about a subject, I mentally group the available books into three categories.

I’ll call the first category “convincing.” This is the most useful kind of book for me to read on a topic, but for most topics, no such book exists. Many basic textbooks on the “hard” sciences (e.g. “settled” physics and chemistry) and the “formal” sciences (e.g. “settled” math, statistics, and computer science) count. In the softer sciences (including e.g. history), I know of very few books with the intellectual honesty and epistemic rigor to be convincing (to me) on their own. David Roodman’s book on microfinance, Due Diligence, is the only example that comes to mind as I write this.

Don’t get me wrong: I think we can learn a lot from studying softer sciences, but rarely is a single book on the softer sciences written in such a way as to be convincing to me, unless I know the topic well already.

I think of my 2nd category as “raw data.” These books make a good case that the data they present were collected and presented in a fairly reasonable way, and I find it useful to know what the raw data are, but if and when the book attempts to persuade me of non-obvious causal hypotheses, I find the book illuminating but unconvincing (on its own). Some examples:

Finally, my 3rd category for nonfiction is “food for thought.” Besides being unconvincing about non-obvious causal inferences, these books also fail to make a good case that the data supporting their arguments were collected and presented in a reasonable way. So what I get from them is just some basic terminology, and some hypotheses and arguments and stories I didn’t know about before. This category includes the vast majority of all non-fiction, e.g.:

My guess is that I’m more skeptical than most heavy readers of non-fiction, including most scientists. I’m sure I’ll blog more in the future about why.

Some 2016 movies I’m looking forward to

I’m only counting films first released in 2016, according to IMDb. In descending order of how confident I am that I’ll rate each one as “really liked” or “loved”:

  1. Coen brothers, Hail, Caesar!
  2. Stanton, Finding Dory
  3. Linklater, Everybody Wants Some
  4. Nichols, Midnight Special
  5. Villeneuve, Story of Your Life
  6. Dardenne brothers, The Unknown Girl
  7. Farhadi, The Salesman
  8. Scorsese, Silence

For all other movies coming out in 2016 that I’ve seen mentioned, I’m <70% confident I’ll rate them as “really liked” or “loved.”

One of my favorite melodies, rediscovered

There are about a dozen melodies that I find myself humming and whistling without realizing it. One is “Yellow Submarine.” Another begins at about 2:20 in Adams’ “On the Dominant Divide.”

Another is one that I’ve been humming for years but couldn’t remember where I had heard it.

Well, today, I finally stumbled into that melody once again! It turns out it’s the melody that begins at about 1:09 into the 3rd movement of Sibelius’ 5th symphony.

Ahhhhhhhh. So good.

If you’re an “AI safety lurker,” now would be a good time to de-lurk

Recently, the study of potential risks from advanced artificial intelligence has attracted substantial new funding, prompting new job openings at e.g. Oxford University and (in the near future) at Cambridge University, Imperial College London, and UC Berkeley.

This is the dawn of a new field. It’s important to fill these roles with strong candidates. The trouble is, it’s hard to find strong candidates at the dawn of a new field, because universities haven’t yet begun to train a steady flow of new experts on the topic. There is no “long-term AI safety” program for graduate students anywhere in the world.

Right now the field is pretty small, and the people I’ve spoken to (including e.g. at Oxford) seem to agree that it will be a challenge to fill these roles with candidates they already know about. Oxford has already re-posted one position, because no suitable candidates were found via the original posting.

So if you’ve developed some informal expertise on the topic — e.g. by reading books, papers, and online discussions — but you are not already known to the folks at Oxford, Cambridge, FLI, or MIRI, now would be an especially good time to de-lurk and say “I don’t know whether I’m qualified to help, and I’m not sure there’s a package of salary, benefits, and reasons that would tempt me away from what I’m doing now, but I want to at least let you know that I exist, I care about this issue, and I have at least some relevant skills and knowledge.”

Maybe you’ll turn out not to be a good candidate for any of these roles. Maybe you’ll learn the details and decide you’re not interested. But if you don’t let us know you exist, neither of those things can even begin to happen, and these important roles at the dawn of a new field will be less likely to be filled with strong candidates.

I’m especially passionate about de-lurking of this sort because when I first learned about MIRI, I just assumed I wasn’t qualified to help out, and wouldn’t want to, anyway. But after speaking to some folks at MIRI, it turned out I really could help out, and I’m glad I did. (I was MIRI’s Executive Director for ~3.5 years.)

So if you’ve been reading and thinking about long-term AI safety issues for a while now, and you have some expertise in computer science, AI, analytic/formal philosophy, mathematics, statistics, policy, risk analysis, forecasting, or economics, and you’re not already in contact with the people at the organizations I named above, please step forward and tell us you exist.

UPDATE Jan. 2, 2016: At this point in the original post, I recommended that people de-lurk by emailing me or by commenting below. However, I was contacted by far more people than I expected (100+), so I had to reply to everyone (on Dec. 19th) with a form email instead. In that email I thanked everyone for contacting me as I had requested, apologized for not being able to respond individually, and made the following request:

If you think you might be interested in a job related to long-term AI safety either now or in the next couple years, please fill out this 3-question Google form, which is a lot easier than filling out any of the posted job applications. This will make it much easier for the groups that are hiring to skim through your information and decide which people they want to contact and learn more about.

Everyone who contacted/contacts me after Dec. 19th will instead receive a link to this section of this blog post. If I’ve linked you here, please consider filling out the 3-question Google form above.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

My biggest complaint about Last.fm

I use Last.fm to track what music I listen to. Unfortunately, it’s not very accurate.

The first problem is that it doesn’t track music listened to on most online services (e.g. YouTube, Bandcamp). But I can’t really complain about that, since I just discovered there’s an app for that. Its YouTube support is shaky, though, I assume because it’s hard to tell which videos are music tracks and which aren’t.

A bigger problem for me is that Last.fm counts up what I listen to by counting tracks played rather than time played. So if I listen to a punk band for one hour, and then I listen to Miles Davis for one hour, Last.fm will make it look as though I like the punk band 10x more than I like Miles Davis, because the punk band writes 3-minute tracks and Miles Davis records 30-minute tracks.
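The arithmetic behind this complaint is easy to make concrete. Here is a minimal sketch (with a made-up scrobble log and hypothetical track lengths, not Last.fm’s actual API) comparing a track-count stat to a time-weighted one:

```python
# Each scrobble records (artist, track_length_in_minutes).
# One hour of a punk band's 3-minute tracks (20 plays) vs.
# one hour of Miles Davis's 30-minute tracks (2 plays):
scrobbles = [("Punk Band", 3)] * 20 + [("Miles Davis", 30)] * 2

def play_counts(log):
    """Last.fm-style stat: number of tracks played per artist."""
    counts = {}
    for artist, _ in log:
        counts[artist] = counts.get(artist, 0) + 1
    return counts

def minutes_played(log):
    """Time-weighted stat: total minutes listened per artist."""
    totals = {}
    for artist, length in log:
        totals[artist] = totals.get(artist, 0) + length
    return totals

print(play_counts(scrobbles))     # {'Punk Band': 20, 'Miles Davis': 2} — a 10x gap
print(minutes_played(scrobbles))  # {'Punk Band': 60, 'Miles Davis': 60} — actually equal
```

Same hour of listening either way; only the time-weighted version reflects it.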

A comparison of Mac and cloud programs for PDF rich text extraction

I like reading things via the Kindle app on my phone, because then I can read from anywhere. Unfortunately, most of what I want to read is in PDF format, so the text can’t “reflow” on my phone’s small screen like a normal ebook does. PDF text extraction programs aim to solve this problem by extracting the text (and in some cases, other elements) from a PDF and exporting it to a format that allows text reflow, for example .docx or .epub.

Which PDF text extraction program is best? I couldn’t find any credible comparisons, so I decided to do my own.

My criteria were:

  1. The program must run on Mac OS X or run in the cloud.
  2. It must be free or have a free trial available, so I can run this test without spending hundreds of dollars.
  3. It must be easy to use. If I have to install special packages or tweak environment variables to run the program, it doesn’t qualify.
  4. It must preserve images, tables, and equations amidst the text, since the documents I want to read often include important charts, tables, and equations. (It’s fine if equations and tables are simply handled as images.)
  5. It must be able to handle multi-column pages.
  6. It must work with English, but I don’t care about other languages because I can’t read them anyway.
  7. I don’t care that much about final file size or how long the conversion takes, so long as the program doesn’t crash on 1 out of every 10 attempts and doesn’t create crazy 200MB files or something like that.

To run my test, I assembled a gauntlet of 16 PDFs of the sort I often read, including several PDFs from journal websites, a paper from arXiv, and multiple scanned-and-OCRed academic book chapters.

A quick search turned up way too many Mac or cloud-based programs to test, so I decided to focus on a few that were from major companies or were particularly easy to use.

[Read more…]