The silly history of spinach

From Arbesman’s The Half-Life of Facts:

One of the strangest examples of the spread of error is related to an article in the British Medical Journal from 1981. In it, the immunohematologist Terry Hamblin discusses incorrect medical information, including a wonderful story about spinach. He details how, due to a typo, the amount of iron in spinach was thought to be ten times higher than it actually is. While there are only 3.5 milligrams of iron in a 100-gram serving of spinach, the accepted fact became that spinach contained 35 milligrams of iron. Hamblin argues that German scientists debunked this in the 1930s, but the misinformation continued to spread far and wide.

According to Hamblin, the spread of this mistake even led to spinach becoming Popeye the Sailor’s food choice. When Popeye was created, it was recommended he eat spinach for his strength, due to its vaunted iron-based health properties.

This wonderful case of a typo that led to so much incorrect thinking was taken up for decades as a delightful, and somewhat paradigmatic, example of how wrong information could spread widely. The trouble is, the story itself isn’t correct.

While the amount of iron in spinach did seem to be incorrectly reported in the nineteenth century, it was likely due to a confusion between iron oxide—a related chemical—and iron, or contamination in the experiments, rather than a typographical error. The error was corrected relatively rapidly, over the course of years, rather than over many decades.

Mike Sutton, a reader in criminology at Nottingham Trent University, debunked the entire original story several years ago through a careful examination of the literature. He even discovered that Popeye seems to have eaten spinach not for its supposed high quantities of iron, but rather due to vitamin A. While the truth behind the myth is still being excavated, this misinformation — the myth of the error — from over thirty years ago continues to spread.

Time to proof for well-specified problems

How much time usually elapses between when a technical problem is posed and when it is solved? How much effort is usually required? Which variables most predict how much time and effort will be required to solve a technical problem?

The main paper I’ve seen on this is Hisano & Sornette (2013).1 Their method was to start with Wikipedia’s List of conjectures and then track down the year each conjecture was first stated and the year it was solved (or, whether it remains unsolved). They were unable to determine exact-year values for 16 conjectures, leaving them with a dataset of 144 conjectures, of which 60 were solved as of January 2012, with 84 still unsolved. The time between first conjecture statement and first solution is called “time to proof.”

For the purposes of finding possible data-generating models that fit the data described above, they assume the average productivity per mathematician is constant throughout their career (they didn’t try to collect more specific data), and they assume the number of active mathematicians tracks with total human population — i.e., roughly exponential growth over the time period covered by these conjectures and proofs (because again, they didn’t try to collect more specific data).

I didn’t try to understand in detail how their model works or how reasonable it is, but as far as I understand it, here’s what they found:

  • Since 1850, the number of new conjectures (that ended up being listed on Wikipedia) has tripled every 55 years. This is close to the average growth rate of total human population over the same time period.
  • Given the incompleteness of the data and the (assumed) approximate exponential growth of the mathematician population, they can’t say anything confident about the data-generating model, and therefore basically fall back on Occam: “we could not reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset (translating into an average waiting time to proof of 100 years).”
  • They expect the Wikipedia dataset severely undersamples “the many conjectures whose time-to-proof is in the range of years to a few decades.”
  • They use their model to answer the question that prompted the paper, which was about the probability that “P vs. NP” will be solved by 2024. Their model says there’s a 41.3% chance of that, which intuitively seems high to me.
  • They make some obvious caveats to all this: (1) the content of the conjecture matters for how many mathematician-hours are devoted to solving it, and how quickly they are devoted; (2) to at least a small degree, the notion of “proof” has shifted over time, e.g. the first proof of the four-color theorem still has not been checked from start to finish by humans, and is mostly just assumed to be correct; (3) some famous conjectures might be undecidable, leaving some probability mass for time-to-proof at infinity.
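The simple model they fall back on (second bullet) is just an exponential waiting-time distribution, which is easy to sketch. Here's a minimal illustration of what a constant proof rate of 0.01/year implies; note that their actual answer for “P vs. NP” comes from a richer model that accounts for the growing mathematician population, so this toy version does not reproduce the 41.3% figure:

```python
import math

# Toy sketch of the simplest model Hisano & Sornette could not reject:
# a constant hazard rate of lambda = 0.01 proofs per year.
LAMBDA = 0.01  # their reported rate (proofs/year)

def prob_proved_within(t_years, lam=LAMBDA):
    """P(time to proof <= t) under an exponential waiting-time model."""
    return 1.0 - math.exp(-lam * t_years)

mean_wait = 1.0 / LAMBDA          # expected waiting time: 100 years
median_wait = math.log(2) / LAMBDA  # median: ~69 years
```

Under this model, only about 63% of conjectures would be proved within the 100-year mean waiting time, since the exponential distribution is heavily right-skewed.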

What can we conclude from this?

Not much. Sometimes crisply-posed technical problems are solved quickly, sometimes they take many years or decades to solve, sometimes they take more than a century to solve, and sometimes they are never solved, even with substantial effort being targeted at the problem.2

And unfortunately, it looks like we can’t say much more than that from this study alone. As they say, their observed distribution of time to proof must be considered with major caveats. Personally, I would emphasize the likely severe undersampling of conjectures with short times-to-proof, the fact that they didn’t try to weight data points by how important the conjectures were perceived to be or how many resources went into solving them (because doing so would be very hard!), and the fact that they didn’t have enough data points (especially given the non-stationary number of mathematicians) to confirm or reject ~any of the intuitively / a priori plausible data-generating models.
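One structural feature worth making explicit: the 84 unsolved conjectures are right-censored observations, i.e. all we know is a lower bound on their time to proof. As a toy illustration (my own, not from the paper), here is how a constant proof rate would be estimated from censored data under the exponential model:

```python
# Toy illustration: maximum-likelihood estimate of a constant proof rate
# when some conjectures are still unsolved (right-censored). For the
# exponential model, the MLE is simply events divided by total exposure.
def exponential_rate_mle(solved_times, censored_times):
    """solved_times: years to proof for solved conjectures.
    censored_times: years waited so far for still-open conjectures."""
    total_exposure = sum(solved_times) + sum(censored_times)
    return len(solved_times) / total_exposure

# e.g. three conjectures solved after 10, 50, and 100 years,
# two still open after 150 and 200 years of waiting:
rate = exponential_rate_mle([10, 50, 100], [150, 200])  # 3/510, ~0.006/yr
```

Ignoring the censored observations would roughly double the estimated rate here, which is one way to see why the undersampling and censoring caveats matter so much.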

Are there other good articles3 on “time to proof” or “time to solution” for relatively well-specified research problems, in mathematics or other fields? If you know of any, please let me know!

  1. Slightly different arxiv version here. []
  2. This “substantial effort” claim isn’t in the paper, but I’m pretty sure it’s true for many of the conjectures, including many of those with time to proof of >10 years. []
  3. Besides the few that Hisano & Sornette cite, which I think are basically superseded by Hisano & Sornette. []

Geoff Hinton on long-term AI outcomes

Geoff Hinton on a show called The Agenda (starting around 9:40):

Interviewer: How many years away do you think we are from a neural network being able to do anything that a brain can do?

Hinton: …I don’t think it will happen in the next five years but beyond that it’s all a kind of fog.

Interviewer: Is there anything about this that makes you nervous?

Hinton: In the very long run, yes. I mean obviously having… [AIs] more intelligent than us is something to be nervous about. It’s not gonna happen for a long time but it is something to be nervous about.

Interviewer: What aspect of it makes you nervous?

Hinton: Will they be nice to us?

Some books I’m looking forward to, March 2016 edition

* = added this round

Books, music, etc. from February 2016



Music I most enjoyed discovering this month:


Ones I really liked, or loved:

If you want to write about intelligence explosion…

Toby Walsh has published a short new paper on the likelihood of intelligence explosion. Unfortunately, it doesn’t engage with three of the most detailed and thoughtful previous analyses on the topic.

If you want to write about the likelihood and nature of intelligence explosion, I consider the following sources required reading, in descending order of value per page (Walsh’s paper misses 2, 3, and 5):

  1. Bostrom (2014), chapter 4
  2. Yudkowsky (2013)
  3. AI Impacts’ posts on intelligence explosion: one, two (both 2015)
  4. Chalmers (2010)
  5. Hanson & Yudkowsky (2013)

There are many other sources worth reading, e.g. Hutter (2012), but they don’t make my cut as “required reading.”

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Bill Gates on AI timelines

On the latest episode of The Ezra Klein Show, Bill Gates elaborated a bit on his views about AI timelines (starting around 24:40):

Klein: I know you take… the risk of creating artificial intelligence that… ends up turning against us pretty seriously. I’m curious where you think we are in terms of creating an artificial intelligence…

Gates: Well, with robotics you have to think of three different milestones.

One is… not-highly-trained labor substitution. Driving, security guard, warehouse work, waiter, maid — things that are largely visual and physical manipulation… [for] that threshold I don’t think you’d get much disagreement that over the next 15 years that the robotic equivalents in terms of cost [and] reliability will become a substitute to those activities…

Then there’s the point at which what we think of as intelligent activities, like writing contracts or doing diagnosis or writing software code, when will the computer start to… have the capacity to work in those areas? There you’d get more disagreement… some would say 30 years, I’d be there. Some would say 60 years. Some might not even see that [happening].

Then there’s a third threshold where the intelligence involved is dramatically better than humanity as a whole, what Bostrom called a “superintelligence.” There you’re gonna get a huge range of views including people who say it won’t ever happen. Ray Kurzweil says it will happen at midnight on July 13, 2045 or something like that and that it’ll all be good. Then you have other people who say it can never happen. Then… there’s a group that I’m more among where you say… we’re not able to predict it, but it’s something that we should start thinking about. We shouldn’t restrict activities or slow things down… [but] the potential that that exists even in a 50-year timeframe [means] it’s something to be taken seriously.

But those are different thresholds, and the responses are different.

See Gates’ previous comments on AI timelines and AI risk, here.

UPDATE 07/01/2016: In this video, Gates says that achieving “human-level” AI will take “at least 5 times as long as what Ray Kurzweil says.”

Reply to LeCun on AI safety

On Facebook, AI scientist Yann LeCun recently posted the following:

I have said publicly on several occasions that the purported AI Apocalypse that some people seem to be worried about is extremely unlikely to happen, and if there were any risk of it happening, it wouldn’t be for another few decades in the future. Making robots that “take over the world”, Terminator style, even if we had the technology, would require a conjunction of many stupid engineering mistakes and ridiculously bad design, combined with zero regards for safety. Sort of like building a car, not just without safety belts, but also a 1000 HP engine that you can’t turn off and no brakes.

But since some people seem to be worried about it, here is an idea to reassure them: We are, even today, pretty good at building machines that have super-human intelligence for very narrow domains. You can buy a $30 toy that will beat you at chess. We have systems that can recognize obscure species of plants or breeds of dogs, systems that can answer Jeopardy questions and play Go better than most humans, we can build systems that can recognize a face among millions, and your car will soon drive itself better than you can drive it. What we don’t know how to build is an artificial general intelligence (AGI). To take over the world, you would need an AGI that was specifically designed to be malevolent and unstoppable. In the unlikely event that someone builds such a malevolent AGI, what we merely need to do is build a “Narrow” AI (a specialized AI) whose only expertise and purpose is to destroy the nasty AGI. It will be much better at this than the AGI will be at defending itself against it, assuming they both have access to the same computational resources. The narrow AI will devote all its power to this one goal, while the evil AGI will have to spend some of its resources on taking over the world, or whatever it is that evil AGIs are supposed to do. Checkmate.

Since LeCun has stated his skepticism about potential risks from advanced artificial intelligence in the past, I assume his “not being really serious” is meant to refer to his proposed narrow AI vs. AGI “solution,” not to his comments about risks from AGI. So, I’ll reply to his comments on risks from AGI and ignore his “not being really serious” comments about narrow AI vs. AGI.

First, LeCun says:

if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future

Yes, that’s probably right, and that’s what people like myself (former Executive Director of MIRI) and Nick Bostrom (author of Superintelligence, director of FHI) have been saying all along, as I explained here. But LeCun phrases this as though he’s disagreeing with someone.

Second, LeCun writes as though the thing people are concerned about is a malevolent AGI, even though I don’t know of anyone who is concerned about malevolent AI. The concern expressed in Superintelligence and elsewhere isn’t about AI malevolence, it’s about convergent instrumental goals that are incidentally harmful to human society. Or as AI scientist Stuart Russell put it:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.  This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
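Russell's point can be made concrete with a toy example (my own construction, not from his text): when an objective mentions only some of the variables, the optimizer will happily push the rest wherever the search takes them, including to extremes, because nothing in the objective penalizes it:

```python
# Toy version of Russell's point: a brute-force optimizer allocates a
# shared budget between 'speed' (which the objective rewards) and
# 'safety' (which the objective never mentions).
def brute_force_max(objective, budget):
    """Return the (speed, safety) allocation maximizing the objective."""
    candidates = [(s, budget - s) for s in range(budget + 1)]
    return max(candidates, key=objective)

# The objective depends only on the first variable...
result = brute_force_max(lambda alloc: alloc[0], budget=10)
# ...so the shared budget is drained entirely away from 'safety':
# result == (10, 0)
```

Nothing here is malevolent: the unconstrained variable is set to an extreme value simply because the objective is silent about it, which is the structure of the concern.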

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Books, music, etc. from January 2016



Music I most enjoyed discovering this month:


Ones I really liked, or loved:

  1. But also see e.g. these details. []

Some books I’m looking forward to, February 2016 edition

* = added this round

Sutskever on Talking Machines

The latest episode of Talking Machines features an interview with Ilya Sutskever, the research director at OpenAI. His comments on long-term AI safety in particular were (starting around 28:10):

Interviewer: There’s a part of [OpenAI’s introductory blog post] that I found particularly interesting, which says “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” So what are the reasonable questions that we should be thinking about in terms of safety now? …

Sutskever: … I think, and many people think, that full human-level AI … might perhaps be invented in some number of decades … [and] will obviously have a huge, inconceivable impact on society. That’s obvious. And when a technology will predictably have as much impact, there is nothing to lose from starting to think about the nature of this impact … and also whether there is any research that can be done today that will make this impact be more like the kind of impact we want.

The question of safety really boils down to this: …If you look at our neural networks that for example recognize images, they’re doing a pretty good job but once in a while they make errors [and it’s] hard to understand where they come from.

For example I use Google photo search to index my own photos… and it’s really accurate almost all the time, but sometimes I’ll search for a photo of a dog, let’s say, and it will find a photo [that is] clearly not a dog. Why does it make this mistake? You could say “Who cares? It’s just object recognition,” and I agree. But if you look down the line, what you’ll see is that right now we are [just beginning to] create agents, for example the Atari work of DeepMind or the robotics work of Berkeley, where you’re building a neural network that learns to control something which interacts with the world. At present, their cost functions [i.e. goal functions] are manually specified. But it… seems likely that eventually we will be building robots whose cost functions will be learned from demonstration, or from watching a YouTube video, or from the interpretation of natural text…

So now you have these really complicated cost functions that are difficult to understand, and you have a physical robot or some kind of software system which tries to optimize this cost function, and I think these are the kinds of scenarios that could be relevant for AI safety questions. Once you have a system like this, what do you need to do to be reasonably certain that it will do what you want it to do?

…because we don’t work on such systems [today], these questions may seem a bit premature, but once we start building reinforcement learning systems [which] do learn the cost function, I think this question will become much more sharply in focus. Of course it would also be nice to do theoretical research, but it’s not clear to me how it could be done.

Interviewer: So right now we have the opportunity to understand the fundamentals… and then apply them later as the research continues and grows and is able to create more powerful systems?

Sutskever: That would be the ideal case, definitely. I think it’s worth trying to do that. I think it may also be hard to do because it seems like we have such a hard time imagining [what] these future systems will look like. We can speak in general terms: Yes, there will be a cost function most likely. But how, exactly, will it be optimized? It’s a little hard to predict because if you could predict it we could just go ahead and build the systems already.

What are the best wireless earbuds?

I listen to music >10 hrs per day, and I love the convenience of wireless earbuds. They are tiny and portable, and I can do all kinds of stuff — work on something with my hands, take on/off my jacket or my messenger bag, etc. — without getting tangled up in a cord.

So which wireless earbuds are the best? For this kind of thing I always turn first to The Wirecutter, which publishes detailed investigations of consumer products, like Consumer Reports but free and often more up-to-date.

I bought their recommended wireless earbuds a while back, when their recommendation was the Jaybird Bluebuds X. After several months I lost that pair and bought the new Wirecutter recommendation, the JLab Epic Bluetooth. Those were terrible so I returned them and bought the now-available Jaybird X2, which has been awesome so far.

So long as a pair of wireless earbuds have decent sound quality and >6 hrs battery life, the most important thing to me is low frequency of audio cutting.

See, Bluetooth is a very weak kind of signal. It can’t really pass through your body, for example. That’s why it uses so little battery power, which is important for tiny things like wireless earbuds. As a result, I got fairly frequent audio cutting when trying to play music from my phone in my pants pocket to my Jaybird Bluebuds X. After some experimentation, I learned that audio cutting was less frequent if my phone was in my rear pocket, on the same side of my body as the earbuds’ Bluetooth receiver. But it still cut out maybe an average of 200 times an hour (mostly concentrated in particularly frustrating 10-minute periods with lots of cutting).

When I lost that pair and got the JLab Epic Bluetooth, I hoped that with the newer pair they’d have figured out some extra tricks to reduce audio cutting. Instead, the audio cutting was terrible. Even with my phone in the optimal pants pocket, there was usually near-constant audio cutting, maybe about 2000 times an hour on average. Moreover, when I used them while reclining in bed, I would get lots of audio cutting whenever my neck was pressed up against my pillow! So, pretty useless. I returned them to Amazon for a refund.

I replaced this pair with The Wirecutter’s 2nd choice, the Jaybird X2. So far these have been fantastic. In my first ~15 hours of using them I’ve gotten exactly two split-second audio cuts.

So if you want to make the leap to wireless earbuds, I recommend the Jaybird X2. Though if you don’t mind waiting, the Jaybird X3 and Jaybird Freedom are both coming out this spring, and they might be even better.

One final note: I got my last two pairs of wireless earbuds in white so that others can see I’m wearing them. With my original black Bluebuds X, people would sometimes talk at me for >30 seconds without realizing I couldn’t hear them because I had music in my ears.

MarginNote: the only iPhone app that lets you annotate both PDFs and epub files

As far as I can tell, MarginNote is the only iPhone app that lets you annotate & highlight both PDFs and epub files, and sync those annotations to your computer. And by “PDFs and epub files” I basically mean “all text files,” since Calibre and other apps can convert nearly any text file into an epub (the main exception being PDFs with tables and images). (The Kindle iPhone app can annotate text files, but can’t sync those annotations anywhere unless you bought the text directly from Amazon.)

This is important for people who like to read nonfiction “on the go,” like me — and plausibly some of my readers, so I figured I’d share my discovery.

The most-acclaimed new classical music of 2015

Barney Sherman’s 2015 classical music mega-meta-list is now up (part 1, part 2), pulling together the results of 64 different “best of 2015” lists from classical music critics.

Most of the selections are new performances of older works. Here, I want to highlight the contemporary classical pieces that were recorded for the first time in 2015 (and usually composed in the last few years), and that were included in 6 or more of the lists in Sherman’s analysis:

  1. Anna Thorvaldsdottir: In the Light of Air
  2. Andrew Norman: Play / Try
  3. Julia Wolfe: Anthracite Fields
  4. Various composers: Render
  5. Various composers: Clockworking
  6. John Luther Adams: The Wind in High Places
  7. John Adams: Absolute Jest

The links go to Spotify. If you’re relatively new to contemporary classical music, my guess is that Absolute Jest is the most widely accessible selection here, followed by Render.

When will videogame writing improve?

The best plays and films have had great writing for a long time. The best TV shows have had great writing for about a decade now. But the writing in the best videogames is still cringe-inducingly awful. This is despite the fact that videogame blockbusters regularly have production budgets of $50M or more. When will videogames hit their “golden age” (at least, for writing)?

My favorite kind of music

I think I’ve finally realized that I have a favorite kind of music, though unfortunately it doesn’t have a genre name, and it cuts across many major musical traditions — Western classical, jazz, rock, electronica, and possibly others.1

I tend to love music that:2

  1. Is primarily tonal but uses dissonance for effective contrast. (The Beatles are too tonal; Arnold Schoenberg and Cecil Taylor are too atonal; Igor Stravinsky and Charles Mingus are just right.)
  2. Is obsessively composed, though potentially with substantial improvisation within the obsessively composed structure. (Coleman’s Free Jazz is too free. Amogh Symphony’s Vectorscan is innovative and complex but doesn’t sound like they tried very hard to get the compositional details right. The Rite of Spring and Chiastic Slide and even Karma are great.)
  3. Tries to be as emotionally affecting as possible, though this may include passages of contrastingly less-emotional music. (Anthony Braxton and Brian Ferneyhough are too cold and anti-emotional. Rich Woodson shifts around too quickly to ever build up much emotional “momentum.” Master of Puppets and Escalator Over the Hill and Tabula Rasa are great.)
  4. Is boredom-resistant by being fairly complex or by being long and subtly-evolving enough that I don’t get bored of it quickly. (The Beatles are too short and simple — yes, including their later work. The Soft Machine is satisfyingly complex and varied. The minimalists and Godspeed You! Black Emperor are often simple and repetitive, but their pieces are long enough and subtly-evolving enough that I don’t get bored of them.)

Property #2, I should mention, is pretty similar to Holden Karnofsky’s notion of “awe-inspiring” music. Via email, he explained:

One of the emotions I would like to experience is awe … A piece of music might be great because the artists got lucky and captured a moment, or because it’s just so insane that I can’t find anything else like it, or because I have an understanding that it was the first thing ever to do X, or because it just has that one weird sound that is so cool, but none of those make me go “Wow, this artist is awesome. I am in awe of them. I feel like the best parts of this are things they did on purpose, by thinking of them, by a combination of intelligence and sweat that makes me want to give them a high five. I really respect them for their achievement. I feel like if I had done this I would feel true pride that I had used the full extent of my abilities to do something that really required them.”

It’s no accident that most of the things that do this for me are “epic” in some way and usually took at least a solid year of someone’s life, if not 20 years, to create.

To illustrate further what I mean by each property, here’s how I would rate several musical works on each property:

| Work | Tonal w/ dissonance? | Obsessively composed? | Highly emotional? | Boredom-resistant? |
| --- | --- | --- | --- | --- |
| Mingus, The Black Saint and the Sinner Lady | Yes | Yes | Yes | Yes, complex |
| Stravinsky, The Rite of Spring | Yes | Yes | Yes | Yes, complex |
| The Soft Machine, Third | Yes | Yes | Yes | Yes, complex |
| Schulze, Irrlicht | Yes | I think so? | Yes | Yes, slowly-evolving |
| Adams, Harmonielehre | Yes | Yes | Yes | Yes, complex |
| The Beatles, Sgt. Pepper | Not enough dissonance | Yes | Yes | No |
| Coleman, Free Jazz | Yes | Not really | Sometimes | Yes, complex |
| Amogh Symphony, Vectorscan | Yes | Not really | Yes | Yes, complex |
| Stockhausen, Licht cycle | Too dissonant | Yes | Not often | Yes, complex |
| Autechre, Chiastic Slide | Yes | Yes | Yes | Yes, complex |
| Anthony Braxton, For Four Orchestras | Too dissonant | Yes | No | Yes, complex |


  1. I haven’t listened to enough non-Western classical or folk musics to know whether this theory of my favorite kind of music holds up across those styles. []
  2. Note that I like and sometimes love lots of music that doesn’t fit one or more of these criteria (including e.g. Sgt. Pepper), but I think my absolute favorite pieces of music tend to have all these properties. []

Applying economics to the law for the first time

Teles (2010) quotes Douglas Baird, a Stanford Law student in the 70s and later dean of the University of Chicago Law School:

In the early seventies, people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong [because they’d ignored economics].1 And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… By the time I came along, and I wasn’t trained as economist, it was clear that… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted from a major law review was four days. I’m not Richard Posner, and few of us are. I got out of law school, and I was interested in bankruptcy law, which was inhabited by intellectual midgets… It was a complete intellectual wasteland. I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.


  1. At least, I think “because they’d ignored economics” is the intended implication, from the context in Teles (2010). []

Musical shiver moments, 2015 edition

Back in 2004, I wrote a list of (what I now call) “musical shiver moments.” A musical shiver moment is a moment in a musical track that hits you with special emotional force (perhaps sending a shiver down your spine). It can be the climax of a pop song, or the beginning of a catchy riff, or a particularly well-conceived mood shift, etc.

A classic example is the moment the drums finally enter in Phil Collins’ “In the Air Tonight.” Another is the chord shift for the final performance of the chorus in Whitney Houston’s “I Will Always Love You.”

(Note that for most of these shiver moments to have their impact, you need to listen to all or most of the track up to that point, first. You can’t just jump right to the shiver moment.)

It’s been over a decade since I made my original list. Here are a few more I’ve discovered since then:

  • “Solo begins” – Carla Bley – Escalator Over the Hill: Hotel Overture – 7:45
  • “The world crumbles” – Arvo Pärt – Tabula Rasa: Ludus – 7:20
  • “I knew nothing of the horses” – Scott Walker – Tilt: Farmer in the City – 5:22
  • “The riff enters” – Justice – Cross: Genesis – 0:38
  • “Desperate cry” – Osvaldo Golijov – The Dreams and Prayers of Isaac the Blind: Agitato – 7:00
  • “Sudden slices” – Klaus Schulze – Irrlicht: Satz Ebene – 9:30
  • “The theme enters” – John Adams – Grand Pianola Music: On the Great Divide – 2:20
  • “Swelling” – M83 – Hurry Up We’re Dreaming: My Tears Are Becoming a Sea – 1:11
  • “The sweet” – Anna von Hausswolff – Ceremony: Red Sun – 2:10
  • “Drums enter” – The Shining – In the Kingdom of Kitsch You Will Be a Monster: Goretex Weather Report – 1:05
  • “Entrance” – Ryan Power – Identity Picks: Sweetheart – 0:05
  • “Tone added” – Jon Hopkins – Immunity: We Disappear – 2:20
  • “Verse 2 begins” – The Fiery Furnaces – EP: Here Comes the Summer – 1:30
  • “Tonight” – Frank Ocean – channel ORANGE: Pyramids – 5:22
  • “Electronic instruments solo” – James Blake – James Blake: I Never Learnt to Share – 3:40
  • “Guitar solo peaks” – Janelle Monae – The ArchAndroid: Cold War – 2:11
  • “Surprising transition” – Kanye West – My Beautiful Dark Twisted Fantasy: Lost in the World – 0:59
  • “Soprano rising” – Henryk Górecki – Symphony No. 3: 1st movement – 15:57
  • “New instrument enters” – Fuck Buttons – Tarot Sport: Surf Solar – 5:18
  • “Into the final stretch” – Lindstrøm – Where You Go I Go Too: Where You Go I Go Too – 22:46
  • “New instrument” – Modeselektor – Happy Birthday!: Sucker Pin – 3:10
  • “Rising” – Glasvegas – Glasvegas: Ice Cream Van – 3:30
  • “Quiet after the storm” – Howard Shore – The Fellowship of the Ring: The Bridge of Khazad Dum – 4:57
  • “Finale” – John Adams – Harmonielehre: Part I – 17:01
  • “Chorus” – Phantom Planet – Phantom Planet: Knowitall – 1:06
  • “Suddenly, a groove” – Herbie Hancock – Crossings: Sleeping Giant – 11:09
  • “You thought this track couldn’t get any more epic. You were wrong.” – Godspeed You! Black Emperor – Allelujah! Don’t Bend! Ascend!: We Drift Like Worried Fire – 18:48
  • “One of my favorite melodies, 2nd time” – Jean Sibelius – Symphony No. 5: 3rd movement – 1:55

(The time markings for the classical pieces will be off for some performances/recordings, naturally.)

What are some of your musical shiver moments?

added after initial publication of this post:

  • “From percussion to melody” – Nils Frahm – Spaces: For / Peter / Toilet Brushes / More – 13:49
  • “Final atmospheric passage” – Dave Douglas – Dark Territory: Loom Large – 4:57
  • “One last time” – John Murphy – Adagio in D Minor: Adagio in D Minor (2012 Remaster) – 3:04
  • “Building groove” – Tonbruket – Forevergreens: First Flight of a Newbird – 3:19
  • “Is this the climax yet?” – Blanck Mass – World Eater: Rhesus Negative – 7:44
  • “Panic” & “Double time” – The Algorithm – Polymorphic Code: Panic – 3:32 & 6:51
  • “Explosion” – The Great Harry Hillman – Tilt: 354° – 2:37
  • “Voices rise” – Eskaton – 4 Visions: Ecoute – 5:52
  • “After a brief pause” – Frederik Magle – Anastasis-Messe: Tenebrae / Lux Aeterna – 2:41
  • “Solo peaks” – Phish – A Live One: Stash (Clifford Ball 1994) – 11:05 & 11:32
  • [more to come]

Discovering CRISPR

Eric Lander tells his version of the story. Here is his take — which might or might not be reasonable — on lessons learned from the story:

The most important [lesson] is that medical breakthroughs often emerge from completely unpredictable origins. The early heroes of CRISPR were not on a quest to edit the human genome—or even to study human disease. Their motivations were a mix of personal curiosity (to understand bizarre repeat sequences in salt-tolerant microbes), military exigency (to defend against biological warfare), and industrial application (to improve yogurt production).

The history also illustrates the growing role in biology of “hypothesis-free” discovery based on big data. The discovery of the CRISPR loci, their biological function, and the tracrRNA all emerged not from wet-bench experiments but from open-ended bioinformatic exploration of large-scale, often public, genomic datasets. “Hypothesis-driven” science of course remains essential, but the 21st century will see an increasing partnership between these two approaches.

It is instructive that so many of the Heroes of CRISPR did their seminal work near the very start of their scientific careers (including Mojica, Horvath, Marraffini, Charpentier, Vogel, and Zhang)—in several cases, before the age of 30. With youth often comes a willingness to take risks—on uncharted directions and seemingly obscure questions—and a drive to succeed. It’s an important reminder at a time that the median age for first grants from the NIH has crept up to 42.

Notably, too, many did their landmark work in places that some might regard as off the beaten path of science (Alicante, Spain; France’s Ministry of Defense; Danisco’s corporate labs; and Vilnius, Lithuania). And, their seminal papers were often rejected by leading journals—appearing only after considerable delay and in less prominent venues. These observations may not be a coincidence: the settings may have afforded greater freedom to pursue less trendy topics but less support about how to overcome skepticism by journals and reviewers.

Finally, the narrative underscores that scientific breakthroughs are rarely eureka moments. They are typically ensemble acts, played out over a decade or more, in which the cast becomes part of something greater than what any one of them could do alone.

Warning: some people on Twitter are saying this article is basically PR for Lander’s Broad Institute, where Feng Zhang did his CRISPR work. Zhang is currently in a patent dispute over CRISPR with Jennifer Doudna.

ETA: Doudna comments on the article. And here is a “Landergate” link list.