Books, music, etc. from February 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

If you want to write about intelligence explosion…

Toby Walsh has published a short new paper on the likelihood of intelligence explosion. Unfortunately, it doesn’t engage with three of the most detailed and thoughtful previous analyses on the topic.

If you want to write about the likelihood and nature of intelligence explosion, I consider the following sources required reading, in descending order of value per page (Walsh’s paper misses 2, 3, and 5):

  1. Bostrom (2014), chapter 4
  2. Yudkowsky (2013)
  3. AI Impacts’ posts on intelligence explosion: one, two (both 2015)
  4. Chalmers (2010)
  5. Hanson & Yudkowsky (2013)

There are many other sources worth reading, e.g. Hutter (2012), but they don’t make my cut as “required reading.”

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Bill Gates on AI timelines

On the latest episode of The Ezra Klein Show, Bill Gates elaborated a bit on his views about AI timelines (starting around 24:40):

Klein: I know you take… the risk of creating artificial intelligence that… ends up turning against us pretty seriously. I’m curious where you think we are in terms of creating an artificial intelligence…

Gates: Well, with robotics you have to think of three different milestones.

One is… not-highly-trained labor substitution. Driving, security guard, warehouse work, waiter, maid — things that are largely visual and physical manipulation… [for] that threshold I don’t think you’d get much disagreement that over the next 15 years that the robotic equivalents in terms of cost [and] reliability will become a substitute to those activities…

Then there’s the point at which what we think of as intelligent activities, like writing contracts or doing diagnosis or writing software code, when will the computer start to… have the capacity to work in those areas? There you’d get more disagreement… some would say 30 years, I’d be there. Some would say 60 years. Some might not even see that [happening].

Then there’s a third threshold where the intelligence involved is dramatically better than humanity as a whole, what Bostrom called a “superintelligence.” There you’re gonna get a huge range of views including people who say it won’t ever happen. Ray Kurzweil says it will happen at midnight on July 13, 2045 or something like that and that it’ll all be good. Then you have other people who say it can never happen. Then… there’s a group that I’m more among where you say… we’re not able to predict it, but it’s something that [we] should start thinking about. We shouldn’t restrict activities or slow things down… [but] the potential that that exists even in a 50-year timeframe [means] it’s something to be taken seriously.

But those are different thresholds, and the responses are different.

See Gates’ previous comments on AI timelines and AI risk, here.

UPDATE 07/01/2016: In this video, Gates says that achieving “human-level” AI will take “at least 5 times as long as what Ray Kurzweil says.”

Reply to LeCun on AI safety

On Facebook, AI scientist Yann LeCun recently posted the following:

<not_being_really_serious>
I have said publicly on several occasions that the purported AI Apocalypse that some people seem to be worried about is extremely unlikely to happen, and if there were any risk of it happening, it wouldn’t be for another few decades in the future. Making robots that “take over the world”, Terminator style, even if we had the technology, would require a conjunction of many stupid engineering mistakes and ridiculously bad design, combined with zero regard for safety. Sort of like building a car, not just without safety belts, but also a 1000 HP engine that you can’t turn off and no brakes.

But since some people seem to be worried about it, here is an idea to reassure them: We are, even today, pretty good at building machines that have super-human intelligence for very narrow domains. You can buy a $30 toy that will beat you at chess. We have systems that can recognize obscure species of plants or breeds of dogs, systems that can answer Jeopardy questions and play Go better than most humans, we can build systems that can recognize a face among millions, and your car will soon drive itself better than you can drive it. What we don’t know how to build is an artificial general intelligence (AGI). To take over the world, you would need an AGI that was specifically designed to be malevolent and unstoppable. In the unlikely event that someone builds such a malevolent AGI, what we merely need to do is build a “Narrow” AI (a specialized AI) whose only expertise and purpose is to destroy the nasty AGI. It will be much better at this than the AGI will be at defending itself against it, assuming they both have access to the same computational resources. The narrow AI will devote all its power to this one goal, while the evil AGI will have to spend some of its resources on taking over the world, or whatever it is that evil AGIs are supposed to do. Checkmate.
</not_being_really_serious>

Since LeCun has stated his skepticism about potential risks from advanced artificial intelligence in the past, I assume his “not being really serious” is meant to refer to his proposed narrow AI vs. AGI “solution,” not to his comments about risks from AGI. So, I’ll reply to his comments on risks from AGI and ignore his “not being really serious” comments about narrow AI vs. AGI.

First, LeCun says:

if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future

Yes, that’s probably right, and that’s what people like myself (former Executive Director of MIRI) and Nick Bostrom (author of Superintelligence, director of FHI) have been saying all along, as I explained here. But LeCun phrases this as though he’s disagreeing with someone.

Second, LeCun writes as though the thing people are concerned about is a malevolent AGI, even though I don’t know of anyone who is concerned about malevolent AI. The concern expressed in Superintelligence and elsewhere isn’t about AI malevolence; it’s about convergent instrumental goals that are incidentally harmful to human society. Or as AI scientist Stuart Russell put it:

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.  This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
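Russell’s point can be sketched numerically. Here is a minimal toy illustration of my own (not Russell’s example): the objective depends only on x, but the search runs over both x and y, so the “don’t care” variable y ends up wherever the scan happens to leave it — at an extreme of its range.

```python
# Toy illustration of optimizing a function of 2 variables where the
# objective depends on only 1 of them (my own example, not Russell's).
xs = [x / 10 for x in range(-100, 101)]  # -10.0 ... 10.0
ys = [y / 10 for y in range(-100, 101)]

def objective(x, y):
    return -(x - 3.0) ** 2  # cares only about x; y is unconstrained

# max() returns the first maximizer in scan order, so y is left wherever
# the scan happened to put it -- here, the extreme value -10.0.
best_x, best_y = max(((x, y) for y in ys for x in xs),
                     key=lambda p: objective(*p))
print(best_x, best_y)  # x lands at 3.0; y lands at -10.0, an extreme
```

If y were something we cared about (Russell’s “unconstrained variable we actually care about”), this perfectly successful optimization would still be a highly undesirable outcome.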

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Books, music, etc. from January 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

Some books I’m looking forward to, February 2016 edition

* = added this round

Sutskever on Talking Machines

The latest episode of Talking Machines features an interview with Ilya Sutskever, the research director at OpenAI. His comments on long-term AI safety in particular were (starting around 28:10):

Interviewer: There’s a part of [OpenAI’s introductory blog post] that I found particularly interesting, which says “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” So what are the reasonable questions that we should be thinking about in terms of safety now? …

Sutskever: … I think, and many people think, that full human-level AI … might perhaps be invented in some number of decades … [and] will obviously have a huge, inconceivable impact on society. That’s obvious. And when a technology will predictably have as much impact, there is nothing to lose from starting to think about the nature of this impact … and also whether there is any research that can be done today that will make this impact be more like the kind of impact we want.

The question of safety really boils down to this: …If you look at our neural networks that for example recognize images, they’re doing a pretty good job but once in a while they make errors [and it’s] hard to understand where they come from.

For example I use Google photo search to index my own photos… and it’s really accurate almost all the time, but sometimes I’ll search for a photo of a dog, let’s say, and it will find a photo [that is] clearly not a dog. Why does it make this mistake? You could say “Who cares? It’s just object recognition,” and I agree. But if you look down the line, what you’ll see is that right now we are [just beginning to] create agents, for example the Atari work of DeepMind or the robotics work of Berkeley, where you’re building a neural network that learns to control something which interacts with the world. At present, their cost functions [i.e. goal functions] are manually specified. But it… seems likely that eventually we will be building robots whose cost functions will be learned from demonstration, or from watching a YouTube video, or from the interpretation of natural text…

So now you have these really complicated cost functions that are difficult to understand, and you have a physical robot or some kind of software system which tries to optimize this cost function, and I think these are the kinds of scenarios that could be relevant for AI safety questions. Once you have a system like this, what do you need to do to be reasonably certain that it will do what you want it to do?

…because we don’t work on such systems [today], these questions may seem a bit premature, but once we start building reinforcement learning systems [which] do learn the cost function, I think this question will come much more sharply into focus. Of course it would also be nice to do theoretical research, but it’s not clear to me how it could be done.

Interviewer: So right now we have the opportunity to understand the fundamentals… and then apply them later as the research continues and grows and is able to create more powerful systems?

Sutskever: That would be the ideal case, definitely. I think it’s worth trying to do that. I think it may also be hard to do because it seems like we have such a hard time imagining [what] these future systems will look like. We can speak in general terms: Yes, there will be a cost function most likely. But how, exactly, will it be optimized? It’s a little hard to predict because if you could predict it we could just go ahead and build the systems already.

What are the best wireless earbuds?

I listen to music >10 hrs per day, and I love the convenience of wireless earbuds. They are tiny and portable, and I can do all kinds of stuff — work on something with my hands, take on/off my jacket or my messenger bag, etc. — without getting tangled up in a cord.

So which wireless earbuds are the best? For this kind of thing I always turn first to The Wirecutter, which publishes detailed investigations of consumer products, like Consumer Reports but free and often more up-to-date.

I bought their recommended wireless earbuds a while back, when their recommendation was the Jaybird Bluebuds X. After several months I lost that pair and bought the new Wirecutter recommendation, the JLab Epic Bluetooth. Those were terrible so I returned them and bought the now-available Jaybird X2, which has been awesome so far.

So long as a pair of wireless earbuds has decent sound quality and >6 hrs battery life, the most important thing to me is how rarely the audio cuts out.

See, Bluetooth is a very low-power signal (that’s why it drains so little battery, which matters for tiny devices like wireless earbuds), and as a result it can’t really pass through your body. So I got fairly frequent audio cutting when trying to play music from my phone in my pants pocket to my Jaybird Bluebuds X. After some experimentation, I learned that audio cutting was less frequent if my phone was in my rear pocket, on the same side of my body as the earbuds’ Bluetooth receiver. But it still cut out maybe an average of 200 times an hour (mostly concentrated in particularly frustrating 10-minute periods with lots of cutting).

When I lost that pair and got the JLab Epic Bluetooth, I hoped that with the newer pair they’d have figured out some extra tricks to reduce audio cutting. Instead, the audio cutting was terrible. Even with my phone in the optimal pants pocket, there was usually near-constant audio cutting, maybe about 2000 times an hour on average. Moreover, when I used them while reclining in bed, I would get lots of audio cutting whenever my neck was pressed up against my pillow! So, pretty useless. I returned them to Amazon for a refund.

I replaced this pair with The Wirecutter’s 2nd choice, the Jaybird X2. So far these have been fantastic. In my first ~15 hours of using them I’ve gotten exactly two split-second audio cuts.

So if you want to make the leap to wireless earbuds, I recommend the Jaybird X2. Though if you don’t mind waiting, the Jaybird X3 and Jaybird Freedom are both coming out this spring, and they might be even better.

One final note: I got my last two pairs of wireless earbuds in white so that others can see I’m wearing them. With my original black Bluebuds X, people would sometimes talk at me for >30 seconds without realizing I couldn’t hear them because I had music in my ears.

MarginNote: the only iPhone app that lets you annotate both PDFs and epub files

As far as I can tell, MarginNote is the only iPhone app that lets you annotate & highlight both PDFs and epub files, and sync those annotations to your computer. And by “PDFs and epub files” I basically mean “all text files,” since Calibre and other apps can convert almost any text file into an epub (the main exceptions being PDFs with tables and images). (The Kindle iPhone app can annotate text files, but can’t sync those annotations anywhere unless you bought the text directly from Amazon.)

This is important for people who like to read nonfiction “on the go,” like me — and plausibly some of my readers, so I figured I’d share my discovery.

The most-acclaimed new classical music of 2015

Barney Sherman’s 2015 classical music mega-meta-list is now up (part 1, part 2), pulling together the results of 64 different “best of 2015” lists from classical music critics.

Most of the selections are new performances of older works. Here, I want to highlight the contemporary classical pieces that were recorded for the first time in 2015 (and usually composed in the last few years), and that were included in 6 or more of the lists in Sherman’s analysis:

  1. Anna Thorvaldsdottir: In the Light of Air
  2. Andrew Norman: Play / Try
  3. Julia Wolfe: Anthracite Fields
  4. Various composers: Render
  5. Various composers: Clockworking
  6. John Luther Adams: The Wind in High Places
  7. John Adams: Absolute Jest

The links go to Spotify. If you’re relatively new to contemporary classical music, my guess is that Absolute Jest is the most widely accessible selection here, followed by Render.

When will videogame writing improve?

The best plays and films have had great writing for a long time. The best TV shows have had great writing for about a decade now. But the writing in the best videogames is still cringe-inducingly awful. This is despite the fact that videogame blockbusters regularly have production budgets of $50M or more. When will videogames hit their “golden age” (at least, for writing)?

My favorite kind of music

I think I’ve finally realized that I have a favorite kind of music, though unfortunately it doesn’t have a genre name, and it cuts across many major musical traditions — Western classical, jazz, rock, electronica, and possibly others.

I tend to love music that:

  1. Is primarily tonal but uses dissonance for effective contrast. (The Beatles are too tonal; Arnold Schoenberg and Cecil Taylor are too atonal; Igor Stravinsky and Charles Mingus are just right.)
  2. Is obsessively composed, though potentially with substantial improvisation within the obsessively composed structure. (Coleman’s Free Jazz is too free. Amogh Symphony’s Vectorscan is innovative and complex but doesn’t sound like they tried very hard to get the compositional details right. The Rite of Spring and Chiastic Slide and even Karma are great.)
  3. Tries to be as emotionally affecting as possible, though this may include passages of contrastingly less-emotional music. (Anthony Braxton and Brian Ferneyhough are too cold and anti-emotional. Rich Woodson shifts around too quickly to ever build up much emotional “momentum.” Master of Puppets and Escalator Over the Hill and Tabula Rasa are great.)
  4. Is boredom-resistant by being fairly complex, or by being long and subtly-evolving enough that I don’t get bored of it quickly. (The Beatles are too short and simple — yes, including their later work. The Soft Machine is satisfyingly complex and varied. The minimalists and Godspeed You! Black Emperor are often simple and repetitive, but their pieces are long enough and subtly-evolving enough that I don’t get bored of them.)

Property #2, I should mention, is pretty similar to Holden Karnofsky’s notion of “awe-inspiring” music. Via email, he explained:

One of the emotions I would like to experience is awe … A piece of music might be great because the artists got lucky and captured a moment, or because it’s just so insane that I can’t find anything else like it, or because I have an understanding that it was the first thing ever to do X, or because it just has that one weird sound that is so cool, but none of those make me go “Wow, this artist is awesome. I am in awe of them. I feel like the best parts of this are things they did on purpose, by thinking of them, by a combination of intelligence and sweat that makes me want to give them a high five. I really respect them for their achievement. I feel like if I had done this I would feel true pride that I had used the full extent of my abilities to do something that really required them.”

It’s no accident that most of the things that do this for me are “epic” in some way and usually took at least a solid year of someone’s life, if not 20 years, to create.

To illustrate further what I mean by each property, here’s how I would rate several musical works on each property:

| Work | Tonal w/ dissonance? | Obsessively composed? | Highly emotional? | Boredom-resistant? |
|---|---|---|---|---|
| Mingus, The Black Saint and the Sinner Lady | Yes | Yes | Yes | Yes, complex |
| Stravinsky, The Rite of Spring | Yes | Yes | Yes | Yes, complex |
| The Soft Machine, Third | Yes | Yes | Yes | Yes, complex |
| Schulze, Irrlicht | Yes | I think so? | Yes | Yes, slowly-evolving |
| Adams, Harmonielehre | Yes | Yes | Yes | Yes, complex |
| The Beatles, Sgt. Pepper | Not enough dissonance | Yes | Yes | No |
| Coleman, Free Jazz | Yes | Not really | Sometimes | Yes, complex |
| Amogh Symphony, Vectorscan | Yes | Not really | Yes | Yes, complex |
| Stockhausen, Licht cycle | Too dissonant | Yes | Not often | Yes, complex |
| Autechre, Chiastic Slide | Yes | Yes | Yes | Yes, complex |
| Anthony Braxton, For Four Orchestras | Too dissonant | Yes | No | Yes, complex |

Applying economics to the law for the first time

Teles (2010) quotes Douglas Baird, a Stanford Law student in the 70s and later dean of the University of Chicago Law School:

In the early seventies, people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong [because they’d ignored economics]. And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… By the time I came along, and I wasn’t trained as an economist, it was clear that… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted by a major law review was four days. I’m not Richard Posner, and few of us are. I got out of law school, and I was interested in bankruptcy law, which was inhabited by intellectual midgets… It was a complete intellectual wasteland. I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.

Musical shiver moments, 2015 edition

Back in 2004, I wrote a list of (what I now call) “musical shiver moments.” A musical shiver moment is a moment in a musical track that hits you with special emotional force (perhaps sending a shiver down your spine). It can be the climax of a pop song, or the beginning of a catchy riff, or a particularly well-conceived mood shift, etc.

A classic example is the moment the drums finally enter in Phil Collins’ “In the Air Tonight.” Another is the chord shift for the final performance of the chorus in Whitney Houston’s “I Will Always Love You.”

(Note that for most of these shiver moments to have their impact, you need to listen to all or most of the track up to that point, first. You can’t just jump right to the shiver moment.)

It’s been over a decade since I made my original list. Here are a few more I’ve discovered since then:

  • “Solo begins” – Carla Bley – Escalator Over the Hill: Hotel Overture – 7:45
  • “The world crumbles” – Arvo Pärt – Tabula Rasa: Ludus – 7:20
  • “I knew nothing of the horses” – Scott Walker – Tilt: Farmer in the City – 5:22
  • “The riff enters” – Justice – Cross: Genesis – 0:38
  • “Desperate cry” – Osvaldo Golijov – The Dreams and Prayers of Isaac the Blind: Agitato – 7:00
  • “Sudden slices” – Klaus Schulze – Irrlicht: Satz Ebene – 9:30
  • “The theme enters” – John Adams – Grand Pianola Music: On the Great Divide – 2:20
  • “Swelling” – M83 – Hurry Up We’re Dreaming: My Tears Are Becoming a Sea – 1:11
  • “The sweet” – Anna von Hausswolff – Ceremony: Red Sun – 2:10
  • “Drums enter” – The Shining – In the Kingdom of Kitsch You Will Be a Monster: Goretex Weather Report – 1:05
  • “Entrance” – Ryan Power – Identity Picks: Sweetheart – 0:05
  • “Tone added” – Jon Hopkins – Immunity: We Disappear – 2:20
  • “Verse 2 begins” – The Fiery Furnaces – EP: Here Comes the Summer – 1:30
  • “Tonight” – Frank Ocean – channel ORANGE: Pyramids – 5:22
  • “Electronic instruments solo” – James Blake – James Blake: I Never Learnt to Share – 3:40
  • “Guitar solo peaks” – Janelle Monae – The ArchAndroid: Cold War – 2:11
  • “Surprising transition” – Kanye West – My Beautiful Dark Twisted Fantasy: Lost in the World – 0:59
  • “Soprano rising” – Henryk Górecki – Symphony No. 3: 1st movement – 15:57
  • “New instrument enters” – Fuck Buttons – Tarot Sport: Surf Solar – 5:18
  • “Into the final stretch” – Lindstrøm – Where You Go I Go Too: Where You Go I Go Too – 22:46
  • “New instrument” – Modeselektor – Happy Birthday!: Sucker Pin – 3:10
  • “Rising” – Glasvegas – Glasvegas: Ice Cream Van – 3:30
  • “Quiet after the storm” – Howard Shore – The Fellowship of the Ring: The Bridge of Khazad Dum – 4:57
  • “Finale” – John Adams – Harmonielehre: Part I – 17:01
  • “Chorus” – Phantom Planet – Phantom Planet: Knowitall – 1:06
  • “Suddenly, a groove” – Herbie Hancock – Crossings: Sleeping Giant – 11:09
  • “You thought this track couldn’t get any more epic. You were wrong.” – Godspeed You! Black Emperor – Allelujah! Don’t Bend! Ascend!: We Drift Like Worried Fire – 18:48
  • “One of my favorite melodies, 2nd time” – Jean Sibelius – Symphony No. 5: 3rd movement – 1:55

(The time markings for the classical pieces will be off for some performances/recordings, naturally.)

What are some of your musical shiver moments?

added after initial publication of this post:

  • “From percussion to melody” – Nils Frahm – Spaces: For – Peter – Toilet Brushes – More – 13:49
  • “Final atmospheric passage” – Dave Douglas – Dark Territory: Loom Large – 4:57
  • “One last time” – John Murphy – Adagio in D Minor: Adagio in D Minor (2012 Remaster) – 3:04
  • “Building groove” – Tonbruket – Forevergreens: First Flight of a Newbird – 3:19
  • “Is this the climax yet?” – Blanck Mass – World Eater: Rhesus Negative – 7:44
  • [more to come]

Discovering CRISPR

Eric Lander tells his version of the story. Here is his take — which might or might not be reasonable — on lessons learned from the story:

The most important [lesson] is that medical breakthroughs often emerge from completely unpredictable origins. The early heroes of CRISPR were not on a quest to edit the human genome—or even to study human disease. Their motivations were a mix of personal curiosity (to understand bizarre repeat sequences in salt-tolerant microbes), military exigency (to defend against biological warfare), and industrial application (to improve yogurt production).

The history also illustrates the growing role in biology of “hypothesis-free” discovery based on big data. The discovery of the CRISPR loci, their biological function, and the tracrRNA all emerged not from wet-bench experiments but from open-ended bioinformatic exploration of large-scale, often public, genomic datasets. “Hypothesis-driven” science of course remains essential, but the 21st century will see an increasing partnership between these two approaches.

It is instructive that so many of the Heroes of CRISPR did their seminal work near the very start of their scientific careers (including Mojica, Horvath, Marraffini, Charpentier, Vogel, and Zhang)—in several cases, before the age of 30. With youth often comes a willingness to take risks—on uncharted directions and seemingly obscure questions—and a drive to succeed. It’s an important reminder at a time that the median age for first grants from the NIH has crept up to 42.

Notably, too, many did their landmark work in places that some might regard as off the beaten path of science (Alicante, Spain; France’s Ministry of Defense; Danisco’s corporate labs; and Vilnius, Lithuania). And, their seminal papers were often rejected by leading journals—appearing only after considerable delay and in less prominent venues. These observations may not be a coincidence: the settings may have afforded greater freedom to pursue less trendy topics but less support about how to overcome skepticism by journals and reviewers.

Finally, the narrative underscores that scientific breakthroughs are rarely eureka moments. They are typically ensemble acts, played out over a decade or more, in which the cast becomes part of something greater than what any one of them could do alone.

Warning: some people on Twitter are saying this article is basically PR for Lander’s Broad Institute, where Feng Zhang did his CRISPR work. Zhang is currently in a patent dispute over CRISPR with Jennifer Doudna.

ETA: Doudna comments on the article. And here is a “Landergate” link list.

Industry funding defeats transitivity

From The Philosophy of Evidence-Based Medicine:

[One problem] is that industry-sponsored trials are more likely to show a beneficial effect than non-industry funded trials [261,536–540]… This bias can have paradoxical consequences. For example, Heres et al. [541] examined randomized trials that compared different antipsychotic medications. They found that olanzapine beat risperidone, risperidone beat quetiapine, and quetiapine beat olanzapine! The relative success of the drugs was directly related to who sponsored the trial. For example, if the manufacturers of risperidone sponsored the trial, then risperidone was more likely to appear more effective than the others.

The reference is Heres et al. (2006).
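The cycle is easy to reproduce in a toy model. Here is a minimal sketch of my own (made-up numbers, not the Heres et al. data): give all three drugs identical true efficacy and let each trial’s sponsor inflate its own drug’s measured effect. Sponsor choice alone then produces the nontransitive cycle.

```python
# Toy model of sponsorship bias (my own illustration, not the Heres et al.
# data): three drugs with identical true efficacy, where each trial's
# sponsor inflates its own drug's measured effect.
TRUE_EFFECT = {"olanzapine": 0.5, "risperidone": 0.5, "quetiapine": 0.5}
SPONSOR_BIAS = 0.1  # hypothetical inflation added to the sponsor's drug

def trial_winner(drug_a, drug_b, sponsor):
    def measured(drug):
        return TRUE_EFFECT[drug] + (SPONSOR_BIAS if drug == sponsor else 0.0)
    return drug_a if measured(drug_a) > measured(drug_b) else drug_b

# Each manufacturer sponsors a trial of its own drug vs. a competitor,
# yielding the "paradoxical" cycle A > B > C > A:
assert trial_winner("olanzapine", "risperidone", sponsor="olanzapine") == "olanzapine"
assert trial_winner("risperidone", "quetiapine", sponsor="risperidone") == "risperidone"
assert trial_winner("quetiapine", "olanzapine", sponsor="quetiapine") == "quetiapine"
```

Flip the sponsor in any of these pairings and the winner flips too, which is exactly the pattern Heres et al. report.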

Cochrane’s trick

From The Philosophy of Evidence-Based Medicine:

Archie Cochrane (who inspired the creation of the Cochrane Collaboration) explained what happened when he reported the preliminary results of a trial that compared home versus hospital treatment for varicose veins. The Medical Research Council gave its ethical approval, but cardiologists in the planned location of the trial (Cardiff) refused to take part because they were certain, based on their expertise, that hospital treatment was far superior…

Eventually Cochrane succeeded at beginning the trial in Bristol. Six months into the trial, the ethics committee called on Cochrane to compile and report on the preliminary results. At that stage, home care showed a slight but not statistically significant benefit. Cochrane, however, decided to play a trick on his colleagues: he prepared two reports, one with the actual number of deaths, and one with the number reversed. The rest of the story is best told from Cochrane’s perspective:

“As we were going into the committee, in the anteroom, I showed some cardiologists the results. They were vociferous in their abuse: ‘Archie,’ they said, ‘we always thought you were unethical. You must stop the trial at once.’ I let them have their way for some time and then apologised and gave them the true results, challenging them to say, as vehemently, that coronary care units should be stopped immediately. There was dead silence…”

Three types of nonfiction books I read

I realized recently that when I want to learn about a subject, I mentally group the available books into three categories.

I’ll call the first category “convincing.” This is the most useful kind of book for me to read on a topic, but for most topics, no such book exists. Many basic textbooks on the “hard” sciences (e.g. “settled” physics and chemistry) and the “formal” sciences (e.g. “settled” math, statistics, and computer science) count. In the softer sciences (including e.g. history), I know of very few books with the intellectual honesty and epistemic rigor to be convincing (to me) on their own. David Roodman’s book on microfinance, Due Diligence, is the only example that comes to mind as I write this.

Don’t get me wrong: I think we can learn a lot from studying softer sciences, but rarely is a single book on the softer sciences written in such a way as to be convincing to me, unless I know the topic well already.

I think of my 2nd category as “raw data.” These books make a good case that the data they present were collected and presented in a fairly reasonable way, and I find it useful to know what the raw data are, but if and when the book attempts to persuade me of non-obvious causal hypotheses, I find the book illuminating but unconvincing (on its own). Some examples:

Finally, my 3rd category for nonfiction is “food for thought.” Besides being unconvincing about non-obvious causal inferences, these books also fail to make a good case that the data supporting their arguments were collected and presented in a reasonable way. So what I get from them is just some basic terminology, and some hypotheses and arguments and stories I didn’t know about before. This category includes the vast majority of all non-fiction, e.g.:

My guess is that I’m more skeptical than most heavy readers of non-fiction, including most scientists. I’m sure I’ll blog more in the future about why.

Some 2016 movies I’m looking forward to

I’m only counting films to be first released in 2016 according to IMDB. In descending order of how confident I am that I’ll rate it as “really liked” or “loved”:

  1. Coen brothers, Hail, Caesar!
  2. Stanton, Finding Dory
  3. Linklater, Everybody Wants Some
  4. Nichols, Midnight Special
  5. Villeneuve, Story of Your Life
  6. Dardenne brothers, The Unknown Girl
  7. Farhadi, Seller
  8. Scorsese, Silence

For all other movies coming out in 2016 that I’ve seen mentioned, I’m <70% confident I’ll rate them as “really liked” or “loved.”