June 2015 links

Men have more hand-grip strength than women (on average), to an even greater degree than I thought.

I now have a Goodreads profile. I’m not going to bother making it exhaustive.

100+ interesting data sets for statistics.

Five silly fonts inspired by mathematical theorems or open problems.

Pre-registration prizes!

Critique of trim-and-fill as a technique for correcting for publication bias.

Interesting recent interview with Ioannidis.

I have begun work on A beginner’s guide to modern art jazz.

Data analysis subcultures.

Scraping for Journalism, a guide by ProPublica.


AI stuff

Scott Alexander, “AI researchers on AI risk” and “No time like the present for AI safety work.”

Stuart Russell & others in Nature on autonomous weapons.

MIT’s Cheetah robot now autonomously jumps over obstacles (video), and an injured robot learns how to limp.

Scherer’s “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” discusses AI regulation possibilities in the context of both medium-term and long-term challenges, including superintelligence. I remain agnostic about whether regulation would be helpful at this stage.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Books, music, etc. from May 2015


Learn or Die has some good chapters on Bridgewater; the rest is meh.

I read about half of Foragers, Farmers, and Fossil Fuels. The parts I read were good, but I lost interest because the book confirmed for me that we don’t have good evidence about the values of people over most of history, and so even the most well-argued book possible on the subject couldn’t be all that compelling. We have much better evidence for the historical measures used in Morris’ earlier book, The Measure of Civilization.

Consciousness and the Brain has several good chapters, and some chapters that are a bit too excited about the author’s personal favorite theory of consciousness.

The Sixth Extinction was an enjoyable read, but don’t go in expecting any argument.

Favorite tracks or albums discovered this month

Favorite movies discovered this month


Of course the big news for me this month was that I took a new job at GiveWell, leaving MIRI in the capable hands of Nate Soares.

May 2015 links

Practical Typography is really good.

A short history of science blogging.

The evolution of popular music: USA 1960–2010.


AI stuff

The Economist’s May 9th cover story is on the long-term future of AI: short bit, long bit. The longer piece basically just reviews the state of AI and then says that there’s no existential threat in the near term. But of course almost everyone writing about AI risk agrees with that. Sigh.

6-minute video documentary about industrial robots replacing workers in China.

Bostrom’s TED talk on machine superintelligence.

The PBS YouTube series It’s Okay to Be Smart gets AI risk basically right, though it overstates the probability of hard takeoff.

Sam Harris says more (wait ~20s for it to load) about the future of AI, on The Joe Rogan Experience. I think he significantly overstates how quickly AGI could be built (10 years is pretty inconceivable to me), and his “20,000 years of intellectual progress in a week” metaphor is misleading (because lots of intellectual progress requires relatively slow experimental interaction with the world). But I think he’s right about much else in the discussion.

NASA, “Certification considerations for adaptive systems.”

Lin, “Why ethics matters for autonomous cars.”

Books, music, etc. from April 2015


Ronson’s So You’ve Been Publicly Shamed was decent.

Carrier’s Proving History and On the Historicity of Jesus were decent. Of course, if they contained a bunch of bogus claims about matters of ancient history, I mostly wouldn’t know, but the published criticisms of these books that exist so far don’t seem to have identified any major problems on that front. I think the application of probability theory to historical method is less straightforward than Carrier presents it to be (esp. re: assignment of priors via reference classes), but he’s certainly right that his approach makes one’s arguments clearer and easier to productively criticize. Also, I continue to think Jesus mythicism should be considered quite plausible (> 20% likely), even though mainstream historians almost completely dismiss mythicism. As far as I can tell, these two books constitute mythicism’s best defense yet, though this isn’t saying much.

Goodman’s Future Crimes is inaccurate and hyperbolic about exponential tech trends and a few other things, but most of the book is a sober account of current and future tech-enabled crime and security risks, and it also accidentally constitutes a decent reply to the question “but how would an unfriendly AI affect the physical world?”

I got bored with The Powerhouse and gave up on it, but that might’ve been because I didn’t like the audiobook narrator.

I read Taubes’ Why We Get Fat and some sections of GCBC. I’m no expert in nutrition, but my impression is that Taubes doesn’t accurately represent the current state of knowledge, and avoids discussing evidence that contradicts his views. See e.g. Guyenet and Bray.

Vaillant’s Triumphs of Experience seemed pretty sketchy in how it was interpreting its evidence, but I probably won’t take the time to dig deep to confirm or disconfirm that suspicion. But e.g. the author often makes statements about the American population in general on the basis of results from a study for which nearly all the subjects were elite white Harvard males.

Zuk’s Paleofantasy covered lots of interesting material, but also spent lots of time on arguments like “Remember, evolution isn’t directed!” (Do paleo fans think it is?) and “Sure, farmers worked more than foragers, but foragers worked more than pre-human apes, so why not say everything went downhill after the pre-human apes?” (Uh, because we can’t make ourselves into pre-human apes, but we can live and eat more like foragers if we try?)

I skimmed Singer’s The Most Good You Can Do very quickly, since I’m already familiar with the arguments and stories found within. At a glance it looks like a good EA 101 book, probably the best currently available. Give it as a gift to your family and non-EA friends.

Favorite tracks or albums discovered this month

Favorite movies discovered this month

Other updates

April links

This is why I consume so many books and articles even though I don’t remember most of their specific content when asked about them. I’m also usually doing a breadth-first search to find candidates that might be worth a deep dive. E.g. I think I found Ian Morris faster than I otherwise would have because I’ve been doing breadth-first search.

Notable lessons (so far) from the Open Philanthropy Project.

A prediction market for behavioral economics replication attempts.

Towards a 21st century orchestral canon. And a playlist resulting from that discussion.


AI stuff

Tegmark, Russell, and Horvitz on the future of AI on Science Friday.

Eric Drexler has a new FHI technical report on superintelligence safety.

Brookings Institution blog post about AI safety, regulation, and superintelligence.

Books, music, etc. from March 2015


Livio’s Brilliant Blunders was decent.

Fox’s The Game Changer didn’t have much concrete advice. Mostly it was a sales pitch for motivation engineering without saying much about how to do it within an organization.

Drucker’s Management Challenges for the 21st Century was a mixed bag, and included as much large-scale economic speculation as it did management advice.

Adams’ How to Fail at Almost Everything and Still Win Big was a very mixed bag of advice, which then tries unconvincingly to say it isn’t a book of advice.

Favorite tracks or albums discovered this month

Stuff I wrote elsewhere

March 2015 links, part 2

It’ll never work! A collection of experts being wrong about what is technologically feasible.

GiveWell update on its investigations into global catastrophic risks. My biggest disagreement is that I think nano-risk deserves more attention, if someone competent can be found to analyze the risks in more detail. GiveWell’s prioritization of biosecurity makes complete sense given their criteria.

Online calibration test with database of 150,000+ questions.

An ambitious Fermi estimate exercise: Estimating the energy cost of artificial evolution.

Nautilus publishes an excellent and wide-ranging interview with Scott Aaronson.

Gelman on a great old paper by Meehl.


AI stuff

Video: robot autonomously folds pile of 5 previously unseen towels.

Somehow I had previously missed the Dietterich-Horvitz letter on Benefits and Risks of AI.

Robin Hanson reviews Martin Ford’s new book on tech unemployment.

Heh. That “stop the robots” campaign at SXSW was a marketing stunt for a dating app.

Winfield, Towards an Ethical Robot. They actually bothered to build simple consequentialist robots that obey a kind-of Asimovian rule.

March 2015 links

Cotton-Barratt, Allocating risk mitigation across time.

The new Ian Morris book sounds very Hansonian, which probably means it’ll end up being one of my favorite books of 2015 when I have a chance to read it.

Why do we pay pure mathematicians? A dialogue.

Watch a FiveThirtyEight article get written, keystroke by keystroke. Scott Alexander, will you please record yourself writing one blog post?

Grace, The economy of weirdness.

Kahneman interviews Harari about the future.

On March 14th, there will be wrap parties for Harry Potter and the Methods of Rationality in at least 15 different countries. I’m assuming this is another first for a fanfic.


AI stuff

YC President Sam Altman on superhuman AI: part 1, part 2. I agree with most of what he writes, the biggest exceptions being that I think (1) AGI probably isn’t the Great Filter, (2) AI progress isn’t a double exponential, and (3) I don’t have much of an opinion on the role of regulation, as it’s not something I’ve tried hard to figure out.

Stuart Russell and Rodney Brooks debated the value alignment problem at Davos 2015. (Watch at 2x speed.)

Pretty good coverage of MIRI’s value learning paper at Nautilus.

Books, music, etc. from February 2015

Decent books

As Bryan Caplan wrote, The Moral Case for Fossil Fuels was surprisingly good. I think the book is factually inaccurate and cherry-picked in several places, and it seems fairly motivated throughout, but nevertheless I think the big-picture argument basically goes through, and it’s an enjoyable read.

I didn’t discover any albums or movies I loved in February 2015, but I did finish Breaking Bad, which probably beats out The Sopranos and The Wire as the most consistently great TV drama ever.

February 2015 links

Yes, please: When talking about variation in intelligence, use variation in height as a sanity-check on your intuitions.

Steven Pinker replies to a book symposium on Better Angels of Our Nature.

Dennis Pamlin (Global Challenges Foundation) and Stuart Armstrong (FHI) have issued a new 212-page report: 12 Risks that threaten human civilisation. I don’t like the “infinite impact” framing, but interesting novel contributions of the report include:

  1. Page 20: a graph of relations between different risks.
  2. Page 21: a chart of the technical and collaboration difficulty of each risk.
  3. Page 22: a comparison of risks by how estimable they are, by how much data is available about them, and by how much we understand the chain of events from present actions to the risk events.
  4. For each of the risks, a causal diagram with different levels of uncertainty for each node.
  5. Lots more.


AI stuff

The World Economic Forum’s Global Risks 2015 report discusses the superintelligence alignment challenge quite clearly — see box 2.8 on page 40.

Scott Aaronson explains what we know so far about what quantum computing could do for machine learning. It’s “a simple question with a complicated answer.”

Future of Life Institute, “A survey of research questions for robust and beneficial AI.” Because this document surveys strategic/forecasting research in more detail than the earlier “research priorities” document, it cites 7 of my own articles, and several more by others at MIRI.

Books, music, etc. from January 2015

Decent books

Melzer’s Philosophy Between the Lines was quite good, and should settle the argument as to whether or not esotericism was common throughout most of philosophical history. Now the question is, how much can we know about what historical philosophers actually thought? Should we bother to try to figure that out?

Morris’ The Measure of Civilization is perhaps the most useful book I’ve read so far on the great divergence (“Why did the industrial revolution happen in Britain rather than China?”). Read this one instead of Why the West Rules, for Now. Hopefully I’ll be blogging about this later.

Shermer’s The Moral Arc was not well-argued. Read the comparable section of Pinker’s Better Angels of Our Nature instead.

Favorite albums discovered this month

Movies I loved this month

  • Iñárritu, Birdman (2014)
  • Miller, Foxcatcher (2014)
  • Chazelle, Whiplash (2014)
  • Chandor, A Most Violent Year (2014)
  • Gilroy, Nightcrawler (2014)
  • Clément, Purple Noon (1960)

Luke stuff elsewhere

January 2015 links, part 2

Damned amazing archery tricks. (But also see this.)

How economists came to dominate the conversation.

The 2014 annual report of the UK government’s chief scientific advisor is called “Innovation: Managing Risk, Not Avoiding It.” Chapter 10 is “Managing Risks from Emerging Technologies,” by Nick Beckstead and Toby Ord, which also includes a case study by Huw Price and Seán Ó hÉigeartaigh.

Oh, good. The Uberification of puppies has begun.


AI Stuff

I previously linked to the FLI open letter on robust and beneficial AI. That letter was one output from an FLI conference in Puerto Rico that MIRI and many others attended. The conference page outlines the program, includes slides for many of the talks, and displays a group photo of the participants. No, I can’t tell you what specific people said at the conference, because it was governed by the Chatham House Rule. I congratulate the FLI team for a very successful conference!

Also, in case you missed it, Elon Musk has donated $10M to support the kind of work outlined in the open letter. FLI will parcel it out in grants à la Max & Anthony’s other organization, FQXi. See the grant competition details here. Initial submissions due March 1st.

Talking Machines is an excellent new podcast on machine learning.

The new Edge.org Annual Question is “What do you think about AI?”

January 2015 links

Supposedly “smart” characters (e.g. scientists) are, in movies, almost universally stupid. I assume this is true for novels as well. Thankfully Yudkowsky of HPMoR has now explained How to Write Intelligent Characters. I’m sure aspiring screenwriters will now rush to read this immediately after they finish Save the Cat.

What are philanthropic foundations for? Rob Reich, Tyler Cowen, Paul Brest and others debate.

The Scientist lists the “top 10” science retractions of 2014.


AI stuff

FLI has published an open letter called “Research Priorities for Robust and Beneficial Artificial Intelligence,” which says that AI progress is now quite steady or even rapid, that the societal impacts will be huge, and that therefore we need more research on how to reap AI’s benefits while avoiding its pitfalls.

Signatories include top academic AI scientists (Stuart Russell, Geoff Hinton, Tom Mitchell, Eric Horvitz, Tom Dietterich, Bart Selman, Moshe Vardi, etc.), top industry AI scientists (Peter Norvig, Yann LeCun, DeepMind’s founders, Vicarious’ founders), technology leaders (Elon Musk, Jaan Tallinn), and many others you’ve probably heard of (Stephen Hawking, Martin Rees, Joshua Greene, Sam Harris, etc.).

The attached research priorities document includes many example lines of research, including MIRI’s research agenda. (Naturally, the signatories have differing opinions about which example lines of research are most urgently needed, and most signatories probably know very little about MIRI’s agenda and thus don’t mean to necessarily endorse it in particular.)

You can sign the letter yourself if you want. Coverage at BBC, CNBC, The Independent, The Verge, FT, and elsewhere.

Paul Christiano (UC Berkeley) summarizes his own ideas about long-term AI safety work that can be done now.

When people ask me what general-AI benchmark I think should replace the Turing test, I usually start by mentioning video games. This 2-page paper explains why that’s such a handy benchmark.

Books, and stuff I wrote elsewhere, in December 2014

Decent books finished in December:

Why the West Rules—For Now was quite interesting. I may have to return to this one later for some detailed analysis. Also see the 200+ page supplementary material PDF, which I guess was later expanded into a book.

I also read much of the nonfiction section of The David Foster Wallace Reader. It is excellent writing in many ways, but it’s not the kind of excellent writing I like. I prefer nonfiction writing closer to the “classic style” advocated by Thomas & Turner and Pinker. DFW’s style is, for me, too focused on clever language and clever constructions. I prefer writing that gets out of the way so that I can focus my attention on the subject matter rather than on the way it’s being explained. I guess a DFW essay is nonfiction for people who like reading artsy fiction like James Joyce, which isn’t me.

Carr’s The Glass Cage confirmed my suspicion that I wouldn’t like Carr’s thinking much, e.g. this passage on Facebook: “Zuckerberg celebrates [Facebook’s features] as ‘frictionless sharing’… But there’s something repugnant about applying the bureaucratic ideals of speed, productivity, and standardization to our relations with others. The most meaningful bonds aren’t forged through transactions in a marketplace or other routinized exchanges of data. People aren’t nodes on a network grid. The bonds require trust and courtesy and sacrifice, all of which, at least to a technocrat’s mind, are sources of inefficiency and inconvenience.”

The Beginning of Infinity is David Deutsch’s account of his own philosophy of science. The basic claims seem to be that (1) moral and social progress comes from knowledge, (2) knowledge comes from good scientific explanations, and (3) good scientific explanations are those which fit the facts and are hard to vary. On (1) I think he neglects genuine information hazards on the one hand, and non-knowledge sources of social progress on the other. I think (2) leaves out other kinds of knowledge, like knowing a strong trend in data without having any theory of it. And as for (3), I can’t tell if he’s rejecting the standard Bayesian account of scientific explanation (Howson & Urbach 2005; Yudkowsky 2005), or if he just likes to emphasize the part of it that awards more points to models/predictions with hard-to-vary parameters. If he’s rejecting the Bayesian account, he doesn’t say why. There are also lots of long, rambling, often wrong tangents about memes and AI and other topics, e.g. this: “Most advocates of the Singularity believe that, soon after the AI breakthrough, superhuman minds will be constructed and that then, as Vinge put it, ‘the human era will be over.’ But my discussion of the universality of human minds rules out that possibility. Since humans are already universal explainers and constructors, they can already transcend their parochial origins, so there can be no such thing as a superhuman mind as such.”

I gave up on Life’s Ratchet. It seems pretty good, actually, but it had too much technical information for me to follow in audiobook form.

Stuff I wrote elsewhere in December:

Assorted links

  • Aeon: Why has human progress slowed?
  • Gary Marcus on the future of AI (EconTalk). Kind of weird that Russ & Gary wonder why there’d be a risk from general AI, but then don’t mention the standard “risk via convergent instrumental goals” argument.
  • NYT reports on a Horvitz-funded 100-year Stanford program studying social impacts of AI. The white paper makes clear that the superintelligence control problem is among the intended focus areas!

Books, and stuff I wrote elsewhere, for November 2014

Decent books finished in November:

In case you hadn’t noticed, my choice of books is dominated by whether a book is available in audiobook form, since the only time I have available to consume books these days is when my chronic insomnia is blocking me from sleeping anyway: I put on my eye mask and open the Audible app on my phone. If I had more hours each month during which my eyeballs were still working, I’d be consuming a pretty different set of books, and I’d be reading them more carefully and critically rather than zooming through lots of easy material quickly.

Stuff I wrote elsewhere in November: