MarginNote: the only iPhone app that lets you annotate both PDFs and epub files

As far as I can tell, MarginNote is the only iPhone app that lets you annotate & highlight both PDFs and epub files, and sync those annotations to your computer. And by “PDFs and epub files” I basically mean “all text files,” since Calibre and other apps can convert almost any text file into an epub (the main exception being PDFs with tables and images, which don’t convert cleanly). (The Kindle iPhone app can annotate text files, but can’t sync those annotations anywhere unless you bought the text directly from Amazon.)
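
If you want to script the Calibre step, here’s a minimal sketch using Calibre’s command-line tool, ebook-convert (the tool and its --enable-heuristics flag are real; the ~/Papers folder is just an illustrative assumption):

```python
# Batch-convert PDFs to epub with Calibre's command-line tool
# `ebook-convert` (installed alongside the Calibre desktop app).
# The ~/Papers location is an illustrative assumption.
import pathlib
import subprocess

for pdf in pathlib.Path("~/Papers").expanduser().glob("*.pdf"):
    epub = pdf.with_suffix(".epub")
    if epub.exists():
        continue  # already converted on an earlier run
    # --enable-heuristics tells Calibre to guess at the PDF's layout;
    # plain text usually comes through fine, tables and images often don't.
    subprocess.run(["ebook-convert", str(pdf), str(epub), "--enable-heuristics"],
                   check=True)
```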

This is important for people who like to read nonfiction “on the go,” like me — and plausibly some of my readers, so I figured I’d share my discovery.

When will videogame writing improve?

The best plays and films have had great writing for a long time. The best TV shows have had great writing for about a decade now. But the writing in the best videogames is still cringe-inducingly awful. This is despite the fact that videogame blockbusters regularly have production budgets of $50M or more. When will videogames hit their “golden age” (at least, for writing)?

My favorite kind of music

I think I’ve finally realized that I have a favorite kind of music, though unfortunately it doesn’t have a genre name, and it cuts across many major musical traditions — Western classical, jazz, rock, electronica, and possibly others.

I tend to love music that:

  1. Is primarily tonal but uses dissonance for effective contrast. (The Beatles are too tonal; Arnold Schoenberg and Cecil Taylor are too atonal; Igor Stravinsky and Charles Mingus are just right.)
  2. Is obsessively composed, though potentially with substantial improvisation within the obsessively composed structure. (Coleman’s Free Jazz is too free. Amogh Symphony’s Vectorscan is innovative and complex but doesn’t sound like they tried very hard to get the compositional details right. The Rite of Spring and Chiastic Slide and even Karma are great.)
  3. Tries to be as emotionally affecting as possible, though this may include passages of contrastingly less-emotional music. (Anthony Braxton and Brian Ferneyhough are too cold and anti-emotional. Rich Woodson shifts around too quickly to ever build up much emotional “momentum.” Master of Puppets and Escalator Over the Hill and Tabula Rasa are great.)
  4. Is boredom-resistant by being fairly complex or by being long and subtly-evolving enough that I don’t get bored of it quickly. (The Beatles are too short and simple — yes, including their later work. The Soft Machine is satisfyingly complex and varied. The minimalists and Godspeed You! Black Emperor are often simple and repetitive, but their pieces are long enough and subtly-evolving enough that I don’t get bored of them.)

Property #2, I should mention, is pretty similar to Holden Karnofsky’s notion of “awe-inspiring” music. Via email, he explained:

One of the emotions I would like to experience is awe … A piece of music might be great because the artists got lucky and captured a moment, or because it’s just so insane that I can’t find anything else like it, or because I have an understanding that it was the first thing ever to do X, or because it just has that one weird sound that is so cool, but none of those make me go “Wow, this artist is awesome. I am in awe of them. I feel like the best parts of this are things they did on purpose, by thinking of them, by a combination of intelligence and sweat that makes me want to give them a high five. I really respect them for their achievement. I feel like if I had done this I would feel true pride that I had used the full extent of my abilities to do something that really required them.”

It’s no accident that most of the things that do this for me are “epic” in some way and usually took at least a solid year of someone’s life, if not 20 years, to create.

To illustrate further what I mean by each property, here’s how I would rate several musical works on each property:

| Work | Tonal w/ dissonance? | Obsessively composed? | Highly emotional? | Boredom-resistant? |
| --- | --- | --- | --- | --- |
| Mingus, The Black Saint and the Sinner Lady | Yes | Yes | Yes | Yes, complex |
| Stravinsky, The Rite of Spring | Yes | Yes | Yes | Yes, complex |
| The Soft Machine, Third | Yes | Yes | Yes | Yes, complex |
| Schulze, Irrlicht | Yes | I think so? | Yes | Yes, slowly-evolving |
| Adams, Harmonielehre | Yes | Yes | Yes | Yes, complex |
| The Beatles, Sgt. Pepper | Not enough dissonance | Yes | Yes | No |
| Coleman, Free Jazz | Yes | Not really | Sometimes | Yes, complex |
| Amogh Symphony, Vectorscan | Yes | Not really | Yes | Yes, complex |
| Stockhausen, Licht cycle | Too dissonant | Yes | Not often | Yes, complex |
| Autechre, Chiastic Slide | Yes | Yes | Yes | Yes, complex |
| Anthony Braxton, For Four Orchestras | Too dissonant | Yes | No | Yes, complex |

 

Three types of nonfiction books I read

I realized recently that when I want to learn about a subject, I mentally group the available books into three categories.

I’ll call the first category “convincing.” This is the most useful kind of book for me to read on a topic, but for most topics, no such book exists. Many basic textbooks on the “hard” sciences (e.g. “settled” physics and chemistry) and the “formal” sciences (e.g. “settled” math, statistics, and computer science) count. In the softer sciences (including e.g. history), I know of very few books with the intellectual honesty and epistemic rigor to be convincing (to me) on their own. David Roodman’s book on microfinance, Due Diligence, is the only example that comes to mind as I write this.

Don’t get me wrong: I think we can learn a lot from studying softer sciences, but rarely is a single book on the softer sciences written in such a way as to be convincing to me, unless I know the topic well already.

I think of my 2nd category as “raw data.” These books make a good case that the data they present were collected and presented in a fairly reasonable way, and I find it useful to know what the raw data are, but if and when the book attempts to persuade me of non-obvious causal hypotheses, I find the book illuminating but unconvincing (on its own). Some examples:

Finally, my 3rd category for nonfiction is “food for thought.” Besides being unconvincing about non-obvious causal inferences, these books also fail to make a good case that the data supporting their arguments were collected and presented in a reasonable way. So what I get from them is just some basic terminology, and some hypotheses and arguments and stories I didn’t know about before. This category includes the vast majority of all non-fiction, e.g.:

My guess is that I’m more skeptical than most heavy readers of non-fiction, including most scientists. I’m sure I’ll blog more in the future about why.

One of my favorite melodies, rediscovered

There are about a dozen melodies that I find myself humming and whistling without realizing it. One is “Yellow Submarine.” Another begins at about 2:20 in Adams’ “On the Dominant Divide.”

Another is one that I’ve been humming for years but couldn’t remember where I had heard it.

Well, today, I finally stumbled into that melody once again! It turns out it’s the melody that begins at about 1:09 into the 3rd movement of Sibelius’ 5th symphony.

Ahhhhhhhh. So good.

If you’re an “AI safety lurker,” now would be a good time to de-lurk

Recently, the study of potential risks from advanced artificial intelligence has attracted substantial new funding, prompting new job openings at e.g. Oxford University and (in the near future) at Cambridge University, Imperial College London, and UC Berkeley.

This is the dawn of a new field. It’s important to fill these roles with strong candidates. The trouble is, it’s hard to find strong candidates at the dawn of a new field, because universities haven’t yet begun to train a steady flow of new experts on the topic. There is no “long-term AI safety” program for graduate students anywhere in the world.

Right now the field is pretty small, and the people I’ve spoken to (including e.g. at Oxford) seem to agree that it will be a challenge to fill these roles with candidates they already know about. Oxford has already re-posted one position, because no suitable candidates were found via the original posting.

So if you’ve developed some informal expertise on the topic — e.g. by reading books, papers, and online discussions — but you are not already known to the folks at Oxford, Cambridge, FLI, or MIRI, now would be an especially good time to de-lurk and say “I don’t know whether I’m qualified to help, and I’m not sure there’s a package of salary, benefits, and reasons that would tempt me away from what I’m doing now, but I want to at least let you know that I exist, I care about this issue, and I have at least some relevant skills and knowledge.”

Maybe you’ll turn out not to be a good candidate for any of these roles. Maybe you’ll learn the details and decide you’re not interested. But if you don’t let us know you exist, neither of those things can even begin to happen, and these important roles at the dawn of a new field will be less likely to be filled with strong candidates.

I’m especially passionate about de-lurking of this sort because when I first learned about MIRI, I just assumed I wasn’t qualified to help out, and wouldn’t want to, anyway. But after speaking to some folks at MIRI, it turned out I really could help out, and I’m glad I did. (I was MIRI’s Executive Director for ~3.5 years.)

So if you’ve been reading and thinking about long-term AI safety issues for a while now, and you have some expertise in computer science, AI, analytic/formal philosophy, mathematics, statistics, policy, risk analysis, forecasting, or economics, and you’re not already in contact with the people at the organizations I named above, please step forward and tell us you exist.

UPDATE Jan. 2, 2016: At this point in the original post, I recommended that people de-lurk by emailing me or by commenting below. However, I was contacted by far more people than I expected (100+), so I had to reply to everyone (on Dec. 19th) with a form email instead. In that email I thanked everyone for contacting me as I had requested, apologized for not being able to respond individually, and made the following request:

If you think you might be interested in a job related to long-term AI safety either now or in the next couple years, please fill out this 3-question Google form, which is a lot easier than filling out any of the posted job applications. This will make it much easier for the groups that are hiring to skim through your information and decide which people they want to contact and learn more about.

Everyone who contacted/contacts me after Dec. 19th will instead receive a link to this section of this blog post. If I’ve linked you here, please consider filling out the 3-question Google form above.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

My biggest complaint about Last.fm

I use last.fm to track what music I listen to. Unfortunately, it’s not very accurate.

The first problem is that it doesn’t track music listened to on most online services (e.g. YouTube, Bandcamp). But I can’t really complain about that, since I just discovered there’s an app for that. Its YouTube support is shaky, though; I assume that’s because it’s hard to tell which videos are music tracks and which aren’t.

A bigger problem for me is that last.fm counts up what I listen to by counting tracks played rather than by counting time played. So if I listen to a punk band for one hour, and then I listen to Miles Davis for one hour, last.fm will make it look as though I like the punk band 10x more than I like Miles Davis, because the punk band writes 3-minute tracks and Miles Davis records 30-minute tracks.
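
To illustrate, here’s a toy sketch of the time-weighted tally I wish last.fm computed. The scrobble data below is made up for illustration; it isn’t last.fm’s actual export format:

```python
# Weight each play by track length instead of just counting plays.
# The scrobble list is a made-up example, not real last.fm data.
from collections import defaultdict

scrobbles = (
    [("Punk Band", 3)] * 20 +    # one hour of 3-minute punk tracks
    [("Miles Davis", 30)] * 2    # one hour of 30-minute Miles Davis tracks
)

plays = defaultdict(int)
minutes = defaultdict(int)
for artist, length in scrobbles:
    plays[artist] += 1
    minutes[artist] += length

for artist in plays:
    print(f"{artist}: {plays[artist]} plays, {minutes[artist]} minutes")
# Play counts say I like the punk band 10x more; minutes say it's a tie.
```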

A comparison of Mac and cloud programs for PDF rich text extraction

I like reading things via the Kindle app on my phone, because then I can read from anywhere. Unfortunately, most of what I want to read is in PDF format, so the text can’t “reflow” on my phone’s small screen like a normal ebook does. PDF text extraction programs aim to solve this problem by extracting the text (and in some cases, other elements) from a PDF and exporting it to a format that allows text reflow, for example .docx or .epub.

Which PDF text extraction program is best? I couldn’t find any credible comparisons, so I decided to do my own.

My criteria were:

  1. The program must run on Mac OS X or run in the cloud.
  2. It must be free or have a free trial available, so I can run this test without spending hundreds of dollars.
  3. It must be easy to use. If I have to install special packages or tweak environment variables to run the program, it doesn’t qualify.
  4. It must preserve images, tables, and equations amidst the text, since the documents I want to read often include important charts, tables, and equations. (It’s fine if equations and tables are simply handled as images.)
  5. It must be able to handle multi-column pages.
  6. It must work with English, but I don’t care about other languages because I can’t read them anyway.
  7. I don’t care that much about final file size or how long the conversion takes, so long as the program doesn’t crash on 1 out of every 10 attempts and doesn’t create crazy 200mb files or something like that.

To run my test, I assembled a gauntlet of 16 PDFs of the sort I often read, including several PDFs from journal websites, a paper from arXiv, and multiple scanned-and-OCRed academic book chapters.

A quick search turned up way too many Mac or cloud-based programs to test, so I decided to focus on a few that were from major companies or were particularly easy to use.
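
Conceptually, the comparison boils down to something like the sketch below. This is a hypothetical reconstruction, not my actual harness: the converter command templates are placeholders (each program has its own CLI, and GUI-only or cloud-based programs would have to be run by hand):

```python
# Hypothetical harness for this kind of comparison: run each candidate
# converter over the gauntlet of PDFs and record crashes and output size.
# The command templates are placeholders; GUI-only and cloud-based
# programs would have to be run by hand instead.
import pathlib
import subprocess

GAUNTLET = sorted(pathlib.Path("gauntlet").glob("*.pdf"))
CONVERTERS = {
    # name -> command template; {src} and {dst} are filled in per PDF
    "calibre": ["ebook-convert", "{src}", "{dst}"],
}

for name, template in CONVERTERS.items():
    for pdf in GAUNTLET:
        dst = pathlib.Path("out") / name / (pdf.stem + ".epub")
        dst.parent.mkdir(parents=True, exist_ok=True)
        cmd = [part.format(src=pdf, dst=dst) for part in template]
        result = subprocess.run(cmd, capture_output=True)
        status = "ok" if result.returncode == 0 else "CRASH"
        size_mb = dst.stat().st_size / 1e6 if dst.exists() else 0.0
        print(f"{name} | {pdf.name} | {status} | {size_mb:.1f} MB")
```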


Some practical obstacles to becoming a fan of modern classical music

I think there is a ton of modern classical music (MCM) that listeners would enjoy if they had a way to more cheaply discover it.

Why can’t people cheaply discover MCM they like, and what can be done about it? Below are some guesses.

 

Obstacle 1: There are almost no full-time critics of classical/MCM music.

According to this post, there may be about a dozen full-time critics of classical/MCM music in the USA. This is probably more a symptom of the difficulty of exploring MCM than a cause, but it helps perpetuate the problem. The best solution to this is probably to increase demand for MCM critics by fixing the problems below (and perhaps others I’ve missed).

Obstacle 2: MCM critics do not rate works/albums or give them genre labels.

This one really bugs me. Honestly, how is someone with limited time supposed to navigate the world of MCM if nobody will tell them which works and albums are the best ones, and roughly what they sound like? Sure, this information is sometimes (but not always!) buried somewhere in reviews of new works or albums, but it needs to be at the top of the review, right under the artist and work/album title. Yeah, yeah, musical quality can’t be reduced to a single number and a list of genre tags, blah blah blah GET OVER IT and BE MORE HELPFUL.

Obstacle 3: Because of obstacle #2, there’s no way to aggregate expert opinion on MCM works.

Rock/pop fans have Metacritic, AOTY, etc. This is impossible for MCM because MCM critics do not rate albums.

Obstacle 4: MCM critics don’t even make lists.

Even if critics didn’t rate every album they reviewed, they could at least make year-end “best of” lists and genre-specific “best of” lists. But MCM critics almost never do this. Seriously, how is anyone with limited time supposed to navigate MCM without listicles?

Obstacle 5: Many MCM works aren’t recorded for years after their debut.

Suppose you read a review of a new rock/pop album, and you want to hear it. What do you do? You stream it on Spotify or buy it on iTunes.

But suppose you read a review of a new MCM work, and you want to hear it. What do you do? The answer is probably “buy a plane ticket to another city on a specific date and pay $80 to hear it once in a concert hall, otherwise wait 1-15 years for the work to be recorded and released and hope you remember to listen to it then.” To someone used to the rock/pop world, this is utter madness.

To become more consumable by a mass-ish market, MCM works need to be recorded and released first, and performed for the public later, after people have had a chance to stream or buy them and decide whether they want to endure the cost and inconvenience of seeing them live.

Unfortunately, I don’t know enough about the MCM business to know how to fix this.

 

Not on this list

Conspicuously missing from my list of obstacles is “most MCM composers write unlistenable random noise.” They do, of course, but I don’t see that as a problem.

“Unlistenable random noise” is hyperbole, I know, except for some pieces like John Cage’s HPSCHD. What I mean is that most MCM composers tend to write music that sounds to most people like “unlistenable random noise.” As Joe Queenan put it,

During a radio interview between acts at the Metropolitan Opera in New York, a famous singer recently said she could not understand why audiences were so reluctant to listen to new music, given that they were more than ready to attend sporting events whose outcome was uncertain. It was a daft analogy… when Spain plays Germany, everyone knows that the game will be played with one ball, not eight; and that the final score will be 1-0 or 3-2 or even 8-1 – but definitely not 1,600,758 to Arf-Arf the Chalet Ate My Banana. The public may not know in advance what the score will be, but it at least understands the rules of the game.

… It is not the composers’ fault that they wrote uncompromising music… but it is not my fault that I would rather listen to Bach.

If the obstacles listed above were fixed (plus some I probably missed), then MCM would be in the same place rock/pop music is: composers can write whatever they want, including unlistenable random noise, and some of it will find a big audience, most of it won’t, and that’s fine.

The critical importance of good headphones

If you’re exploring musical styles, e.g. via my guides to contemporary classical and modern art jazz, remember to get some good headphones. This is obvious in retrospect but often neglected. When I switched from cheap to good headphones years ago, I realized there were entire instruments in my favorite songs that I hadn’t been hearing. When my boss Holden finally got good headphones, he started to like minimalism much more than he had previously. Seriously, get some decent headphones.

Which headphones, exactly? Probably just get whatever The Wirecutter recommends for your style: on-ear vs. over-ear vs. in-ear, wireless vs. wired, exercise vs. normal, etc.

You’ll hear a big difference between e.g. the default iPhone headphones and something that costs $70, and you’ll probably hear another difference between a $70 set and a $300 set, but I’m skeptical that most people can hear a difference beyond $300 (for headphones).

If you’re very cost conscious, go with the currently-$23 in-ear headphones here or the currently-$85 over-ear headphones here.

If you’re less cost conscious, go with either the wired or Bluetooth noise-canceling over-ear options here. (I suspect everyone can tell the difference between active noise-canceling and no noise-canceling, but few people can tell the difference between the very good sound of the best noise-canceling headphones and the absolute best sound quality available from non-noise-canceling headphones.)

Dietterich and Horvitz on AI risk

Tom Dietterich and Eric Horvitz have a new opinion piece in Communications of the ACM: Rise of Concerns about AI. Below, I comment on a few passages from the article.

 

Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”

I suppose you could “charge” an advanced AI with the task of undergoing an intelligence explosion, but that seems like an incredibly reckless thing for someone to do. More often, the concern is about intelligence explosion as a logical byproduct of the convergent instrumental goal of self-improvement. Nearly all possible goals are more likely to be achieved if the AI can first improve its capabilities, whether the goal is calculating digits of pi or optimizing a manufacturing process. This is the argument given in the book Dietterich and Horvitz cite for these concerns: Nick Bostrom’s Superintelligence.

 

[Intelligence explosion] runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning…

I follow this literature pretty closely, and I haven’t heard of this result. No citation is provided, so I don’t know what they’re talking about. I doubt this is the kind of thing you can show using computational complexity theory, given how under-specified the concept of intelligence explosion is.

Fortunately, Dietterich and Horvitz do advocate several lines of research to make AI systems safer and more secure, and they also say:

we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Pedro Domingos on AI risk

Pedro Domingos, an AI researcher at the University of Washington and the author of The Master Algorithm, on the podcast Talking Machines:

There are these fears that computers are going to get very smart and then suddenly they’ll become conscious and they’ll take over the world and enslave us or destroy us like the Terminator. This is completely absurd. But even though it’s completely absurd, you see a lot of it in the media these days…

Domingos doesn’t identify which articles he’s talking about, but nearly all the articles like this that I’ve seen lately are inspired by comments on AI risk from Stephen Hawking, Elon Musk, and Bill Gates, which in turn are (as far as I know) informed by Nick Bostrom’s Superintelligence.

None of these people, as far as I know, have expressed a concern that machines will suddenly become conscious and then take over the world. Rather, these people are concerned with the risks posed by extreme AI competence, as AI scientist Stuart Russell explains.

I don’t know what source Domingos read that talked about machines suddenly becoming conscious and taking over the world. I don’t think I’ve seen such scenarios described outside of fiction.

Anyway, in his book, Domingos does seem to be familiar with the competence concern:

The point where we could turn off all our computers without causing the collapse of modern civilization has long passed. Machine learning is the last straw: if computers can start programming themselves, all hope of controlling them is surely lost. Distinguished scientists like Stephen Hawking have called for urgent research on this issue before it’s too late.

Relax. The chances that an AI equipped with the [ultimate machine learning algorithm] will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear…

[AI systems] can vary what they do, even come up with surprising plans, but only in service of the goals we set them. A robot whose programmed goal is “make a good dinner” may decide to cook a steak, a bouillabaisse, or even a delicious new dish of its own creation, but it can’t decide to murder its owner any more than a car can decide to fly away.

…[The] biggest worry is that, like the proverbial genie, the machines will give us what we ask for instead of what we want. This is not a hypothetical scenario; learning algorithms do it all the time. We train a neural network to recognize horses, but it learns instead to recognize brown patches, because all the horses in its training set happened to be brown.

I was curious to see what his rebuttal to the competence concern (“machines will give us what we ask for instead of what we want”) was, but this section just ends with:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

Which isn’t very clarifying, especially since elsewhere he writes that “any sufficiently advanced AI is indistinguishable from God.”

The next section is about Kurzweil’s law of accelerating returns, and doesn’t seem to address the competence concern.

So… I guess I can’t tell why Domingos thinks the chances of a global AI catastrophe are “zero,” and I can’t tell what he thinks of the basic competence concern expressed by Hawking, Musk, Gates, Russell, Bostrom, etc.

Update 05-09-2016: For additional Domingos comments on risks from advanced AI, see this episode of EconTalk, starting around minute 50.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Unabashedly emotional or catchy avant-garde music

Holden wrote me a fictional conversation to illustrate his experience of trying to find music that is (1) complex, (2) structurally interesting, and yet (3) listenable / emotional / catchy (at least in some parts):

Holden: I’m bored by pop music. Got anything interesting?

Person: Here, try this 7-second riff played repeatedly for 26 minutes.

Holden: Umm … but what about … something a little more varied?

Person: Check out 38 minutes of somebody screaming incoherently while 5 incompatible instruments play random notes and a monotone voice recites surreal poetry.

Holden: But like … uh … more listenable maybe?

Person: I thought you didn’t want pop bullshit. Well, here’s something middlebrow: a guy playing 3 chords on a guitar who sounds kind of sarcastic.

Holden’s three criteria describe a great deal of my favorite music, much of which is scattered throughout my guides to modern classical music and modern art jazz. So if those criteria sound good to you, too, then I’ve listed below a few musical passages you might like.

Osvaldo Golijov, The Dreams and Prayers of Isaac the Blind, I. Agitato, 4:53-7:45

A piece for string quartet and clarinet with a distinctly Jewish sound; in this passage, it sounds to me like a scene of building tension and frantic activity until all falls away (6:23) and the clarinet screams prayers of desperation to God (6:59).

Carla Bley, Escalator Over the Hill, Hotel Overture, 6:26-10:30

A circus-music refrain ambles along until things slow down (7:40) and a sax begins to solo (7:45) over repeating fatalistic-sounding chords in a way that, like the clarinet in the passage above, sounds to me like a cry of desperation, one with a cracking voice (e.g. at 8:03 & 8:09), and, at times, non-tonal gargled screaming (8:32), finally fading back into earlier themes from the overture (9:45).

Arvo Pärt, Tabula Rasa, Ludus, 4:10-9:52

Violins swirl around chords that seem to endlessly descend until a period of relative quiet (4:50) accented by bells. The earlier pattern returns (5:26), eventually picking up pace (5:44), until a momentary return to the calm of the bells (6:14). Then another return to the swirling violins (6:55), which again pick up their pace but this time with a thundering crash (7:15) that foreshadows the destruction that lies ahead. The violins ascend to a peak (7:55), and then quiver as they fall — farther and farther — until booming chords (8:44) announce the final desperate race (8:49) to the shattering end (9:36). If this doesn’t move you, you might be dead.

Sergey Kuryokhin, The Sparrow Oratorium, Summer, 0:55-4:36

Squeaky strings wander aimlessly until the piece suddenly jumps into a rollicking riff (1:11) that will be repeated throughout the piece. Variations on this riff continue as a high-gain guitar plays a free jazz solo. The solo ends (2:30), the noise builds, and then suddenly transitions (2:46) to a silly refrain of “zee zee zee zee…” and other vocalizations and then (3:17) a female pop singer with a soaring chorus that bleeds into (4:05) a variation on the original riff with sparse instrumentation which then launches into a louder, fuller-sounding version of the riff (4:20). (To me, this track is more catchy than emotional.)

John Adams, Harmonielehre, 1st movement, 12:19-16:18

Melancholy strings descend, but there is tension in the mood, announced by an ominous trill (12:45), and then another (12:51). But then, the mood lifts with piano and woodwinds (13:03) repeating an optimistic chord. The music accelerates, and takes another tonal shift toward a tense alert (13:22). Booming brass and drums enter (13:41) as things continue to accelerate, and the drums and brass strike again (14:29) and drag the whole piece down with them, in pitch and pace. The strings and horns struggle to rise again until the horns soar free (15:11). The instruments rise and accelerate again until they break through to the upper atmosphere (15:32). Then they pull back, as if they see something up ahead, and… BOOM (16:04) there are the thundering E minor chords that opened the piece, here again to close the movement.

Artistic greatness, according to my brain: a first pass

Why do some pieces of music, or film, or visual art strike me as “great works of art,” independent of how much I enjoy them? When I introspect on this, and when I test my brain’s mostly subconscious “artistic greatness function” against a variety of test cases, it seems to me that the following features play a major role in my artistic greatness judgments:

  1. Innovation
  2. Cohesion
  3. Execution
  4. Emotionality

Below is a brief sketch of what I mean by each of these. In the future, I’ll elaborate on various points, and run additional thought experiments as I try to work toward reflective equilibrium for my aesthetic judgments.


Anthony Braxton albums

I recently listened to 70 albums by Anthony Braxton, several of them very long, between 2 and 12 CDs. I thought a few of his albums were pretty great as works of art, but I enjoyed exactly one of them (Eugene), and I thought zero of them were accessible enough to include in my guide.

I should also mention that I basically only listened to the albums with original compositions on them. And if you know Braxton, you know these compositions are complicated — e.g. Composition 82, scored for simultaneous performance by four different orchestras. The dude is prolific.

To see which Braxton albums I listened to, Ctrl+F on my (in-progress) jazz guide for “Anthony Braxton’s” (it’s in a footnote).

 

Elon Musk on saving the world

On The Late Show, Stephen Colbert asked Elon Musk if he was trying to save the world. The obvious, transparent answer is “yes.” But Elon’s reply was “Well, I’m trying to do useful things.” Perhaps Elon’s PR person told him that trying to save the world comes off as arrogant, even if you’re a billionaire.

 

“Fewer Data, Please”

The BMJ has some neat features, such as paper-specific “instant responses” and published peer-review correspondence.

The latter feature allowed me to discover that, in their initial “revise and resubmit” comments on a recent meta-analysis of sugar-sweetened beverages and type 2 diabetes, the BMJ manuscript committee asked the study’s authors to provide fewer data:

There is a very large number of supplementary files, tables and diagrams. It would be helpful if these could be reduced to the most important and essential supplementary items.

What? Why? Are they going to run out of server space? Give me ALL THE DATA! Finding the data I want in a huge 30mb supplementary data file is still much easier than asking the corresponding author for it 3 years later.

Is this kind of reviewer feedback common? I thought the top journals were generally on board with the open science trend.

Powerful musical contrasts

For a couple months I’d been listening almost exclusively to jazz music, while working on my jazz guide. Then on a whim I decided to listen to a song I hadn’t heard in a long time, Smashing Pumpkins’ brutal rocker “Geek USA,” and it absolutely blew my mind, as if I was listening to the first piece of rock music invented, in a world that had only previously known folk, classical, and jazz.

The experience reminded me of my favorite scene from Back to the Future, where Marty — who has traveled back in time to 1955 — ends a 1950s rock-and-roll classic with an increasingly intense guitar solo that completely bewilders the 1950s crowd, which has never heard virtuoso hard-rock guitar before:


Reply to Ng on AI risk

On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):

Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.

And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…

Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.

In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.

That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).

Of course if I was pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.

That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.

I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Etzioni on AI risk

Back in December 2014, AI scientist Oren Etzioni wrote an article called “AI Won’t Exterminate Us — it Will Empower Us.” He opens by quoting the fears of Musk and Hawking, and then says he’s not worried. Why not?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and… beat humans at their own game.

But of course, the people talking about AI as a potential existential risk aren’t worried about AIs creating their own goals, either. Instead, the problem is that an AI optimizing very competently for the goals we gave it presents a threat to our survival. (For details, read just about anything on the topic that isn’t a news story, from Superintelligence to Wait But Why to Wikipedia, or watch this talk by Stuart Russell.)

Etzioni continues:

…the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

First, most of the people concerned about AI as a potential extinction risk don’t think “full artificial intelligence” (aka AGI) will arrive in the next 25 years, either.

Second, I think most of Etzioni’s colleagues in AI would disagree with his claim that the arrival of AGI within 25 years is “far less likely than an asteroid striking the earth and annihilating us” (in the same 25-year time horizon).

Step one: what do AI scientists think about the timing of AGI? In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 10% confident AGI will be built” was 2024. The median response for 50% confidence of AGI was 2050. So, top-of-the-field AI researchers tend to be somewhere between 10% and 50% confident that AGI will be built within Etzioni’s 25-year timeframe.

Step two: how likely is it that an asteroid will strike Earth and annihilate us in the next 25 years? The nice thing about this prediction is that we actually know quite a lot about how frequently large asteroids strike Earth. We have hundreds of millions of years’ worth of data. And even without looking at that data, we know that an asteroid large enough to “annihilate us” hasn’t struck Earth throughout all of primate history — because if it had, we wouldn’t be here! Also, NASA conducted a pretty thorough search for nearby asteroids a while back, and — long story short — they’re pretty confident they’ve identified all the civilization-ending asteroids nearby, and none of them are going to hit Earth. The probability of an asteroid annihilating us in the next 25 years is much, much smaller than 1%.
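
To make “much, much smaller than 1%” concrete, here’s the back-of-the-envelope arithmetic. The once-per-100-million-years rate for a civilization-ending impact is my rough assumption from the cratering record, not a precise figure:

```python
# Rough assumption: a civilization-ending asteroid impact happens about
# once per 100 million years (order of magnitude from the cratering record).
rate_per_year = 1 / 100_000_000
p_25_years = 1 - (1 - rate_per_year) ** 25
print(f"P(annihilating impact within 25 years) ~ {p_25_years:.1e}")  # ~2.5e-07
```

Compare that to the 10-50% confidence the surveyed AI scientists assign to AGI arriving within roughly the same window.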

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)