My biggest complaint about Last.fm

I use last.fm to track what music I listen to. Unfortunately, it’s not very accurate.

The first problem is that it doesn’t track music listened to on most online services (e.g. YouTube, Bandcamp). But I can’t really complain about that, since I just discovered there’s an app for that. Though its YouTube support is shaky, I assume because it’s hard to tell which videos are music tracks and which aren’t.

A bigger problem for me is that last.fm counts up what I listen to by counting tracks played rather than time played. So if I listen to a punk band for one hour, and then I listen to Miles Davis for one hour, last.fm will make it look as though I like the punk band 10x more than I like Miles Davis, because the punk band writes 3-minute tracks (20 plays per hour) and Miles Davis records 30-minute tracks (2 plays per hour).
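Here’s a minimal sketch of the difference, using made-up numbers rather than real last.fm data:

```python
# One hypothetical hour of a punk band vs. one hour of Miles Davis.
punk_plays = 60 // 3      # 20 three-minute tracks per hour
davis_plays = 60 // 30    # 2 thirty-minute tracks per hour

# Counting tracks played (last.fm's method): looks like a 10x preference.
print(punk_plays / davis_plays)               # 10.0

# Counting time played (what I'd prefer): equal listening time.
print((punk_plays * 3) / (davis_plays * 30))  # 1.0
```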

A comparison of Mac and cloud programs for PDF rich text extraction

I like reading things via the Kindle app on my phone, because then I can read from anywhere. Unfortunately, most of what I want to read is in PDF format, so the text can’t “reflow” on my phone’s small screen like a normal ebook does. PDF text extraction programs aim to solve this problem by extracting the text (and in some cases, other elements) from a PDF and exporting it to a format that allows text reflow, for example .docx or .epub.
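(As a rough illustration of the baseline these programs improve on, here’s a minimal sketch of bare text extraction using the pdfminer.six Python library. The input file name is a placeholder, and this kind of plain extraction drops exactly the images, tables, and equations I care about; it’s not one of the programs in my comparison.)

```python
# Baseline sketch only: plain text extraction with pdfminer.six
# (pip install pdfminer.six). It recovers reflowable text but discards
# images, tables, and equation layout.
from pdfminer.high_level import extract_text

text = extract_text("some_paper.pdf")  # hypothetical input file
print(text[:500])
```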

Which PDF text extraction program is best? I couldn’t find any credible comparisons, so I decided to do my own.

My criteria were:

  1. The program must run on Mac OS X or run in the cloud.
  2. It must be free or have a free trial available, so I can run this test without spending hundreds of dollars.
  3. It must be easy to use. If I have to install special packages or tweak environment variables to run the program, it doesn’t qualify.
  4. It must preserve images, tables, and equations amidst the text, since the documents I want to read often include important charts, tables, and equations. (It’s fine if equations and tables are simply handled as images.)
  5. It must be able to handle multi-column pages.
  6. It must work with English, but I don’t care about other languages because I can’t read them anyway.
  7. I don’t care that much about final file size or how long the conversion takes, so long as the program doesn’t crash on 1 out of every 10 attempts and doesn’t create crazy 200 MB files or something like that.

To run my test, I assembled a gauntlet of 16 PDFs of the sort I often read, including several PDFs from journal websites, a paper from arXiv, and multiple scanned-and-OCRed academic book chapters.

A quick search turned up way too many Mac or cloud-based programs to test, so I decided to focus on a few that were from major companies or were particularly easy to use.

[Read more…]

Some practical obstacles to becoming a fan of modern classical music

I think there is a ton of modern classical music (MCM) that listeners would enjoy if they had a way to more cheaply discover it.

Why can’t people cheaply discover MCM they like, and what can be done about it? Below are some guesses.

 

Obstacle 1: There are almost no full-time critics of classical/MCM music.

According to this post, there may be about a dozen full-time critics of classical/MCM music in the USA. This is probably more a symptom of the difficulty of exploring MCM than a cause, but it helps perpetuate the problem. The best solution to this is probably to increase demand for MCM critics by fixing the problems below (and perhaps others I’ve missed).

Obstacle 2: MCM critics do not rate works/albums or give them genre labels.

This one really bugs me. Honestly, how is someone with limited time supposed to navigate the world of MCM if nobody will tell them which works and albums are the best ones, and roughly what they sound like? Sure, this information is sometimes (but not always!) buried somewhere in reviews of new works or albums, but it needs to be at the top of the review, right under the artist and work/album title. Yeah, yeah, musical quality can’t be reduced to a single number and a list of genre tags, blah blah blah GET OVER IT and BE MORE HELPFUL.

Obstacle 3: Because of obstacle #2, there’s no way to aggregate expert opinion on MCM works.

Rock/pop fans have Metacritic, AOTY, etc. This is impossible for MCM because MCM critics do not rate albums.

Obstacle 4: MCM critics don’t even make lists.

Even if critics didn’t rate every album they reviewed, they could at least make year-end “best of” lists and genre-specific “best of” lists. But MCM critics almost never do this. Seriously, how is anyone with limited time supposed to navigate MCM without listicles?

Obstacle 5: Many MCM works aren’t recorded for years after their debut.

Suppose you read a review of a new rock/pop album, and you want to hear it. What do you do? You stream it on Spotify or buy it on iTunes.

But suppose you read a review of a new MCM work, and you want to hear it. What do you do? The answer is probably “buy a plane ticket to another city on a specific date and pay $80 to hear it once in a concert hall, otherwise wait 1-15 years for the work to be recorded and released and hope you remember to listen to it then.” To someone used to the rock/pop world, this is utter madness.

To become more consumable by a mass-ish market, MCM works need to be recorded and released first, and performed for the public later, after people have had a chance to stream or buy them and decide whether they want to endure the cost and inconvenience of seeing them live.

Unfortunately, I don’t know enough about the MCM business to know how to fix this.

 

Not on this list

Conspicuously missing from my list of obstacles is “most MCM composers write unlistenable random noise.” They do, of course, but I don’t see that as a problem.

“Unlistenable random noise” is hyperbole, I know, except for some pieces like John Cage’s HPSCHD. What I mean is that most MCM composers tend to write music that sounds to most people like “unlistenable random noise.” As Joe Queenan put it,

During a radio interview between acts at the Metropolitan Opera in New York, a famous singer recently said she could not understand why audiences were so reluctant to listen to new music, given that they were more than ready to attend sporting events whose outcome was uncertain. It was a daft analogy… when Spain plays Germany, everyone knows that the game will be played with one ball, not eight; and that the final score will be 1-0 or 3-2 or even 8-1 – but definitely not 1,600,758 to Arf-Arf the Chalet Ate My Banana. The public may not know in advance what the score will be, but it at least understands the rules of the game.

… It is not the composers’ fault that they wrote uncompromising music… but it is not my fault that I would rather listen to Bach.

If the obstacles listed above were fixed (plus some I probably missed), then MCM would be in the same place rock/pop music is: composers can write whatever they want, including unlistenable random noise, and some of it will find a big audience, most of it won’t, and that’s fine.

The critical importance of good headphones

If you’re exploring musical styles, e.g. via my guides to contemporary classical and modern art jazz, remember to get some good headphones. This is obvious in retrospect but often neglected. When I switched from cheap to good headphones years ago, I realized there were entire instruments in my favorite songs that I hadn’t been hearing. When my boss Holden finally got good headphones, he started to like minimalism much more than he had previously. Seriously, get some decent headphones.

Which headphones, exactly? Probably just get whatever The Wirecutter recommends for your style: on-ear vs. over-ear vs. in-ear, wireless vs. wired, exercise vs. normal, etc.

You’ll hear a big difference between e.g. the default iPhone headphones and something that costs $70, and you’ll probably hear another difference between a $70 set and a $300 set, but I’m skeptical that most people can hear a difference beyond $300 (for headphones).

If you’re very cost conscious, go with the currently-$23 in-ear headphones here or the currently-$85 over-ear headphones here.

If you’re less cost conscious, go with either the wired or Bluetooth noise-canceling over-ear options here. (I suspect everyone can tell the difference between active noise-canceling and no noise-canceling, but few people can tell the difference between the very good sound of the best noise-canceling headphones and the absolute best sound quality available from non-noise-canceling headphones.)

Dietterich and Horvitz on AI risk

Tom Dietterich and Eric Horvitz have a new opinion piece in Communications of the ACM: Rise of Concerns about AI. Below, I comment on a few passages from the article.

 

Several of these speculations envision an “intelligence chain reaction,” in which an AI system is charged with the task of recursively designing progressively more intelligent versions of itself and this produces an “intelligence explosion.”

I suppose you could “charge” an advanced AI with the task of undergoing an intelligence explosion, but that seems like an incredibly reckless thing for someone to do. More often, the concern is about intelligence explosion as a logical byproduct of the convergent instrumental goal of self-improvement. Nearly all possible goals are more likely to be achieved if the AI can first improve its capabilities, whether the goal is calculating digits of Pi or optimizing a manufacturing process. This is the argument given in the book Dietterich and Horvitz cite for these concerns: Nick Bostrom’s Superintelligence.

 

[Intelligence explosion] runs counter to our current understandings of the limitations that computational complexity places on algorithms for learning and reasoning…

I follow this literature pretty closely, and I haven’t heard of this result. No citation is provided, so I don’t know what they’re talking about. I doubt this is the kind of thing you can show using computational complexity theory, given how under-specified the concept of intelligence explosion is.

Fortunately, Dietterich and Horvitz do advocate several lines of research to make AI systems safer and more secure, and they also say:

we believe scholarly work is needed on the longer-term concerns about AI. Working with colleagues in economics, political science, and other disciplines, we must address the potential of automation to disrupt the economic sphere. Deeper study is also needed to understand the potential of superintelligence or other pathways to result in even temporary losses of control of AI systems. If we find there is significant risk, then we must work to develop and adopt safety practices that neutralize or minimize that risk.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Pedro Domingos on AI risk

Pedro Domingos, an AI researcher at the University of Washington and the author of The Master Algorithm, on the podcast Talking Machines:

There are these fears that computers are going to get very smart and then suddenly they’ll become conscious and they’ll take over the world and enslave us or destroy us like the Terminator. This is completely absurd. But even though it’s completely absurd, you see a lot of it in the media these days…

Domingos doesn’t identify which articles he’s talking about, but nearly all the articles like this that I’ve seen lately are inspired by comments on AI risk from Stephen Hawking, Elon Musk, and Bill Gates, which in turn are (as far as I know) informed by Nick Bostrom’s Superintelligence.

None of these people, as far as I know, have expressed a concern that machines will suddenly become conscious and then take over the world. Rather, these people are concerned with the risks posed by extreme AI competence, as AI scientist Stuart Russell explains.

I don’t know what source Domingos read that talked about machines suddenly becoming conscious and taking over the world. I don’t think I’ve seen such scenarios described outside of fiction.

Anyway, in his book, Domingos does seem to be familiar with the competence concern:

The point where we could turn off all our computers without causing the collapse of modern civilization has long passed. Machine learning is the last straw: if computers can start programming themselves, all hope of controlling them is surely lost. Distinguished scientists like Stephen Hawking have called for urgent research on this issue before it’s too late.

Relax. The chances that an AI equipped with the [ultimate machine learning algorithm] will take over the world are zero. The reason is simple: unlike humans, computers don’t have a will of their own. They’re products of engineering, not evolution. Even an infinitely powerful computer would still be only an extension of our will and nothing to fear…

[AI systems] can vary what they do, even come up with surprising plans, but only in service of the goals we set them. A robot whose programmed goal is “make a good dinner” may decide to cook a steak, a bouillabaisse, or even a delicious new dish of its own creation, but it can’t decide to murder its owner any more than a car can decide to fly away.

…[The] biggest worry is that, like the proverbial genie, the machines will give us what we ask for instead of what we want. This is not a hypothetical scenario; learning algorithms do it all the time. We train a neural network to recognize horses, but it learns instead to recognize brown patches, because all the horses in its training set happened to be brown.

I was curious to see what his rebuttal to the competence concern (“machines will give us what we ask for instead of what we want”) was, but this section just ends with:

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.

Which isn’t very clarifying, especially since elsewhere he writes that “any sufficiently advanced AI is indistinguishable from God.”

The next section is about Kurzweil’s law of accelerating returns, and doesn’t seem to address the competence concern.

So… I guess I can’t tell why Domingos thinks the chances of a global AI catastrophe are “zero,” and I can’t tell what he thinks of the basic competence concern expressed by Hawking, Musk, Gates, Russell, Bostrom, etc.

Update 05-09-2016: For additional Domingos comments on risks from advanced AI, see this episode of EconTalk, starting around minute 50.

(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)

Unabashedly emotional or catchy avant-garde music

Holden wrote me a fictional conversation to illustrate his experience of trying to find music that is (1) complex, (2) structurally interesting, and yet (3) listenable / emotional / catchy (at least in some parts):

Holden: I’m bored by pop music. Got anything interesting?

Person: Here, try this 7-second riff played repeatedly for 26 minutes.

Holden: Umm … but what about … something a little more varied?

Person: Check out 38 minutes of somebody screaming incoherently while 5 incompatible instruments play random notes and a monotone voice recites surreal poetry.

Holden: But like … uh … more listenable maybe?

Person: I thought you didn’t want pop bullshit. Well, here’s something middlebrow: a guy playing 3 chords on a guitar who sounds kind of sarcastic.

Holden’s three criteria describe a great deal of my favorite music, much of which is scattered throughout my guides to modern classical music and modern art jazz. So if those criteria sound good to you, too, then I’ve listed below a few musical passages you might like.

Osvaldo Golijov, The Dreams and Prayers of Isaac the Blind, I. Agitato, 4:53-7:45

A string quartet + clarinet piece with a distinctly Jewish sound which, in this passage, sounds to me like a scene of building tension and frantic activity until all falls away (6:23) and the clarinet screams prayers of desperation to God (6:59).

Carla Bley, Escalator Over the Hill, Hotel Overture, 6:26-10:30

A circus-music refrain ambles along until things slow down (7:40) and a sax begins to solo (7:45) over repeating fatalistic-sounding chords in a way that, like the clarinet in the passage above, sounds to me like a cry of desperation, one with a cracking voice (e.g. at 8:03 & 8:09), and, at times, non-tonal gargled screaming (8:32), finally fading back into earlier themes from the overture (9:45).

Arvo Pärt, Tabula Rasa, Ludus, 4:10-9:52

Violins swirl around chords that seem to endlessly descend until a period of relative quiet (4:50) accented by bells. The earlier pattern returns (5:26), eventually picking up pace (5:44), until a momentary return to the calm of the bells (6:14). Then another return to the swirling violins (6:55), which again pick up their pace but this time with a thundering crash (7:15) that foreshadows the destruction that lies ahead. The violins ascend to a peak (7:55), and then quiver as they fall — farther and farther — until booming chords (8:44) announce the final desperate race (8:49) to the shattering end (9:36). If this doesn’t move you, you might be dead.

Sergey Kuryokhin, The Sparrow Oratorium, Summer, 0:55-4:36

Squeaky strings wander aimlessly until the piece suddenly jumps into a rollicking riff (1:11) that will be repeated throughout the piece. Variations on this riff continue as a high-gain guitar plays a free jazz solo. The solo ends (2:30), the noise builds, and then suddenly transitions (2:46) to a silly refrain of “zee zee zee zee…” and other vocalizations and then (3:17) a female pop singer with a soaring chorus that bleeds into (4:05) a variation on the original riff with sparse instrumentation which then launches into a louder, fuller-sounding version of the riff (4:20). (To me, this track is more catchy than emotional.)

John Adams, Harmonielehre, 1st movement, 12:19-16:18

Melancholy strings descend, but there is tension in the mood, announced by an ominous trill (12:45), and then another (12:51). But then, the mood lifts with piano and woodwinds (13:03) repeating an optimistic chord. The music accelerates, and takes another tonal shift toward a tense alert (13:22). Booming brass and drums enter (13:41) as things continue to accelerate, and the drums and brass strike again (14:29) and drag the whole piece down with them, in pitch and pace. The strings and horns struggle to rise again until the horns soar free (15:11). The instruments rise and accelerate again until they break through to the upper atmosphere (15:32). Then they pull back, as if they see something up ahead, and… BOOM (16:04) there are the thundering E minor chords that opened the piece, here again to close the movement.

Artistic greatness, according to my brain: a first pass

Why do some pieces of music, or film, or visual art strike me as “great works of art,” independent of how much I enjoy them? When I introspect on this, and when I test my brain’s mostly subconscious “artistic greatness function” against a variety of test cases, it seems to me that the following features play a major role in my artistic greatness judgments:

  1. Innovation
  2. Cohesion
  3. Execution
  4. Emotionality

Below is a brief sketch of what I mean by each of these. In the future, I’ll elaborate on various points, and run additional thought experiments as I try to work toward reflective equilibrium for my aesthetic judgments.

[Read more…]

Anthony Braxton albums

I recently listened to 70 albums by Anthony Braxton, several of them very long, between 2 and 12 CDs. I thought a few of his albums were pretty great as works of art, but I enjoyed exactly one of them (Eugene), and I thought zero of them were accessible enough to include in my guide.

I should also mention that I basically only listened to the albums with original compositions on them. And if you know Braxton, you know these compositions are complicated — e.g. Composition 82, scored for simultaneous performance by four different orchestras. The dude is prolific.

To see which Braxton albums I listened to, Ctrl+F on my (in-progress) jazz guide for “Anthony Braxton’s” (it’s in a footnote).

 

Elon Musk on saving the world

On The Late Show, Stephen Colbert asked Elon Musk if he was trying to save the world. The obvious, transparent answer is “yes.” But Elon’s reply was “Well, I’m trying to do useful things.” Perhaps Elon’s PR person told him that trying to save the world comes off as arrogant, even if you’re a billionaire.

 

“Fewer Data, Please”

The BMJ has some neat features, such as paper-specific “instant responses” and published peer-review correspondence.

The latter feature allowed me to discover that in their initial “revise and resubmit” comments on a recent meta-analysis on sugar-sweetened beverages and type 2 diabetes, the BMJ manuscript committee asked the study’s authors to provide fewer data:

There is a very large number of supplementary files, tables and diagrams. It would be helpful if these could be reduced to the most important and essential supplementary items.

What? Why? Are they going to run out of server space? Give me ALL THE DATA! Finding the data I want in a huge 30 MB supplementary data file is still much easier than asking the corresponding author for it 3 years later.

Is this kind of reviewer feedback common? I thought the top journals were generally on board with the open science trend.

Powerful musical contrasts

For a couple months I’d been listening almost exclusively to jazz music, while working on my jazz guide. Then on a whim I decided to listen to a song I hadn’t heard in a long time, Smashing Pumpkins’ brutal rocker “Geek USA,” and it absolutely blew my mind, as if I was listening to the first piece of rock music invented, in a world that had only previously known folk, classical, and jazz.

The experience reminded me of my favorite scene from Back to the Future, where Marty — who has traveled back in time to 1955 — ends a 1950s rock-and-roll classic with an increasingly intense guitar solo that completely bewilders the 1950s crowd that has never heard hard rock virtuoso guitar before:

[Read more…]

Reply to Ng on AI risk

On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):

Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.

And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…

Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.

In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.

That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).

Of course if I was pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.

That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.

I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Etzioni on AI risk

Back in December 2014, AI scientist Oren Etzioni wrote an article called “AI Won’t Exterminate Us — it Will Empower Us.” He opens by quoting the fears of Musk and Hawking, and then says he’s not worried. Why not?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and… beat humans at their own game.

But of course, the people talking about AI as a potential existential risk aren’t worried about AIs creating their own goals, either. Instead, the problem is that an AI optimizing very competently for the goals we gave it presents a threat to our survival. (For details, read just about anything on the topic that isn’t a news story, from Superintelligence to Wait But Why to Wikipedia, or watch this talk by Stuart Russell.)

Etzioni continues:

…the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

First, most of the people concerned about AI as a potential extinction risk don’t think “full artificial intelligence” (aka AGI) will arrive in the next 25 years, either.

Second, I think most of Etzioni’s colleagues in AI would disagree with his claim that the arrival of AGI within 25 years is “far less likely than an asteroid striking the earth and annihilating us” (in the same 25-year time horizon).

Step one: what do AI scientists think about the timing of AGI? In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 10% confident AGI will be built” was 2024. The median response for 50% confidence of AGI was 2050. So, top-of-the-field AI researchers tend to be somewhere between 10% and 50% confident that AGI will be built within Etzioni’s 25-year timeframe.

Step two: how likely is it that an asteroid will strike Earth and annihilate us in the next 25 years? The nice thing about this prediction is that we actually know quite a lot about how frequently large asteroids strike Earth. We have hundreds of millions of years’ worth of data. And even without looking at that data, we know that an asteroid large enough to “annihilate us” hasn’t struck Earth throughout all of primate history — because if it had, we wouldn’t be here! Also, NASA conducted a pretty thorough search for nearby asteroids a while back, and — long story short — they’re pretty confident they’ve identified all the civilization-ending asteroids nearby, and none of them are going to hit Earth. The probability of an asteroid annihilating us in the next 25 years is much, much smaller than 1%.
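For a rough back-of-the-envelope check, assume (purely for illustration) one civilization-threatening impact per 100 million years, which is in the neighborhood of the Chicxulub-scale base rate:

```python
import math

# Illustrative assumption: ~1 civilization-threatening impact per 100 million years.
rate_per_year = 1 / 100_000_000
years = 25

# Poisson probability of at least one such impact in the next 25 years.
p_impact = 1 - math.exp(-rate_per_year * years)
print(p_impact)  # ~2.5e-07, i.e. vastly smaller than 1%
```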

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Tabarrok on AI risk

At Marginal Revolution, economist Alex Tabarrok writes:

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed.

First, remember that Naam quoted only the prestigious AI scientists who agree with him, and conspicuously failed to mention that many prestigious AI scientists past and present have taken AI risk seriously.

Second, the common disagreement is not, primarily, about the timing of AGI. As I’ve explained many times before, the AI timelines of those talking about the long-term risk are not noticeably different from those of the mainstream AI community. (Indeed, both Nick Bostrom and myself, and many others in the risk-worrying camp, have later timelines than the mainstream AI community does.)

But the main argument of Tabarrok’s post is this:

Why should we be worried about the end of the human race? Oh sure, there are some Terminator like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it.”  I see nothing objectionable in this scenario.

The people who write about existential risk at FHI, MIRI, CSER, FLI, etc. tend not to be worried about Tabarrok’s “glide” scenario. Speaking for myself, at least, that scenario seems pretty desirable. I just don’t think it’s very likely, for reasons partially explained in books like Superintelligence, Global Catastrophic Risks, and others.

(Note that although I work as a GiveWell research analyst, I do not study global catastrophic risks or AI for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Buchanan on AI risk

Back in February, The Washington Post posted an opinion article by David Buchanan of the IBM Watson team: “No, the robots are not going to rise up and kill you.”

From the title, you might assume “Okay, I guess this isn’t about the AI risk concerns raised by MIRI, FHI, Elon Musk, etc.” But in the opening paragraph, Buchanan makes clear he is trying to respond to those concerns, by linking here and here.

I am often suspicious that many people in the “nothing to worry about” camp think they are replying to MIRI & company but are actually replying to Hollywood.

And lo, when Buchanan explains the supposed concern about AI, he doesn’t link to anything by MIRI & company, but instead he literally links to IMDB pages for movies/TV about AI:

Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy…

The entire rest of the article is about the consciousness fallacy. But of course, everyone at MIRI and FHI, and probably Musk as well, agrees that intelligence doesn’t automatically create consciousness, and that has never been what MIRI & company are worried about.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Videogames as art

I still basically agree with this 4-minute video essay I produced way back in 2008:

Transcript

When the motion picture was invented, critics considered it an amusing toy. They didn’t see its potential to be an art form like painting or music. But only a few decades later, film was in some ways the ultimate art — capable of passion, lyricism, symbolism, subtlety, and beauty. Film could combine the elements of all other arts — music, literature, poetry, dance, staging, fashion, and even architecture — into a single, awesome work. Of course, film will always be used for silly amusements, but it can also express the highest of art. Film has come of age.

In the 1960s, computer programmers invented another amusing toy: the videogame. Nobody thought it could be a serious art form, and who could blame them? Super Mario Brothers didn’t have much in common with Citizen Kane. And, nobody was even trying to make artistic games. Companies just wanted to make fun playthings that would sell lots of copies.

But recently, games have started to look a lot more like the movies, and people wondered: “Could this become a serious art form, like film?” In fact, some games basically were films with tiny bits of gameplay snuck in.

Of course, there is one major difference between films and games. Film critic Roger Ebert thinks games can never be an art form because

Videogames by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control.

But wait a minute. Aren’t there already serious art forms that allow for flexibility, improvisation, and player choices? Bach and Mozart and other composers famously left room for improvisation in their classical compositions. And of course jazz music is an art form based almost entirely on improvisation within a set of scales or modes or ideas. Avant-garde composers Christian Wolff and John Zorn write “game pieces” in which there are no prearranged notes at all. Performers play according to an unfolding set of rules exactly as in baseball or Mario. So gameplay can be art.

Maybe the real reason some people don’t think games are an art form is that they don’t know any artistic video games. Even the games with impressive graphic design and good music have pretty hokey stories and unoriginal drive-jump-shoot gameplay. And for the most part they’re right: there aren’t many artistic games. Games are only just becoming an art form. It took film a while to become art, too.

But maybe the skeptics haven’t played the right games, either. Have they played Shadow of the Colossus, a minimalist epic of beauty and philosophy? Have they played Façade, a one-act play in which the player tries to keep a couple together by listening to their dialogue, reading their facial expressions, and responding in natural language? Have they seen The Night Journey, by respected video artist Bill Viola, which intends to symbolize a mystic’s path towards enlightenment?

It’s an exciting time for video games. They will continue to deliver simple fun and blockbuster entertainment, but there is also an avant-garde movement of serious artists who are about to launch the medium to new heights of expression, and I for one can’t wait to see what they come up with.

F.A.Q. about my transition to GiveWell

Lots of people are asking for more details about my decision to take a job at GiveWell, so I figured I should publish answers to the most common questions I’ve gotten, though I’m happy to also talk about it in person or by email.

Why did you take a job at GiveWell?

Apparently some people think I must have changed my mind about what I think Earth’s most urgent priorities are. So let me be clear: Nothing has changed about what I think Earth’s most urgent priorities are.

I still buy the basic argument in “Friendly AI research as effective altruism.”

I still think that growing a field of technical AI alignment research, one which takes the future seriously, is plausibly the most urgent task for those seeking a desirable long-term future for Earth-originating life.

And I still think that MIRI has an incredibly important role to play in growing that field of technical AI alignment research.

I decided to take a research position at GiveWell mostly for personal reasons.

I have always preferred research over management. As many of you who know me in person already know, I’ve been looking for my replacement at MIRI since the day I took the Executive Director role, so that I could return to research. When doing research I very easily get into a flow state; I basically never get into a flow state doing management. I’m pretty proud of what the MIRI team accomplished during my tenure, and I could see myself being an executive somewhere again some day, but I want to do something else for a while.

Why not switch to a research role at MIRI? First, I continue to think MIRI should specialize in computer science research that I don’t have the training to do myself. Second, I look forward to upgrading my research skills while working in domains where I don’t already have lots of pre-existing bias.

[Read more…]

Krauss on long-term AI impacts

Physicist Lawrence Krauss says he’s not worried about long-term AI impacts, but he doesn’t respond to any of the standard arguments for concern, so it’s unclear whether he knows much about the topic.

The only argument he gives in any detail has to do with AGI timing:

Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 Terawatts of power, within a factor of two of the current power consumption of all of humanity. However, the human brain uses about 10 watts of power. This means a mismatch of a factor of 10^12, or a million million. Over the past decade the doubling time for Megaflops/watt has been about 3 years. Even assuming Moore’s Law continues unabated, this means it will take about 40 doubling times, or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it is extremely unlikely that 40 such doublings could be achieved without essentially changing the way computers compute.
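For what it’s worth, the arithmetic in that passage does check out if you take his numbers at face value; here’s the calculation spelled out:

```python
import math

# Krauss's numbers, taken at face value: a 10^12 efficiency gap,
# closed by a doubling in Megaflops/watt roughly every 3 years.
efficiency_gap = 1e12
doubling_time_years = 3

doublings_needed = math.log2(efficiency_gap)            # ~39.9 doublings
years_needed = doublings_needed * doubling_time_years   # ~120 years
print(doublings_needed, years_needed)
```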

Krauss doesn’t say where he got his numbers for the power requirements of “a computer with the storage and processing capability of the human mind,” but there are a few things I can say even leaving that aside.

First, few AI scientists think AGI will be built so similarly to the human brain that having “the storage and processing capability of the human mind” is all that relevant. We didn’t build planes like birds.

Second, Krauss warns that “each doubling in efficiency requires a relatively radical change in technology…” But Koomey’s law — the Moore’s law of computing power efficiency — has been stable since about 1946, which runs through several radical changes in computing technology. Somehow we manage, when there is tremendous economic incentive to do so.

Third, just because the human brain achieves general intelligence with ~10 watts of energy doesn’t mean a computer has to. A machine superintelligence the size of a warehouse is still a challenge to be reckoned with!

Added 08-28-15: Also see Anders Sandberg’s comments on Krauss’ calculations.

Added 02-18-16: Sandberg wrote a version of his comments for arxiv, here.

GSS Tutorial #1: Basic trends over time

Part of the series: How to research stuff.

Today I join Razib Khan’s quest to get bloggers to use the General Social Survey (GSS) more often.

The GSS is a huge collection of data on the demographics and attitudes of non-institutional adults (18+) living in the US. The data were collected by NORC via face-to-face, 90-minute interviews in randomly selected households, every year (almost) from 1972–1994, and every other year since then.

You can download the data and analyze it in R or SPSS or whatever, but it can also be analyzed via two easy-to-use web interfaces: the UC Berkeley SDA site and the GSS Data Explorer.
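(If you’d rather script it than click through a web interface, here’s a minimal sketch; the file name and variable name are placeholders for whatever extract you download from the GSS Data Explorer.)

```python
import pandas as pd

# Placeholder: a CSV extract downloaded from the GSS Data Explorer,
# one row per respondent, with a 'year' column and a variable of interest.
gss = pd.read_csv("gss_extract.csv")

# Basic trend over time: share of respondents giving a particular answer, by year.
trend = gss.groupby("year")["some_variable"].apply(lambda s: (s == 1).mean())
print(trend)
```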

[Read more…]