Books, music, etc. from June 2015

Books

I’m not Murray’s intended audience for By the People, but I found it pretty interesting after the first couple chapters, even though I probably agree with Murray about very little in the policy space.

I read the first 1/4 of Barzun’s From Dawn to Decadence. It’s fairly good for what it is, but it’s the “bunch of random cool stories about history” kind of macrohistory, not the data-driven kind of macrohistory I prefer, so I gave up on it.

Music

This month I listened to dozens of jazz albums while working on my in-progress jazz guide. I think I had heard most of them before, but it’s hard to remember which ones. My favorite listens I don’t think I’d heard before were:

Music I particularly enjoyed from other genres:

Movies/TV

I totally loved Inside Out (2015). It’s one of the contenders for best Pixar film ever.

TV’s Wayward Pines is badly written in some ways, but its 5th episode is one of the most satisfying mystery/puzzle resolutions I’ve ever seen. The first four episodes build up a bunch of bizarre mysteries, and then the 5th episode answers most of them in a way that is surprising and rule-constrained and non-arbitrary (e.g. not magic), which is something I see so rarely I can’t even remember the last time I saw it on film/TV.

June 2015 links, round 2

Authorea actually looks pretty awesome for collaborative research paper writing. (So far I’ve been using Overleaf and sometimes… shudder… Google Docs.)

Abstract of a satirical paper from SIGBOVIK 2014:

Besides myriad philosophical disputes, neither [frequentism nor Bayesianism] accurately describes how ordinary humans make inferences… To remedy this problem, we propose belief-sustaining (BS) inference, which makes no use of the data whatsoever, in order to satisfy what we call “the principle of least embarrassment.” This is a much more accurate description of human behavior. We believe this method should replace Bayesian and frequentist inference for economic and public health reasons.

Understanding statistics through interactive visualizations.

GiveWell shallow investigation of risks from atomically precise manufacturing.

My beginner’s guide to modern classical music is basically finished now, and won’t be changing much in the future.

Effective altruist philosophers.

Peter Singer’s Coursera course on effective altruism.

The top 10 mathematical achievements of the last 5ish years, maybe.

Unfortunate statistical terms.

 

AI stuff

Open letter on the digital economy, about tech unemployment etc. Carl Shulman comments.

Robot swordsman.

Robots falling down during the latest DARPA Robotics Challenge.

AI Impacts collected all known public predictions of AGI timing, both individual predictions and survey medians. Conclusions here.

Reply to Ng on AI risk

On a recent episode of the excellent Talking Machines podcast, guest Andrew Ng — one of the big names in deep learning — discussed long-term AI risk (starting at 32:35):

Ng: …There’s been this hype about AI superintelligence and evil robots taking over the world, and I think I don’t worry about that for the same reason I don’t worry about overpopulation on Mars… we haven’t set foot on the planet, and I don’t know how to productively work on that problem. I think AI today is becoming much more intelligent [but] I don’t see a realistic path right now for AI to become sentient — to become self-aware and turn evil and so on. Maybe, hundreds of years from now, someone will invent a new technology that none of us have thought of yet that would enable an AI to turn evil, and then obviously we have to act at that time, but for now, I just don’t see a way to productively work on the problem.

And the reason I don’t like the hype about evil killer robots and AI superintelligence is that I think it distracts us from a much more serious conversation about the challenge that technology poses, which is the challenge to labor…

Both Ng and the Talking Machines co-hosts talk as though Ng’s view is the mainstream view in AI, but — with respect to AGI timelines, at least — it isn’t.

In this podcast and elsewhere, Ng seems somewhat confident (>35%, maybe?) that AGI is “hundreds of years” away. This is somewhat out of sync with the mainstream of AI. In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 90% confident AGI will be built” was 2070. The median response for 50% confidence of AGI was 2050.

That’s a fairly large difference of opinion between the median top-notch AI scientist and Andrew Ng. Their probability distributions barely overlap at all (probably).

Of course if I was pretty confident that AGI was hundreds of years away, I would also suggest prioritizing other areas, plausibly including worries about technological unemployment. But as far as we can tell, very few top-notch AI scientists agree with Ng that AGI is probably more than a century away.

That said, I do think that most top-notch AI scientists probably would agree with Ng that it’s too early to productively tackle the AGI safety challenge, even though they’d disagree with him on AGI timelines. I think these attitudes — about whether there is productive work on the topic to be done now — are changing, but slowly.

I will also note that Ng doesn’t seem to understand the AI risks that people are concerned about. Approximately nobody is worried that AI is going to “become self-aware” and then “turn evil,” as I’ve discussed before.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Etzioni on AI risk

Back in December 2014, AI scientist Oren Etzioni wrote an article called “AI Won’t Exterminate Us — it Will Empower Us.” He opens by quoting the fears of Musk and Hawking, and then says he’s not worried. Why not?

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and… beat humans at their own game.

But of course, the people talking about AI as a potential existential risk aren’t worried about AIs creating their own goals, either. Instead, the problem is that an AI optimizing very competently for the goals we gave it presents a threat to our survival. (For details, read just about anything on the topic that isn’t a news story, from Superintelligence to Wait But Why to Wikipedia, or watch this talk by Stuart Russell.)

Etzioni continues:

…the emergence of “full artificial intelligence” over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us.

First, most of the people concerned about AI as a potential extinction risk don’t think “full artificial intelligence” (aka AGI) will arrive in the next 25 years, either.

Second, I think most of Etzioni’s colleagues in AI would disagree with his claim that the arrival of AGI within 25 years is “far less likely than an asteroid striking the earth and annihilating us” (in the same 25-year time horizon).

Step one: what do AI scientists think about the timing of AGI? In a recent survey of the top-cited living AI scientists in the world, the median response for (paraphrasing) “year by which you’re 10% confident AGI will be built” was 2024. The median response for 50% confidence of AGI was 2050. So, top-of-the-field AI researchers tend to be somewhere between 10% and 50% confident that AGI will be built within Etzioni’s 25-year timeframe.

Step two: how likely is it that an asteroid will strike Earth and annihilate us in the next 25 years? The nice thing about this prediction is that we actually know quite a lot about how frequently large asteroids strike Earth. We have hundreds of millions of years’ worth of data. And even without looking at that data, we know that an asteroid large enough to “annihilate us” hasn’t struck Earth throughout all of primate history — because if it had, we wouldn’t be here! Also, NASA conducted a pretty thorough search for nearby asteroids a while back, and — long story short — they’re pretty confident they’ve identified all the civilization-ending asteroids nearby, and none of them are going to hit Earth. The probability of an asteroid annihilating us in the next 25 years is much, much smaller than 1%.
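To put a rough number on that reasoning, here is a back-of-the-envelope version of the argument, treating impacts as a Poisson process. The once-per-100-million-years rate for dinosaur-killer-scale impacts is an illustrative assumption on my part, not a figure from NASA's survey:

```python
import math

# Illustrative assumption (mine, not NASA's): civilization-ending
# impacts of the dinosaur-killer class happen on the order of once
# per 100 million years.
mean_interval_years = 100e6
horizon_years = 25

rate_per_year = 1 / mean_interval_years
p_impact = 1 - math.exp(-rate_per_year * horizon_years)  # Poisson model

print(f"P(civilization-ending impact in {horizon_years} years) ~ {p_impact:.1e}")
# -> about 2.5e-07, i.e. vastly smaller than 1%
```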

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Some books I’m looking forward to, June 2015 edition

Reply to Tabarrok on AI risk

At Marginal Revolution, economist Alex Tabarrok writes:

Stephen Hawking fears that “the development of full artificial intelligence could spell the end of the human race.” Elon Musk and Bill Gates offer similar warnings. Many researchers in artificial intelligence are less concerned primarily because they think that the technology is not advancing as quickly as doom scenarios imagine, as Ramez Naam discussed.

First, remember that Naam quoted only the prestigious AI scientists who agree with him, and conspicuously failed to mention that many prestigious AI scientists past and present have taken AI risk seriously.

Second, the common disagreement is not, primarily, about the timing of AGI. As I’ve explained many times before, the AI timelines of those talking about the long-term risk are not noticeably different from those of the mainstream AI community. (Indeed, Nick Bostrom, I, and many others in the risk-worrying camp have later timelines than the mainstream AI community does.)

But the main argument of Tabarrok’s post is this:

Why should we be worried about the end of the human race? Oh sure, there are some Terminator like scenarios in which many future-people die in horrible ways and I’d feel good if we avoided those scenarios. The more likely scenario, however, is a glide path to extinction in which most people adopt a variety of bionic and germ-line modifications that over-time evolve them into post-human cyborgs. A few holdouts to the old ways would remain but birth rates would be low and the non-adapted would be regarded as quaint, as we regard the Amish today. Eventually the last humans would go extinct and 46andMe customers would kid each other over how much of their DNA was of the primitive kind while holo-commercials advertised products “so easy a homo sapiens could do it.”  I see nothing objectionable in this scenario.

The people who write about existential risk at FHI, MIRI, CSER, FLI, etc. tend not to be worried about Tabarrok’s “glide” scenario. Speaking for myself, at least, that scenario seems pretty desirable. I just don’t think it’s very likely, for reasons partially explained in books like Superintelligence, Global Catastrophic Risks, and others.

(Note that although I work as a GiveWell research analyst, I do not study global catastrophic risks or AI for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Reply to Buchanan on AI risk

Back in February, The Washington Post posted an opinion article by David Buchanan of the IBM Watson team: “No, the robots are not going to rise up and kill you.”

From the title, you might assume “Okay, I guess this isn’t about the AI risk concerns raised by MIRI, FHI, Elon Musk, etc.” But in the opening paragraph, Buchanan makes clear he is trying to respond to those concerns, by linking here and here.

I am often suspicious that many people in the “nothing to worry about” camp think they are replying to MIRI & company but are actually replying to Hollywood.

And lo, when Buchanan explains the supposed concern about AI, he doesn’t link to anything by MIRI & company, but instead he literally links to IMDB pages for movies/TV about AI:

Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy…

The entire rest of the article is about the consciousness fallacy. But of course, everyone at MIRI and FHI, and probably Musk as well, agrees that intelligence doesn’t automatically create consciousness, and that has never been what MIRI & company are worried about.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

June 2015 links

Men have more hand-grip strength than women (on average), to an even greater degree than I thought.

I now have a Goodreads profile. I’m not going to bother making it exhaustive.

100+ interesting data sets for statistics.

Five silly fonts inspired by mathematical theorems or open problems.

Pre-registration prizes!

Critique of trim-and-fill as a technique for correcting for publication bias.

Interesting recent interview with Ioannidis.

I have begun work on A beginner’s guide to modern art jazz.

Data analysis subcultures.

Scraping for Journalism, a guide by ProPublica.

 

AI stuff

Scott Alexander, “AI researchers on AI risk” and “No time like the present for AI safety work.”

Stuart Russell & others in Nature on autonomous weapons.

MIT’s Cheetah robot now autonomously jumps over obstacles (video), and an injured robot learns how to limp.

Scherer’s “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies” discusses AI regulation possibilities in the context of both medium-term and long-term challenges, including superintelligence. I remain agnostic about whether regulation would be helpful at this stage.

(Note that although I work as a GiveWell research analyst, I do not study AI impacts for GiveWell, and my view on this is not necessarily GiveWell’s view.)

Morris’ thesis in Foragers, Farmers, and Fossil Fuels

From the opening of chapter 5:

I suggested that modern human values initially emerged somewhere around 100,000 years ago (±50,000 years) as a consequence of the biological evolution of our big, fast brains, and that once we had our big, fast brains, cultural evolution became a possibility too. Because of cultural evolution, human values have mutated rapidly in the last twenty thousand years, and the pace of change has accelerated in the last two hundred years.

I identified three major stages in human values, which I linked to foraging, farming, and fossil-fuel societies. My main point was that in each case, modes of energy capture determined population size and density, which in turn largely determined which forms of social organization worked best, which went on to make certain sets of values more successful and attractive than others.

Foragers, I observed, overwhelmingly lived in small, low-density groups, and generally saw political and wealth hierarchies as bad things. They were more tolerant of gender hierarchy, and (by modern lights) surprisingly tolerant of violence. Farmers lived in bigger, denser communities, and generally saw steep political, wealth, and gender hierarchies as fine. They had much less patience than foragers, though, for interpersonal violence, and restricted its range of legitimate uses more narrowly. Fossil-fuel folk live in bigger, denser communities still. They tend to see political and gender hierarchy as bad things, and violence as particularly evil, but they are generally more tolerant of wealth hierarchies than foragers, although not so tolerant as farmers.

Books, music, etc. from May 2015

Books

Learn or Die has some good chapters on Bridgewater, the rest is meh.

I read about half of Foragers, Farmers, and Fossil Fuels. The parts I read were good, but I lost interest because the book confirmed for me that we don’t have good evidence about the values of people over most of history, and so even the most well-argued book possible on the subject couldn’t be all that compelling. We have much better evidence for the historical measures used in Morris’ earlier book, The Measure of Civilization.

Consciousness and the Brain has several good chapters, and some chapters that are a bit too excited about the author’s personal favorite theory of consciousness.

The Sixth Extinction was an enjoyable read, but don’t go in expecting any argument.

Favorite tracks or albums discovered this month

Favorite movies discovered this month

Other

Of course the big news for me this month was that I took a new job at GiveWell, leaving MIRI in the capable hands of Nate Soares.

Videogames as art

I still basically agree with this 4-minute video essay I produced way back in 2008:

Transcript

When the motion picture was invented, critics considered it an amusing toy. They didn’t see its potential to be an art form like painting or music. But only a few decades later, film was in some ways the ultimate art — capable of passion, lyricism, symbolism, subtlety, and beauty. Film could combine the elements of all other arts — music, literature, poetry, dance, staging, fashion, and even architecture — into a single, awesome work. Of course, film will always be used for silly amusements, but it can also express the highest of art. Film has come of age.

In the 1960s, computer programmers invented another amusing toy: the videogame. Nobody thought it could be a serious art form, and who could blame them? Super Mario Brothers didn’t have much in common with Citizen Kane. And nobody was even trying to make artistic games. Companies just wanted to make fun playthings that would sell lots of copies.

But recently, games have started to look a lot more like the movies, and people have wondered: “Could this become a serious art form, like film?” In fact, some games basically were films with a bit of gameplay snuck in.

Of course, there is one major difference between films and games. Film critic Roger Ebert thinks games can never be an art form because

Videogames by their nature require player choices, which is the opposite of the strategy of serious film and literature, which requires authorial control.

But wait a minute. Aren’t there already serious art forms that allow for flexibility, improvisation, and player choices? Bach and Mozart and other composers famously left room for improvisation in their classical compositions. And of course jazz music is an art form based almost entirely on improvisation within a set of scales or modes or ideas. Avant-garde composers Christian Wolff and John Zorn write “game pieces” in which there are no prearranged notes at all. Performers play according to an unfolding set of rules exactly as in baseball or Mario. So gameplay can be art.

Maybe the real reason some people don’t think games are an art form is that they don’t know any artistic video games. Even the games with impressive graphic design and good music have pretty hokey stories and unoriginal drive-jump-shoot gameplay. And for the most part they’re right: there aren’t many artistic games. Games are only just becoming an art form. It took film a while to become art, too.

But maybe the skeptics haven’t played the right games, either. Have they played Shadow of the Colossus, a minimalist epic of beauty and philosophy? Have they played Façade, a one-act play in which the player tries to keep a couple together by listening to their dialogue, reading their facial expressions, and responding in natural language? Have they seen The Night Journey, by respected video artist Bill Viola, which intends to symbolize a mystic’s path towards enlightenment?

It’s an exciting time for video games. They will continue to deliver simple fun and blockbuster entertainment, but there is also an avant-garde movement of serious artists who are about to launch the medium to new heights of expression, and I for one can’t wait to see what they come up with.

F.A.Q. about my transition to GiveWell

Lots of people are asking for more details about my decision to take a job at GiveWell, so I figured I should publish answers to the most common questions I’ve gotten, though I’m happy to also talk about it in person or by email.

Why did you take a job at GiveWell?

Apparently some people think I must have changed my mind about what I think Earth’s most urgent priorities are. So let me be clear: Nothing has changed about what I think Earth’s most urgent priorities are.

I still buy the basic argument in Friendly AI research as effective altruism.

I still think that growing a field of technical AI alignment research, one which takes the future seriously, is plausibly the most urgent task for those seeking a desirable long-term future for Earth-originating life.

And I still think that MIRI has an incredibly important role to play in growing that field of technical AI alignment research.

I decided to take a research position at GiveWell mostly for personal reasons.

I have always preferred research over management. As many of you who know me in person already know, I’ve been looking for my replacement at MIRI since the day I took the Executive Director role, so that I could return to research. When doing research I very easily get into a flow state; I basically never get into a flow state doing management. I’m pretty proud of what the MIRI team accomplished during my tenure, and I could see myself being an executive somewhere again some day, but I want to do something else for a while.

Why not switch to a research role at MIRI? First, I continue to think MIRI should specialize in computer science research that I don’t have the training to do myself. Second, I look forward to upgrading my research skills while working in domains where I don’t already have lots of pre-existing bias.

[Read more…]

Krauss on long-term AI impacts

Physicist Lawrence Krauss says he’s not worried about long-term AI impacts, but he doesn’t respond to any of the standard arguments for concern, so it’s unclear whether he knows much about the topic.

The only argument he gives in any detail has to do with AGI timing:

Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require in excess of 10 Terawatts of power, within a factor of two of the current power consumption of all of humanity. However, the human brain uses about 10 watts of power. This means a mismatch of a factor of 10^12, or a million million. Over the past decade the doubling time for Megaflops/watt has been about 3 years. Even assuming Moore’s Law continues unabated, this means it will take about 40 doubling times, or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it is extremely unlikely that 40 such doublings could be achieved without essentially changing the way computers compute.
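For what it’s worth, the arithmetic from Krauss’s own premises does check out: closing a 10^12 efficiency gap at one doubling every 3 years takes log2(10^12) ≈ 40 doublings, or roughly 120 years. A quick sketch, using only his stated figures:

```python
import math

# Krauss's own figures: a 10^12 gap between current computing power
# efficiency and the brain's, closed by doubling Megaflops/watt every
# 3 years.
gap = 1e12
doubling_time_years = 3

doublings_needed = math.log2(gap)                     # ~39.9 doublings
years_needed = doublings_needed * doubling_time_years

print(f"{doublings_needed:.1f} doublings, ~{years_needed:.0f} years")
# -> 39.9 doublings, ~120 years
```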

Krauss doesn’t say where he got his numbers for the power requirements of “a computer with the storage and processing capability of the human mind,” but there are a few things I can say even leaving that aside.

First, few AI scientists think AGI will be built so similarly to the human brain that having “the storage and processing capability of the human mind” is all that relevant. We didn’t build planes like birds.

Second, Krauss warns that “each doubling in efficiency requires a relatively radical change in technology…” But Koomey’s law — the Moore’s law of computing power efficiency — has been stable since about 1946, which runs through several radical changes in computing technology. Somehow we manage, when there is tremendous economic incentive to do so.

Third, just because the human brain achieves general intelligence with ~10 watts of energy doesn’t mean a computer has to. A machine superintelligence the size of a warehouse is still a challenge to be reckoned with!

Pinker still confused about AI risk

Ack! Steven Pinker still thinks AI risk worries are worries about malevolent AI, despite multiple attempts to correct his misimpression:

John Lily: Silicon Valley techies are divided about whether to be fearful or dismissive of the idea of new super intelligent AI… How would you approach this issue?

Steven Pinker: …I think it’s a fallacy to conflate the ability to reason and solve problems with the desire to dominate and destroy, which sci-fi dystopias and robots-run-amok plots inevitably do. It’s a projection of evolved alpha-male psychology onto the concept of intelligence… So I don’t think that malevolent robotics is one of the world’s pressing problems.

Will someone please tell him to read… gosh, anything on the issue that isn’t a news story? He could also watch this talk by Stuart Russell if that’s preferable.

GSS Tutorial #1: Basic trends over time

Part of the series: How to research stuff.

Today I join Razib Khan’s quest to get bloggers to use the General Social Survey (GSS) more often.

The GSS is a huge collection of data on the demographics and attitudes of non-institutional adults (18+) living in the US. The data were collected by NORC via face-to-face, 90-minute interviews in randomly selected households, every year (almost) from 1972–1994, and every other year since then.

You can download the data and analyze it in R or SPSS or whatever, but the data can also be explored quickly via two easy-to-use web interfaces: the UC Berkeley SDA site and the GSS Data Explorer.
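If you’d rather script it than click through a web interface, here is a minimal sketch of a basic trend-over-time analysis in Python with pandas. It assumes you’ve exported a CSV extract from the GSS Data Explorer; the file name, the GRASS variable, and its coding are illustrative assumptions on my part, so check the codebook for whichever variable you actually pull.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumes a CSV extract exported from the GSS Data Explorer that
# includes a YEAR column and one attitude variable. GRASS ("should
# marijuana be made legal?") is used purely as an example.
gss = pd.read_csv("gss_extract.csv")

# Proportion of respondents giving the "legal" response (assumed to be
# coded 1 in this hypothetical extract), by survey year.
trend = (gss["GRASS"] == 1).groupby(gss["YEAR"]).mean()

trend.plot(marker="o")
plt.xlabel("Survey year")
plt.ylabel("Proportion saying marijuana should be legal")
plt.show()
```

Either web interface will get you a similar table in a few clicks, which is all you need for the basic trends covered in this tutorial.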

[Read more…]

Reply to Hawkins on AI risk

Jeff Hawkins, inventor of the Palm Pilot, has since turned his attention to neuro-inspired AI. In response to Elon Musk’s and Stephen Hawking’s recent comments on long-term AI risk, Hawkins argued that AI risk worriers suffer from three misconceptions:

  1. Intelligent machines will be capable of [physical] self-replication.
  2. Intelligent machines will be like humans and have human-like desires.
  3. Machines that are smarter than humans will lead to an intelligence explosion.

If you’ve been following this topic for a while, you might notice that Hawkins seems to be responding to something other than the standard arguments (now collected in Nick Bostrom’s Superintelligence) that are the source of Musk et al.’s concerns. Maybe Hawkins is responding to AI concerns as they are presented in Hollywood movies? I don’t know.

First, the Bostrom-Yudkowsky school of concern is not premised on physical self-replication by AIs. Self-replication does seem likely in the long run, but that’s not where the risk comes from. (As such, Superintelligence barely mentions physical self-replication at all.)

Second, these standard Bostrom-Yudkowsky arguments specifically deny that AIs will have human-like psychologies or desires. Certainly, the risk is not premised on such an expectation.

Third, Hawkins doesn’t seem to understand the concept of intelligence explosion being used by Musk and others, as I explain below.

[Read more…]

Minsky on AI risk in the 80s and 90s

Follow-up to: AI researchers on AI risk; Fredkin on AI risk in 1979.

Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge’s novel True Names:

The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want to the most!” The problem is that, with such powerful machines, it would require but the slightest powerful accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson; or to protect us from an unsuspected enemy, as in Colossus by D.H. Jones…

And according to Eric Drexler (2015), Minsky was making the now-standard “dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal” argument at least as early as 1990:

My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, ca. 1977. I recall a later conversation with Marvin Minsky (then chairing my doctoral committee, ca. 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a subgoal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.

 

Fredkin on AI risk in 1979

Recently, Ramez Naam posted What Do AI Researchers Think of the Risks of AI? while guest-blogging at Marginal Revolution. Naam quoted several risk skeptics like Ng and Etzioni, while conspicuously neglecting to mention any prominent AI people who take the risk seriously, such as Russell, Horvitz, and Legg. Scott Alexander at Slate Star Codex replied by quoting several prominent AI scientists past and present who seem to have taken the risk seriously. And let’s not forget that the leading AI textbook, by Russell and Norvig, devotes 3.5 pages to potential existential catastrophe from advanced AI, and cites MIRI’s work specifically.

Luckily we can get a clearer picture of current expert opinion by looking at the results of a recent survey which asked the top 100 most-cited living AI scientists when they thought AGI would arrive, how soon after AGI we’d get superintelligence, and what the likely social impact of superintelligence would be.

But at the moment, I just want to mention one additional computer scientist who seems to have been concerned about AI risk for a long time: Ed Fredkin.

In Pamela McCorduck’s history of the first few decades of AI, Machines Who Think (1979), Fredkin is quoted extensively on AI risk. Fredkin said (ch. 14):

Eventually, no matter what we do, there’ll be artificial intelligences with independent goals. I’m pretty much convinced of that. There may be a way to postpone it. There may even be a way to avoid it, I don’t know. But it’s very hard to have a machine that’s a million times smarter than you as your slave.

…And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn’t get pulled under any circumstances, regardless of its real motives — if it has any.

…I can’t persuade anyone else in the field to worry this way… They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, “Well yes, you’re right, but it would be a great disservice to the world to mention all this.”…my colleagues only tell me to wait, not to make my pitch until it’s more obvious that we’ll have artificial intelligences. I think by then it’ll be too late. Once artificial intelligences start getting smart, they’re going to be very smart very fast. What’s taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI lab may have immense power all of a sudden. It’s not that the United States might take over the world, it’s that Stanford AI Lab might.

…And so what I’m trying to do is take steps to see that… an international laboratory gets formed, and that these ideas get into the minds of enough people. McCarthy, for lots of reasons, resists this idea, because he thinks the Russians would be untrustworthy in such an enterprise, that they’d swallow as much of the technology as they could, contribute nothing, and meanwhile set up a shadow place of their own running at the exact limit of technology that they could get from the joint effort. And as soon as that made some progress, keep it secret from the rest of us so they could pull ahead… Yes, he might be right, but it doesn’t matter. The international laboratory is by far the best plan; I’ve heard of no better plan. I still would like to see it happen: let’s be active instead of passive…

…There are three events of equal importance, if you like. Event one is the creation of the universe. It’s a fairly important event. Event two is the appearance of life. Life is a kind of organizing principle which one might argue against if one didn’t understand enough — shouldn’t or couldn’t happen on thermodynamic grounds, or some such. And, third, there’s the appearance of artificial intelligence. It’s the question which deals with all questions… If there are any questions to be answered, this is how they’ll be answered. There can’t be anything of more consequence to happen on this planet.

Fredkin, now 80, continues to think about AI risk — about the relevance of certification to advanced AI systems, about the race between AI safety knowledge and AI capabilities knowledge, etc. I’d be very curious to learn what Fredkin thinks of the arguments in Superintelligence.

How much recent investment in AI?

Stuart Russell:

Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1960s].

My guess is that Russell doesn’t have a source for this, and that it’s an estimate based on his history in the field and his knowledge of what’s been happening lately. But it might very well be true; I’m not sure.

Also see How Big is the Field of Artificial Intelligence?

A reply to Wait But Why on machine superintelligence

Tim Urban of the wonderful Wait But Why blog recently wrote two posts on machine superintelligence: The Road to Superintelligence and Our Immortality or Extinction. These posts are probably now among the most-read introductions to the topic since Ray Kurzweil’s 2006 book.

In general I agree with Tim’s posts, but I think lots of details in his summary of the topic deserve to be corrected or clarified. Below, I’ll quote passages from his two posts, roughly in the order they appear, and then give my own brief reactions. Some of my comments are fairly nit-picky but I decided to share them anyway; perhaps my most important clarification comes at the end.

[Read more…]