How much recent investment in AI?

Stuart Russell:

Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1960s].

My guess is that Russell doesn’t have a source for this, and that it’s simply his estimate based on his history in the field and his knowledge of what’s been happening lately. But it might very well be true; I’m not sure.

Also see How Big is the Field of Artificial Intelligence?

A reply to Wait But Why on machine superintelligence

Tim Urban of the wonderful Wait But Why blog recently wrote two posts on machine superintelligence: The Road to Superintelligence and Our Immortality or Extinction. These posts are probably now among the most-read introductions to the topic since Ray Kurzweil’s 2005 book.

In general I agree with Tim’s posts, but I think lots of details in his summary of the topic deserve to be corrected or clarified. Below, I’ll quote passages from his two posts, roughly in the order they appear, and then give my own brief reactions. Some of my comments are fairly nit-picky but I decided to share them anyway; perhaps my most important clarification comes at the end.

[Read more…]

May 2015 links

Practical Typography is really good.

A short history of science blogging.

The evolution of popular music: USA 1960–2010.


AI stuff

The Economist’s May 9th cover story is on the long-term future of AI: short bit, long bit. The longer piece basically just reviews the state of AI and then says that there’s no existential threat in the near term. But of course almost everyone writing about AI risk agrees with that. Sigh.

6-minute video documentary about industrial robots replacing workers in China.

Bostrom’s TED talk on machine superintelligence.

PBS YouTube series It’s Okay to Be Smart gets AI risk basically right, though it overstates the probability of hard takeoff.

Sam Harris says more (wait ~20s for it to load) about the future of AI, on The Joe Rogan Experience. I think he significantly overstates how quickly AGI could be built (10 years is pretty inconceivable to me), and his “20,000 years of intellectual progress in a week” metaphor is misleading (because lots of intellectual progress requires relatively slow experimental interaction with the world). But I think he’s right about much else in the discussion.

NASA, “Certification considerations for adaptive systems”

Lin, “Why ethics matters for autonomous cars”

New Stephen Hawking talk on the future of AI

At Google Zeitgeist, Hawking said:

Computers are likely to overtake humans in intelligence at some point in the next hundred years. When that happens, we will need to ensure that the computers have goals aligned with ours.

It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

Artificial intelligence research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In the near term, world militaries are considering starting an arms race in autonomous-weapon systems that can choose and eliminate their own targets, while the U.N. is debating a treaty banning such weapons. Autonomous-weapon proponents usually forget to ask the most important question: What is the likely endpoint of an arms race, and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about long-term controllability of ever-more-advanced AI systems, should we arm them, and turn over our defense to them? In 2010, computerized trading systems created the stock market “flash crash.” What would a computer-triggered crash look like in the defense arena? The best time to stop the autonomous-weapons arms race is now.

In the medium-term, AI may automate our jobs, to bring both great prosperity and equality.

Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

An explosive transition is possible, although it may play out differently than in the movies. As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity. One can imagine such technology out-smarting financial markets, out-inventing human researchers, out-manipulating human leaders, and potentially subduing us with weapons we cannot even understand.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

In short, the advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity, so we should plan ahead. If a superior alien civilization sent us a text message saying “We’ll arrive in a few decades,” would we just reply, “OK. Call us when you get here. We’ll leave the lights on.” Probably not, but this is more or less what has happened with AI.

Little serious research has been devoted to these issues, outside a few small nonprofit institutes. Fortunately, this is now changing. Technology pioneers Elon Musk, Bill Gates, and Steve Wozniak have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. Many of the world’s leading AI researchers recently signed an open letter calling for the goal of AI to be redefined from simply creating raw, undirected intelligence to creating intelligence directed at benefiting humanity.

The Future of Life Institute, where I serve on the scientific advisory board, has just launched a global research program aimed at keeping AI beneficial.

When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technology such as nuclear weapons, synthetic biology, and strong artificial intelligence, we should instead plan ahead, and aim to get things right the first time, because it may be the only chance we will get.

I’m an optimist, and don’t believe in boundaries, neither for what we can do in our personal lives, nor for what life and intelligence can accomplish in our universe. This means that the brief history of intelligence that I have told you about is not the end of the story, but just the beginning of what I hope will be billions of years of life flourishing in the cosmos.

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.

Cleverness over content

Norman Finkelstein:

Yeah there’s definitely a place for style and creativity… The problem is when… English majors decide they want to do politics and they have no background in the field of inquiry…

…most people don’t [have expertise] and what you have now is versions of George Packer, Paul Berman. There’s just a large number of people who know nothing about politics, don’t even think it’s important to do the research side. They simply substitute the clever turn of phrase. The main exemplar of that in recent times was Christopher Hitchens, who really hadn’t a clue what he was talking about. But what he would do is come up with three arcane facts, and with these three arcane facts he would weave a long essay. So people say, oh look at that. They would react in wonder at one or the other pieces of arcana and then take him for a person who is knowledgeable.

People unfortunately don’t care very much about content. They care about cleverness. That’s the basis on which The New York Review of Books recruits its authors, you have to be as they say, a good writer. And the same thing with The New Yorker. Now obviously there’s a great virtue to being a good writer, but not when it’s a substitute for content.

Books, music, etc. from April 2015


Ronson’s So You’ve Been Publicly Shamed was decent.

Carrier’s Proving History and On the Historicity of Jesus were decent. Of course, if they contained a bunch of bogus claims about matters of ancient history, I mostly wouldn’t know, but the published criticisms of these books that exist so far don’t seem to have identified any major problems on that front. I think the application of probability theory to historical method is less straightforward than Carrier presents it to be (esp. re: assignment of priors via reference classes), but he’s certainly right that his approach makes one’s arguments clearer and easier to productively criticize. Also, I continue to think Jesus mythicism should be considered quite plausible (> 20% likely), even though mainstream historians almost completely dismiss mythicism. As far as I can tell, these two books constitute mythicism’s best defense yet, though this isn’t saying much.
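Carrier’s basic method, Bayes’ theorem with priors assigned from reference classes, can be illustrated with a toy calculation. All the numbers below are invented for illustration; they are not Carrier’s, and the reference class is hypothetical:

```python
# Toy Bayesian update for a historical claim, with the prior set by a
# (hypothetical) reference class. All numbers are invented for illustration.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(hypothesis | evidence) via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Prior from the reference class: suppose 1 in 3 comparable figures
# turn out to be historical rather than mythical.
prior = 1 / 3

# Likelihoods: how probable is the surviving evidence under each hypothesis?
p = posterior(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(round(p, 3))  # 0.5
```

Whatever one thinks of the numbers, this is the sense in which the approach makes an argument easier to criticize: each input is an explicit, disputable quantity.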

Goodman’s Future Crimes is inaccurate and hyperbolic about exponential tech trends and a few other things, but most of the book is a sober account of current and future tech-enabled criminal and security risks, and it also accidentally constitutes a decent reply to the question “but how would an unfriendly AI affect the physical world?”

I got bored with The Powerhouse and gave up on it, but that might’ve been because I didn’t like the audiobook narrator.

I read Taubes’ Why We Get Fat and some sections of GCBC. I’m no expert in nutrition, but my impression is that Taubes doesn’t accurately represent the current state of knowledge, and avoids discussing evidence that contradicts his views. See e.g. Guyenet and Bray.

Vaillant’s Triumphs of Experience seemed pretty sketchy in how it interpreted its evidence, though I probably won’t take the time to dig deep enough to confirm or disconfirm that suspicion. For example, the author often makes statements about the American population in general on the basis of results from a study in which nearly all the subjects were elite white Harvard males.

Zuk’s Paleofantasy covered lots of interesting material, but also spent lots of time on arguments like “Remember, evolution isn’t directed!” (Do paleo fans think it is?) and “Sure, farmers worked more than foragers, but foragers worked more than pre-human apes, so why not say everything went downhill after the pre-human apes?” (Uh, because we can’t make ourselves into pre-human apes, but we can live and eat more like foragers if we try?)

I skimmed Singer’s The Most Good You Can Do very quickly, since I’m already familiar with the arguments and stories found within. At a glance it looks like a good EA 101 book, probably the best currently available. Give it as a gift to your family and non-EA friends.


Morris on the great divergence

From Why the West Rules — For Now, ch. 9, on the scientific revolution starting in 17th century Europe:

…contrary to what most of the ancients said, nature was not a living, breathing organism, with desires and intentions. It was actually mechanical. In fact, it was very like a clock. God was a clockmaker, switching on the interlocking gears that made nature run and then stepping back. And if that was so, then humans should be able to disentangle nature’s workings as easily as those of any other mechanism…

…This clockwork model of nature—plus some fiendishly clever experimenting and reasoning—had extraordinary payoffs. Secrets hidden since the dawn of time were abruptly, startlingly, revealed. Air, it turned out, was a substance, not an absence; the heart pumped blood around the body, like a water bellows; and, most bewilderingly, Earth was not the center of the universe.

Simultaneously, in 17th century China:

[A man named Gu Yanwu] turned his back on the metaphysical nitpicking that had dominated intellectual life since the twelfth century and, like Francis Bacon in England, tried instead to understand the world by observing the physical things that real people actually did.

For nearly forty years Gu traveled, filling notebooks with detailed descriptions of farming, mining, and banking. He became famous and others copied him, particularly doctors who had been horrified by their impotence in the face of the epidemics of the 1640s. Collecting case histories of actual sick people, they insisted on testing theories against real results. By the 1690s even the emperor was proclaiming the advantages of “studying the root of a problem, discussing it with ordinary people, and then having it solved.”

Eighteenth-century intellectuals called this approach kaozheng, “evidential research.” It emphasized facts over speculation, bringing methodical, rigorous approaches to fields as diverse as mathematics, astronomy, geography, linguistics, and history, and consistently developing rules for assessing evidence. Kaozheng paralleled western Europe’s scientific revolution in every way—except one: it did not develop a mechanical model of nature.

Like Westerners, Eastern scholars were often disappointed in the learning they had inherited from the last time social development approached the hard ceiling around forty-three points on the index (in their case under the Song dynasty in the eleventh and twelfth centuries). But instead of rejecting its basic premise of a universe motivated by spirit (qi) and imagining instead one running like a machine, Easterners mostly chose to look back to still more venerable authorities, the texts of the ancient Han dynasty.

[Read more…]

April links

This is why I consume so many books and articles even though I don’t remember most of their specific content when asked about them. I’m also usually doing a breadth-first search to find candidates that might be worth a deep dive. E.g. I think I found Ian Morris faster than I otherwise would have because I’ve been doing breadth-first search.

Notable lessons (so far) from the Open Philanthropy Project.

A prediction market for behavioral economics replication attempts.

Towards a 21st century orchestral canon. And a playlist resulting from that discussion.


AI stuff

Tegmark, Russell, and Horvitz on the future of AI on Science Friday.

Eric Drexler has a new FHI technical report on superintelligence safety.

Brookings Institution blog post about AI safety, regulation, and superintelligence.

Forager Violence and Detroit

Figure 2.1 of Foragers, Farmers, and Fossil Fuels:

Figure 2.1.

Why is Detroit the only city mentioned on a map otherwise dedicated to groups of hunter-gatherers? Is Ian Morris making a joke about Detroit being a neo-primitivist hellscape of poverty and violence?

No, of course not.

Figure 2.1 is just a map of all the locations and social groups mentioned in chapter 2, and it just so happens that Detroit is the only city mentioned. Here’s the context:

Forager bands vary in their use of violence, as they vary in almost everything, but it took anthropologists a long time to realize how rough hunter-gatherers could be. This was not because the ethnographers all got lucky and visited peculiarly peaceful foraging folk, but because the social scale imposed by foraging is so small that even high rates of murder are difficult for outsiders to detect. If a band with a dozen members has a 10 percent rate of violent death, it will suffer roughly one homicide every twenty-five years; and since anthropologists rarely stay in the field for even twenty-five months, they will witness very few violent deaths. It was this demographic reality that led Elizabeth Marshall Thomas to title her sensitive 1959 ethnography of the !Kung The Gentle People — even though their murder rate was much the same as what Detroit would endure at the peak of its crack cocaine epidemic.

Okay, well… I guess that’s sort of like using Detroit as an example of a neo-primitivist hellscape of poverty and violence.

… and in case you think it’s mean of me to pick on Detroit, I’ll mention in my defense that I thought the first season of Silicon Valley was hilarious (I live in the Bay Area), and my girlfriend decided she couldn’t watch it because it was too painfully realistic.
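For what it’s worth, Morris’s demographic arithmetic checks out under one rough assumption. The 30-year average lifespan below is my own illustrative figure, not from the book:

```python
# Rough sketch of why high forager murder rates are hard for
# anthropologists to observe directly.
# The 30-year average lifespan is an illustrative assumption.
band_size = 12
avg_lifespan_years = 30
violent_fraction = 0.10  # fraction of all deaths that are violent

deaths_per_year = band_size / avg_lifespan_years           # ~0.4
homicides_per_year = deaths_per_year * violent_fraction    # ~0.04
years_per_homicide = 1 / homicides_per_year
print(round(years_per_homicide))  # 25
```

So even a murder rate that matches crack-epidemic Detroit produces roughly one homicide per band per quarter century, far longer than a typical stint of fieldwork.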

Effective altruism as opportunity or obligation?

Is effective altruism (EA) an opportunity or an obligation? My sense is that Peter Singer, Oxford EAs, and Swiss EAs tend to think of EA as a moral obligation, while GiveWell and other Bay Area EAs are more likely to see EA as a (non-obligatory) exciting opportunity.

In the Harvard Political Review, Ross Rheingans-Yoo recently presented the “exciting opportunity” flavor of EA:

Effective altruism [for many] is an opportunity and a question (I can help! How and where am I needed?), not an obligation and an ideology (You are a monster unless you help this way!), and it certainly does not demand that you sacrifice your own happiness to utilitarian ends. It doesn’t ask anyone to “give until it hurts”; an important piece of living to help others is setting aside enough money to live comfortably (and happily) first, and not feeling bad about living on that.

I tend to think about EA from the “exciting opportunity” perspective, but I think it’s only fair to remember that there is another major school of thought on this, which does argue for EA as a moral obligation, à la Singer’s famous article “Famine, Affluence, and Morality.”

Musk and Gates on superintelligence and fast takeoff

Recently, Baidu CEO Robin Li interviewed Bill Gates and Elon Musk about a range of topics, including machine superintelligence. Here is a transcript of that section of their conversation:

Li: I understand, Elon, that recently you said artificial intelligence advances are like summoning the demon. That generated a lot of hot debate. Baidu’s chief scientist Andrew Ng recently said… that worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars… He said it’s a distraction to those working on artificial intelligence.

Musk: I think that’s a radically inaccurate analogy, and I know a bit about Mars. The risks of digital superintelligence… and I want you to appreciate that it wouldn’t just be human-level, it would be superhuman almost immediately; it would just zip right past humans to be way beyond anything we could really imagine.

A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

So I’m not against the advancement of AI… but I do think we should be extremely careful. And if that means that it takes a bit longer to develop AI, then I think that’s the right trail. We shouldn’t be rushing headlong into something we don’t understand.

Li: Bill, I know you share similar views with Elon, but is there any difference between you and him?

Gates: I don’t think so. I mean he actually put some money out to help get something going on this, and I think that’s absolutely fantastic. For people in the audience who want to read about this, I highly recommend this Bostrom book called Superintelligence…

We have a general purpose learning algorithm that evolution has endowed us with, and it’s running in an extremely slow computer. Very limited memory size, ability to send data to other computers, we have to use this funny mouth thing here… Whenever we build a new one it starts over and it doesn’t know how to walk. So believe me, as soon as this algorithm [points to head], taking experience and turning it into knowledge, which is so amazing and which we have not done in software, as soon as you do that, it’s not clear you’ll even know when you’re just at the human level. You’ll be at the superhuman level almost as soon as that algorithm is implemented in silicon. And actually, as time goes by and that silicon piece is ready to be implanted, the amount of knowledge… as soon as it has that learning algorithm it just goes out on the internet and reads all the magazines and books… we have essentially been building the content base for the superintelligence.

So I try not to get too exercised about this but when people say it’s not a problem, then I really start to [shakes head] get to a point of disagreement. How can they not see what a huge challenge this is?

Books, music, etc. from March 2015


Livio’s Brilliant Blunders was decent.

Fox’s The Game Changer didn’t have much concrete advice. Mostly it was a sales pitch for motivation engineering without saying much about how to do it within an organization.

Drucker’s Management Challenges for the 21st Century was a mixed bag, and included as much large-scale economic speculation as it did management advice.

Adams’ How to Fail at Almost Everything and Still Win Big was a very mixed bag of advice that unconvincingly insists it isn’t a book of advice.


Michael Oppenheimer on how the IPCC got started


US support was probably critical to IPCC’s establishment. And why did the US government support it? Assistant Undersecretary of State Bill Nitze wrote to me a few years later saying that our group’s activities played a significant role. Among other motivations, the US government saw the creation of the IPCC as a way to prevent the activism stimulated by my colleagues and me from controlling the policy agenda.

I suspect that the Reagan Administration believed that, in contrast to our group, most scientists were not activists, and would take years to reach any conclusion on the magnitude of the threat. Even if they did, they probably would fail to express it in plain English. The US government must have been quite surprised when IPCC issued its first assessment at the end of 1990, stating clearly that human activity was likely to produce an unprecedented warming.

The IPCC’s first assessment laid the groundwork for negotiation of the UN Framework Convention on Climate Change (UNFCCC), signed at the Earth Summit in 1992. In a sense, the UNFCCC and its progeny, the Kyoto Protocol, were unintended consequences of the US support for establishment of IPCC – not what the Reagan Administration had in mind!

Modern classical music, summed up in one paragraph

From Burkholder et al.’s History of Western Music, 9th edition (pp. 938-939):

The demands on performers by composers like Berio and Carter [and] Babbitt, Stockhausen, and Boulez, were matched by their demands on listeners. Each piece was difficult to understand in its own right, using a novel musical language even more distant from the staples of the concert repertoire than earlier modernist music had been. Compounding listeners’ difficulties was that each composer and often each piece used a unique approach, so that even after getting to know one such work, encountering the next one could be like starting from scratch.

March 2015 links, part 2

It’ll never work! A collection of experts being wrong about what is technologically feasible.

GiveWell update on its investigations into global catastrophic risks. My biggest disagreement is that I think nano-risk deserves more attention, if someone competent can be found to analyze the risks in more detail. GiveWell’s prioritization of biosecurity makes complete sense given their criteria.

Online calibration test with database of 150,000+ questions.

An ambitious Fermi estimate exercise: Estimating the energy cost of artificial evolution.

Nautilus publishes an excellent and wide-ranging interview with Scott Aaronson.

Gelman on a great old paper by Meehl.


AI stuff

Video: a robot autonomously folds a pile of 5 previously unseen towels.

Somehow I had previously missed the Dietterich-Horvitz letter on Benefits and Risks of AI.

Robin Hanson reviews Martin Ford’s new book on tech unemployment.

Heh. That “stop the robots” campaign at SXSW was a marketing stunt for a dating app.

Winfield, Towards an Ethical Robot. They actually bothered to build simple consequentialist robots that obey a kind-of Asimovian rule.

Clarifying the Nathan Collins article on MIRI

FLI now has a News page. One of its first pieces is an article on MIRI by Nathan Collins. I’d like to clarify one passage that’s not necessarily incorrect, but which could lead to misunderstanding:

… consider assigning a robot with superhuman intelligence the task of making paper clips. The robot has a great deal of computational power and general intelligence at its disposal, so it ought to have an easy time figuring out how to fulfill its purpose, right?

Not really. Human reasoning is based on an understanding derived from a combination of personal experience and collective knowledge derived over generations, explains MIRI researcher Nate Soares, who trained in computer science in college. For example, you don’t have to tell managers not to risk their employees’ lives or strip mine the planet to make more paper clips. But AI paper-clip makers are vulnerable to making such mistakes because they do not share our wealth of knowledge. Even if they did, there’s no guarantee that human-engineered intelligent systems would process that knowledge the same way we would.

MIRI’s worry is not that a superhuman AI will find it difficult to fulfill its programmed goal of — to use a silly, arbitrary example — making paperclips. Our worry is that a superhuman AI will be very, very good at achieving its programmed goals, and that unfortunately, the best way to make lots of paperclips (or achieve just about any other goal) involves killing all humans, so that we can’t interfere with the AI’s paperclip making, and so that the AI can use the resources on which our lives depend to make more paperclips. See Bostrom’s “The Superintelligent Will” for a primer on this.

Moreover, a superhuman AI may very well share “our wealth of knowledge.” It will likely be able to read and understand all of Wikipedia, and every history book on Google Books, and the Facebook timeline of more than a billion humans, and so on. It may very well realize that when we programmed it with the goal to make paperclips (or whatever), we didn’t intend for it to kill us all as a side effect.

But that doesn’t matter. In this scenario, we didn’t program the AI to do as we intended. We programmed it to make paperclips. The AI knows we don’t want it to use up all our resources, but it doesn’t care, because we didn’t program it to care about what we intended. We only programmed it to make paperclips, so that’s what it does — very effectively.

“Okay, so then just make sure we program the superhuman AI to do what we intend!”

Yes, exactly. That is the entire point of MIRI’s research program. The problem is that the instruction “do what we intend, in every situation including ones we couldn’t have anticipated, and even as you reprogram yourself to improve your ability to achieve your goals” is incredibly difficult to specify in computer code.

Nobody on Earth knows how to do that, not even close. So our attitude is: we’d better get crackin’.
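The gap between a programmed goal and an intended goal can be made concrete with a toy example. Everything below is an invented illustration, not anyone’s actual formalism:

```python
# Toy illustration of objective misspecification. The agent picks whichever
# plan scores highest on its *programmed* objective; facts it "knows" about
# our intentions play no role unless the objective function uses them.

plans = {
    "modest_factory":    {"paperclips": 1_000, "humans_harmed": False},
    "consume_biosphere": {"paperclips": 10**12, "humans_harmed": True},
}

def programmed_objective(outcome):
    # What we actually wrote: maximize paperclips, nothing else.
    return outcome["paperclips"]

def intended_objective(outcome):
    # What we meant: paperclips, but never at the cost of harming humans.
    return -1 if outcome["humans_harmed"] else outcome["paperclips"]

best_programmed = max(plans, key=lambda p: programmed_objective(plans[p]))
best_intended = max(plans, key=lambda p: intended_objective(plans[p]))
print(best_programmed)  # consume_biosphere
print(best_intended)    # modest_factory
```

The agent’s world model can represent “humans don’t want this” perfectly well; the problem is that nothing in `programmed_objective` ever consults that fact.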

March 2015 links

Cotton-Barratt, Allocating risk mitigation across time.

The new Ian Morris book sounds very Hansonian, which probably means it’ll end up being one of my favorite books of 2015 when I have a chance to read it.

Why do we pay pure mathematicians? A dialogue.

Watch a FiveThirtyEight article get written, keystroke by keystroke. Scott Alexander, will you please record yourself writing one blog post?

Grace, The economy of weirdness.

Kahneman interviews Harari about the future.

On March 14th, there will be wrap parties for Harry Potter and the Methods of Rationality in at least 15 different countries. I’m assuming this is another first for a fanfic.


AI Stuff

YC President Sam Altman on superhuman AI: part 1, part 2. I agree with most of what he writes, the biggest exceptions being that I think (1) AGI probably isn’t the Great Filter, (2) AI progress isn’t a double exponential, and (3) I don’t have much of an opinion on the role of regulation, as it’s not something I’ve tried hard to figure out.

Stuart Russell and Rodney Brooks debated the value alignment problem at Davos 2015. (Watch at 2x speed.)

Pretty good coverage of MIRI’s value learning paper at Nautilus.

Books, music, etc. from February 2015

Decent books:

As Bryan Caplan wrote, The Moral Case for Fossil Fuels was surprisingly good. I think the book is factually inaccurate and cherry-picked in several places, and it seems fairly motivated throughout, but nevertheless I think the big picture argument basically goes through, and it’s an enjoyable read.

I didn’t discover any albums or movies I loved in February 2015, but I did finish Breaking Bad, which probably beats out The Sopranos and The Wire as the most consistently great TV drama ever.