So apparently this is why we have positive psychology but not evidence-based psychological treatment

Here’s Marty Seligman, past president of the American Psychological Association (APA):

APA presidents are supposed to have an initiative and… I thought mine could be “evidence-based treatment and prevention.” So I went to my friend, Steve Hyman, the director of [National Institute of Mental Health]. He was thrilled and told me he would chip in $40 million dollars if I could get APA working on evidence-based treatment.

So I told CAPP [which owns the APA] about my plan and about NIMH’s willingness. I felt the room get chillier and chillier. I rattled on. Finally, the chair of CAPP memorably said, “What if the evidence doesn’t come out in our favor?”

…I limped my way to [my friend’s] office for some fatherly advice.

“Marty,” he opined, “you are trying to be a transactional president. But you cannot out-transact these people…”

And so I proposed that Psychology turn its… attention away from pathology and victimology and more toward what makes life worth living: positive emotion, positive character, and positive institutions. I never looked back and this became my mission for the next fifteen years. The endeavor… caught on.

My post title is sort of joking. Others have pushed on evidence-based psychology while Seligman focused on positive psychology, and Seligman certainly wouldn’t say that we “don’t have” evidence-based psychological treatment. But I do maintain that evidence-based psychology is not yet as well-developed as evidence-based medicine, even given EBM’s many problems.

Karpathy on nukes

OpenAI deep learning researcher Andrej Karpathy on Rhodes’ Making of the Atomic Bomb:

Unfortunately, we live in a universe where the laws of physics feature a strong asymmetry in how difficult it is to create and to destroy. This observation is also not reserved to nuclear weapons – more generally, technology monotonically increases the possible destructive damage per person per dollar. This is my favorite resolution to the Fermi paradox.

As I am a scientist myself, I was particularly curious about the extent to which the nuclear scientists who conceived and designed the bomb influenced the ethical/political discussions. Unfortunately, it is clearly the case that the scientists were quickly marginalized and, in effect, told to shut up and just help build the bomb. From the very start, Roosevelt explicitly wanted policy considerations restricted to a small group that excluded any scientists. As some of the more prominent examples of scientists trying to influence policy, Bohr advocated for establishing an “Open World Consortium” and sharing information about the bomb with the Soviet Union, but this idea was promptly shut down by Churchill. In this case it’s not clear what effect it would have had and, in any case, the Soviets already knew a lot through espionage. Bohr also held the seemingly naive notion that scientists should continue publishing all nuclear research during the second world war as he felt that science should be completely open and rise above national disputes. Szilard strongly opposed this openness internationally, but advocated for more openness within the Manhattan project for sake of efficiency. This outraged Groves who was obsessed with secrecy. In fact, Szilard was almost arrested, suspected to be a spy, and placed under a comical surveillance that mostly uncovered his frequent visits to a chocolate store.

Henry Kissinger on smarter-than-human AI

Henry Kissinger, speaking with The Economist:

It is undoubtedly the case that modern technology poses challenges to world order and world order stability that are absolutely unprecedented. Climate change is one of them. I personally believe that artificial intelligence is a crucial one, lest we wind up… creating instruments in relation to which we are like the Incas to the Spanish, [such that] our own creations have a better capacity to calculate than we do. It’s a problem we need to understand on a global basis.

For reference, here is Wikipedia on the Spanish conquest of the Inca empire.

Henry Kissinger also addressed artificial intelligence in a recent interview with The Atlantic, though in this case he probably was not referring to smarter-than-human AI:

A military conflict between [China and the USA], given the technologies they possess, would be calamitous. Such a conflict would force the world to divide itself. And it would end in destruction, but not necessarily in victory, which would likely prove too difficult to define. Even if we could define victory, what in the wake of utter destruction could the victor demand of the loser? I am speaking of not merely the force of our weapons, but the unknowability of the consequences of some of them, such as cyberweapons. Traditional arms-control negotiations necessitated that each side tell the other what its capabilities were as a prelude to limiting those capacities. Yet with cyber, each country will be extremely reluctant to let others know its capabilities. Thus, there is no self-evident negotiated way to contain cyberwarfare. And artificial intelligence compounds this problem. Machines that can learn from their own experience and communicate with one another on their own raise both a practical and a moral imperative to find a way to keep mankind from destroying itself. The United States and China must strive to come to an understanding about the nature of their co-evolution.

How German nuclear scientists reacted to the news of Hiroshima

As part of Operation Epsilon, captured German nuclear physicists were secretly recorded at Farm Hall, a house in England where they were interned. Here’s how the German scientists reacted to the news (on August 6th, 1945) that an atomic bomb had been dropped on Hiroshima, taken from the now-declassified transcripts (pp. 116-122 of this copy):

Otto Hahn (co-discoverer of nuclear fission): I don’t believe it… They are 50 years further advanced than we.

Werner Heisenberg (leading figure of the German atomic bomb effort): I don’t believe a word of the whole thing. They must have spent the whole of their £500,000,000 in separating isotopes: and then it is possible.

In a margin note, the editor points out: “Heisenberg’s figure of £500 million is accurate. At the then-official exchange rate it is equal to $2 billion. President Truman’s account of the expense, released on August 6, stated: ‘We spent $2,000,000,000 on the greatest scientific gamble in history — and won.’ …Isotope separation accounted for a large share but by no means the whole of that…”

Hahn: I didn’t think it would be possible for another 20 years.

Karl Wirtz (head of reactor construction at a German physics institute): I’m glad we didn’t have it.

Carl Friedrich von Weizsäcker (theoretical physicist): I think it is dreadful of the Americans to have done it. I think it is madness on their part.

Heisenberg: One can’t say that. One could equally well say “That’s the quickest way of ending the war.”

Hahn: That’s what consoles me.

Heisenberg: I don’t believe a word about the bomb but I may be wrong…

Hahn: Once I wanted to suggest that all uranium should be sunk to the bottom of the ocean. I always thought that one could only make a bomb of such a size that a whole province would be blown up.

Weizsäcker: How many people were working on V1 and V2?

Kurt Diebner (physicist and organizer of the German Army’s fission project): Thousands worked on that.

Heisenberg: We wouldn’t have had the moral courage to recommend to the government in the spring of 1942 that they should employ 120,000 men just for building the thing up.

Weizsäcker: I believe the reason we didn’t do it was because all the physicists didn’t want to do it, on principle. If we had all wanted Germany to win the war we would have succeeded.

Hahn: I don’t believe that but I am thankful we didn’t succeed.

There is much more of interest in these transcripts. It is fascinating to eavesdrop on leading scientists’ unfiltered comments as they realize how badly their team was beaten to the finish line, and that the whole world has stepped from one era into another.

Hanson on intelligence explosion, from Age of Em

Economist Robin Hanson is among the most informed critics of the plausibility of what he calls a “local” intelligence explosion. He’s written on the topic many times before (most of it collected here), but here’s one more take from him on it, from Age of Em:

…some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its own mental architecture…

In a prototypical local explosion scenario, a single AI system with a supporting small team starts with resources that are tiny on a global scale. This team finds and then applies a big innovation in AI software architecture to its AI system, which allows this team plus AI combination to quickly find several related innovations. Together this innovation set allows this AI to quickly become more effective than the entire rest of the world put together at key tasks of theft or innovation.

That is, even though an entire world economy outside of this team, including other AIs, works to innovate, steal, and protect itself from theft, this one small AI team becomes vastly better at some combination of (1) stealing resources from others, and (2) innovating to make this AI “smarter,” in the sense of being better able to do a wide range of mental tasks given fixed resources. As a result of being better at these things, this AI quickly grows the resources that it controls and becomes more powerful than the entire rest of the world economy put together, and so it takes over the world. And all this happens within a space of days to months.

Advocates of this explosion scenario believe that there exists an as-yet-undiscovered but very powerful architectural innovation set for AI system design, a set that one team could find first and then keep secret from others for long enough. In support of this belief, advocates point out that humans (1) can do many mental tasks, (2) beat out other primates, (3) have a common IQ factor explaining correlated abilities across tasks, and (4) display many reasoning biases. Advocates also often assume that innovation is vastly underfunded today, that most economic progress comes from basic research progress produced by a few key geniuses, and that the modest wage gains that smarter people earn today vastly underestimate their productivity in key tasks of theft and AI innovation. In support, advocates often point to familiar myths of geniuses revolutionizing research areas and weapons.

Honestly, to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa-ha-ha.

Many arguments suggest that this scenario is unlikely (Hanson and Yudkowsky 2013). Specifically, (1) in 60 years of AI research high-level architecture has only mattered modestly for system performance, (2) new AI architecture proposals are increasingly rare, (3) algorithm progress seems driven by hardware progress (Grace 2013), (4) brains seem like ecosystems, bacteria, cities, and economies in being very complex systems where architecture matters less than a mass of capable detail, (5) human and primate brains seem to differ only modestly, (6) the human primate difference initially only allowed faster innovation, not better performance directly, (7) humans seem to have beat other primates mainly via culture sharing, which has a plausible threshold effect and so doesn’t need much brain difference, (8) humans are bad at most mental tasks irrelevant for our ancestors, (9) many human “biases” are useful adaptations to social complexity, (10) human brain structure and task performance suggest that many distinct modules contribute on each task, explaining a common IQ factor (Hampshire et al. 2012), (11) we expect very smart AI to still display many biases, (12) research today may be underfunded, but not vastly so (Alston et al. 2011; Ulku 2004), (13) most economic progress does not come from basic research, (14) most research progress does not come from a few geniuses, and (15) intelligence is not vastly more productive for research than for other tasks.

(And yes, the entire book is roughly this succinct and dense with ideas.)

Rockefeller’s chief philanthropy advisor

Frederick T. Gates was the chief philanthropic advisor to oil tycoon John D. Rockefeller, arguably the richest person in modern history and one of the era’s greatest philanthropists. Here’s a brief profile from Rockefeller biography Titan (h/t @danicgross):

Like Rockefeller himself, Gates yoked together two separate selves—one shrewd and worldly, the other noble and high-flown…

After graduating from the seminary in 1880, Gates was assigned his first pastorate in Minnesota. When his young bride, Lucia Fowler Perkins, dropped dead from a massive internal hemorrhage after sixteen months of marriage, the novice pastor not only suffered an erosion of faith but began to question the competence of American doctors — a skepticism that later had far-reaching ramifications for Rockefeller’s philanthropies…

Eventually Gates became Rockefeller’s philanthropic advisor, and:

What Gates gave to his boss was no less vital. Rockefeller desperately needed intelligent assistance in donating his money at a time when he could not draw on a profession of philanthropic experts. Painstakingly thorough, Gates combined moral passion with great intellect. He spent his evenings bent over tomes of medicine, economics, history, and sociology, trying to improve himself and find clues on how best to govern philanthropy. Skeptical by nature, Gates saw a world crawling with quacks and frauds, and he enjoyed grilling people with trenchant questions to test their sincerity. Outspoken, uncompromising, he never hesitated to speak his piece to Rockefeller and was a peerless troubleshooter.

For some details on Rockefeller’s philanthropic successes, see here.

Stich on conceptual analysis

Stich (1990), p. 3:

On the few occasions when I have taught the “analysis of knowledge” literature to undergraduates, it has been painfully clear that most of my students had a hard time taking the project seriously. The better students were clever enough to play fill-in-the-blank with ‘S knows that p if and only if _____’ … But they could not, for the life of them, see why anybody would want to do this. It was a source of ill-concealed amazement to these students that grown men and women would indulge in this exercise and think it important — and of still greater amazement that others would pay them to do it! This sort of discontent was all the more disquieting because deep down I agreed with my students. Surely something had gone very wrong somewhere when clever philosophers, the heirs to the tradition of Hume and Kant, devoted their time to constructing baroque counterexamples about the weird ways in which a man might fail to own a Ford… for about as long as I can remember I have had deep…misgivings about the project of analyzing epistemic notions.

Social justice and evidence

Galileo’s Middle Finger has some good coverage of several case studies in politicized science, and ends with a sermon on the importance of evidence to social justice:


When I joined the early intersex-rights movement, although identity activists inside and outside academia were a dime a dozen, it was pretty uncommon to run into evidence-based activists… Today, all over the place, one finds activist groups collecting and understanding data, whether they’re working on climate change or the rights of patients, voters, the poor, LGBT people, or the wrongly imprisoned…

The bad news is that today advocacy and scholarship both face serious threats. As for social activism, while the Internet has made it cheaper and easier than ever to organize and agitate, it also produces distraction and false senses of success. People tweet, blog, post messages on walls, and sign online petitions, thinking somehow that noise is change. Meanwhile, the people in power just wait it out, knowing that the attention deficit caused by Internet overload will mean the mob will move on to the next house tomorrow, sure as the sun comes up in the morning. And the economic collapse of the investigative press caused by that noisy Internet means no one on the outside will follow through to sort it out, to tell us what is real and what is illusory. The press is no longer around to investigate, spread stories beyond the already aware, and put pressure on those in power to do the right thing.

The threats to scholars, meanwhile, are enormous and growing. Today over half of American university faculty are in non-tenure-track jobs. (Most have not consciously chosen to live without tenure, as I have.) Not only are these people easy to get rid of if they make trouble, but they are also typically loaded with enough teaching and committee work to make original scholarship almost impossible… Add to this the often unfair Internet-based attacks on researchers who are perceived as promoting dangerous messages, and what you end up with is regression to the safe — a recipe for service of those already in power.

Perhaps most troubling is the tendency within some branches of the humanities to portray scholarly quests to understand reality as quaint or naive, even colonialist and dangerous. Sure, I know: Objectivity is easily desired and impossible to perfectly achieve, and some forms of scholarship will feed oppression, but to treat those who seek a more objective understanding of a problem as fools or de facto criminals is to betray the very idea of an academy of learners. When I run into such academics — people who will ignore and, if necessary, outright reject any fact that might challenge their ideology, who declare scientific methodologies “just another way of knowing” — I feel this crazy desire to institute a purge… Call me ideological for wanting us all to share a belief in the importance of seeking reliable, verifiable knowledge, but surely that is supposed to be the common value of the learned.

…I want to say to activists: If you want justice, support the search for truth. Engage in searches for truth. If you really want meaningful progress and not just temporary self-righteousness, carpe datum. You can begin with principles, yes, but to pursue a principle effectively, you have to know if your route will lead to your destination. If you must criticize scholars whose work challenges yours, do so on the evidence, not by poisoning the land on which we all live.

…Here’s the one thing I now know for sure after this very long trip: Evidence really is an ethical issue, the most important ethical issue in a modern democracy. If you want justice, you must work for truth.

Naturally, the sermon is more potent if you’ve read the case studies in the book.

Check the original source

From Segerstrale (2000), p. 27:

In 1984 I was able to shock my class of well-intended liberal students at Smith College by giving them the assignment to compare [Stephan] Chorover’s [critical] representation of passages of [E.O. Wilson’s] Sociobiology with Wilson’s original text. The students, who were deeply suspicious of Wilson and spontaneous champions of his critics, embarked on this homework with gusto. Many students were quite dismayed at their own findings and angry with Chorover. This surely says something, too, about these educated laymen’s relative innocence regarding what can and cannot be done in academia.

I wish this kind of exercise were more common. Another I would suggest is to compare critics’ representations of Dreyfus’ “Alchemy and Artificial Intelligence” with the original text (see here).

The first AI textbook, on the control problem

The earliest introductory AI textbook I know about — excluding mere “paper collections” like Computers and Thought (1963) — is Jackson’s Introduction to Artificial Intelligence (1974).

It discusses AGI and the control problem starting on page 394:

If [AI] research is unsuccessful at producing a general artificial intelligence, over a period of more than a hundred years, then its failure may raise some serious doubt among many scientists as to the finite describability of man and his universe. However, the evidence presented in this book makes it seem likely that artificial intelligence research will be successful, that a technology will be developed which is capable of producing machines that can demonstrate most, if not all, of the mental abilities of human beings. Let us therefore assume that this will happen, and imagine two worlds that might result.

[First,] …It is not difficult to envision actualities in which an artificial intelligence would exert control over human beings, yet be out of their control.

Given that intelligent machines are to be used, the question of their control and noncontrol must be answered. If a machine is programmed to seek certain ends, how are we to insure that the means it chooses to employ are agreeable to people? A preliminary solution to the problem is given by the fact that we can specify state-space problems to require that their solution paths shall not pass through certain states (see Chapter 3). However, the task of giving machines more sophisticated value systems, and especially of making them ‘ethical,’ has not yet been investigated by AI researchers…

The question of control should be coupled with the ‘lack of understanding’ question; that is, the possibility exists that intelligent machines might be too complicated for us to understand in situations that require real-time analyses (see the discussion of evolutionary programs in Chapter 8). We could conceivably always demand that a machine give a complete output of its reasoning on a problem; nevertheless that reasoning might not be effectively understandable to us if the problem itself were to determine a time limit for producing a solution. In such a case, if we were to act rationally, we might have to follow the machine’s advice without understanding its ‘motives’…

It has been suggested that an intelligent machine might arise accidentally, without our knowledge, through some fortuitous interconnection of smaller machines (see Heinlein, 1966). If the smaller machines each helped to control some aspect of our economy or defense, the accidental intelligence might well act as a dictator… It seems highly unlikely that this will happen, especially if we devote sufficient time to studying the non-accidental systems we implement.

A more significant danger is that artificial intelligence might be used to further the interests of human dictators. A limited supply of intelligent machines in the hands of a human dictator might greatly increase his power over other human beings, perhaps to the extent of giving him complete censorship and supervision of the public…

Let us now paint another, more positive picture of the world that might result from artificial intelligence research… It is a world in which man and his machines have reached a state of symbiosis…

The benefits humanity might gain from achieving such a symbiosis are enormous. As mentioned [earlier], it may be possible for artificial intelligence to greatly reduce the amount of human labor necessary to operate the economy of the world… Computers and AI research may play an important part in helping to overcome the food, population, housing, and other crises that currently grip the earth… Artificial intelligence may eventually be used to… partially automate the development of science itself… Perhaps artificial intelligence will someday be used in automatic teachers… and perhaps mechanical translators will someday be developed which will fluently translate human languages. And (very perhaps) the day may eventually come when the ‘household robot’ and the ‘robot chauffeur’ will be a reality…

In some ways it is reassuring that the progress in artificial intelligence research is proceeding at a relatively slow but regular pace. It should be at least a decade before any of these possibilities becomes an actuality, which will give us some time to consider in more detail the issues involved.
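Jackson’s “preliminary solution” — specifying a state-space problem so that solution paths may not pass through certain states — is straightforward to sketch with a modern search routine. The snippet below is a minimal illustration, not Jackson’s own formulation; the grid world, function names, and forbidden set are invented for the example. The key move is simply refusing to expand any forbidden state, so every path the search returns avoids those states by construction:

```python
from collections import deque

def search_avoiding(start, goal, neighbors, forbidden):
    """Breadth-first search that never expands a forbidden state,
    so any returned solution path avoids them by construction."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited and nxt not in forbidden:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no solution path avoids the forbidden states

# Toy grid world: states are integer coordinates, moves are unit steps.
def grid_neighbors(state):
    x, y = state
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

# The direct route (0,0) -> (1,0) -> (2,0) is blocked, forcing a detour.
path = search_avoiding((0, 0), (2, 0), grid_neighbors, forbidden={(1, 0)})
```

As Jackson notes, this handles only the crude case where undesirable states can be enumerated in advance; encoding a richer value system into the search is the part he observes had not yet been investigated.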

“Beyond the scope of this paper”

From AI scientist Drew McDermott, in 1976:

In this paper I have criticized AI researchers very harshly. Let me express my faith that people in other fields would, on inspection, be found to suffer from equally bad faults. Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.

Scaruffi on art music

From the preface to his in-progress history of avant-garde music:

Art Music (or Sound Art) differs from Commercial Music the way a Monet painting differs from IKEA furniture. Although the border is frequently fuzzy, there are obvious differences in the lifestyles and careers of the practitioners. Given that Art Music represents (at best) 3% of all music revenues, the question is why anyone would want to be an art musician at all. It is like asking why anyone would want to be a scientist instead of joining a technology startup. There are pros that are not obvious if one only looks at the macroscopic numbers. To start with, not many commercial musicians benefit from that potentially very lucrative market. In fact, the vast majority live a rather miserable existence. Secondly, commercial music frequently implies a lifestyle of time-consuming gigs in unattractive establishments. But fundamentally being an art musician is a different kind of job, more similar to the job of the scientific laboratory researcher (and of the old-fashioned inventor) than to the job of the popular entertainer. The art musician is pursuing a research program that will be appreciated mainly by his peers and by the “critics” (who function as historians of music), not by the public. The art musician is not a product to be sold in supermarkets but an auteur. The goal of an art musician is, first and foremost, to do what s/he feels is important and, secondly, to secure a place in the history of human civilization. Commercial musicians live to earn a good life. Art musicians live to earn immortality. (Ironically, now that we entered the age of the mass market, a pop star may be more likely to earn immortality than the next Beethoven, but that’s another story). 
Art music knows no stylistic boundaries: the division in classical, jazz, rock, hip hop and so forth still makes sense for commercial music (it basically identifies the sales channel) but ever less sense for art music whose production, distribution and appreciation methods are roughly the same regardless of whether the musician studied in a Conservatory, practiced in a loft or recorded at home using a laptop.

Medical ghostwriting

From Mushak & Elliott (2015):

Pharmaceutical companies hire “medical education and communication companies” (MECCs) to create sets of journal articles (and even new journals) designed to place their drugs in a favorable light and to assist in their marketing efforts (Sismondo 2007, 2009; Elliott 2010). These articles are frequently submitted to journals under the names of prominent academic researchers, but the articles are actually written by employees of the MECCs (Sismondo 2007, 2009). While it is obviously difficult to determine what proportion of the medical literature is produced in this fashion, one study used information uncovered in litigation to determine that more than half of the articles published on the antidepressant Zoloft between 1998 and 2000 were ghostwritten (Healy and Cattell 2003). These articles were published in more prestigious journals than the non-ghostwritten articles and were cited five times more often. Significantly, they also painted a rosier picture of Zoloft than the others.

CGP Grey on Superintelligence

CGP Grey recommends Nick Bostrom’s Superintelligence:

The reason this book [Superintelligence]… has stuck with me is because I have found my mind changed on this topic, somewhat against my will.

…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.

But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.

…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…

…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just follows them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”

Another interesting section begins at 1:46:35 and runs through about 1:52:00.

The silly history of spinach

From Arbesman’s The Half-Life of Facts:

One of the strangest examples of the spread of error is related to an article in the British Medical Journal from 1981. In it, the immunohematologist Terry Hamblin discusses incorrect medical information, including a wonderful story about spinach. He details how, due to a typo, the amount of iron in spinach was thought to be ten times higher than it actually is. While there are only 3.5 milligrams of iron in a 100-gram serving of spinach, the accepted fact became that spinach contained 35 milligrams of iron. Hamblin argues that German scientists debunked this in the 1930s, but the misinformation continued to spread far and wide.

According to Hamblin, the spread of this mistake even led to spinach becoming Popeye the Sailor’s food of choice. When Popeye was created, it was recommended he eat spinach for his strength, due to its vaunted iron-based health properties.

This wonderful case of a typo that led to so much incorrect thinking was taken up for decades as a delightful, and somewhat paradigmatic, example of how wrong information could spread widely. The trouble is, the story itself isn’t correct.

While the amount of iron in spinach did seem to be incorrectly reported in the nineteenth century, it was likely due to a confusion between iron oxide—a related chemical—and iron, or contamination in the experiments, rather than a typographical error. The error was corrected relatively rapidly, over the course of years, rather than over many decades.

Mike Sutton, a reader in criminology at Nottingham Trent University, debunked the entire original story several years ago through a careful examination of the literature. He even discovered that Popeye seems to have eaten spinach not for its supposed high quantities of iron, but rather due to vitamin A. While the truth behind the myth is still being excavated, this misinformation — the myth of the error — from over thirty years ago continues to spread.

Geoff Hinton on long-term AI outcomes

Geoff Hinton on a show called The Agenda (starting around 9:40):

Interviewer: How many years away do you think we are from a neural network being able to do anything that a brain can do?

Hinton: …I don’t think it will happen in the next five years but beyond that it’s all a kind of fog.

Interviewer: Is there anything about this that makes you nervous?

Hinton: In the very long run, yes. I mean obviously having… [AIs] more intelligent than us is something to be nervous about. It’s not gonna happen for a long time but it is something to be nervous about.

Interviewer: What aspect of it makes you nervous?

Hinton: Will they be nice to us?

Bill Gates on AI timelines

On the latest episode of The Ezra Klein Show, Bill Gates elaborated a bit on his views about AI timelines (starting around 24:40):

Klein: I know you take… the risk of creating artificial intelligence that… ends up turning against us pretty seriously. I’m curious where you think we are in terms of creating an artificial intelligence…

Gates: Well, with robotics you have to think of three different milestones.

One is… not-highly-trained labor substitution. Driving, security guard, warehouse work, waiter, maid — things that are largely visual and physical manipulation… [for] that threshold I don’t think you’d get much disagreement that over the next 15 years that the robotic equivalents in terms of cost [and] reliability will become a substitute to those activities…

Then there’s the point at which what we think of as intelligent activities, like writing contracts or doing diagnosis or writing software code, when will the computer start to… have the capacity to work in those areas? There you’d get more disagreement… some would say 30 years, I’d be there. Some would say 60 years. Some might not even see that [happening].

Then there’s a third threshold where the intelligence involved is dramatically better than humanity as a whole, what Bostrom called a “superintelligence.” There you’re gonna get a huge range of views including people who say it won’t ever happen. Ray Kurzweil says it will happen at midnight on July 13, 2045 or something like that and that it’ll all be good. Then you have other people who say it can never happen. Then… there’s a group that I’m more among where you say… we’re not able to predict it, but it’s something that [we] should start thinking about. We shouldn’t restrict activities or slow things down… [but] the potential that that exists even in a 50-year timeframe [means] it’s something to be taken seriously.

But those are different thresholds, and the responses are different.

See Gates’ previous comments on AI timelines and AI risk, here.

UPDATE 07/01/2016: In this video, Gates says that achieving “human-level” AI will take “at least 5 times as long as what Ray Kurzweil says.”

Sutskever on Talking Machines

The latest episode of Talking Machines features an interview with Ilya Sutskever, the research director at OpenAI. His comments on long-term AI safety in particular were as follows (starting around 28:10):

Interviewer: There’s a part of [OpenAI’s introductory blog post] that I found particularly interesting, which says “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” So what are the reasonable questions that we should be thinking about in terms of safety now? …

Sutskever: … I think, and many people think, that full human-level AI … might perhaps be invented in some number of decades … [and] will obviously have a huge, inconceivable impact on society. That’s obvious. And when a technology will predictably have as much impact, there is nothing to lose from starting to think about the nature of this impact … and also whether there is any research that can be done today that will make this impact be more like the kind of impact we want.

The question of safety really boils down to this: …If you look at our neural networks that for example recognize images, they’re doing a pretty good job but once in a while they make errors [and it’s] hard to understand where they come from.

For example I use Google photo search to index my own photos… and it’s really accurate almost all the time, but sometimes I’ll search for a photo of a dog, let’s say, and it will find a photo [that is] clearly not a dog. Why does it make this mistake? You could say “Who cares? It’s just object recognition,” and I agree. But if you look down the line, what you’ll see is that right now we are [just beginning to] create agents, for example the Atari work of DeepMind or the robotics work of Berkeley, where you’re building a neural network that learns to control something which interacts with the world. At present, their cost functions [i.e. goal functions] are manually specified. But it… seems likely that eventually we will be building robots whose cost functions will be learned from demonstration, or from watching a YouTube video, or from the interpretation of natural text…

So now you have these really complicated cost functions that are difficult to understand, and you have a physical robot or some kind of software system which tries to optimize this cost function, and I think these are the kinds of scenarios that could be relevant for AI safety questions. Once you have a system like this, what do you need to do to be reasonably certain that it will do what you want it to do?

…because we don’t work on such systems [today], these questions may seem a bit premature, but once we start building reinforcement learning systems [which] do learn the cost function, I think this question will come much more sharply into focus. Of course it would also be nice to do theoretical research, but it’s not clear to me how it could be done.

Interviewer: So right now we have the opportunity to understand the fundamentals… and then apply them later as the research continues and grows and is able to create more powerful systems?

Sutskever: That would be the ideal case, definitely. I think it’s worth trying to do that. I think it may also be hard to do because it seems like we have such a hard time imagining [what] these future systems will look like. We can speak in general terms: Yes, there will be a cost function most likely. But how, exactly, will it be optimized? It’s a little hard to predict because if you could predict it we could just go ahead and build the systems already.
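Sutskever’s distinction between a hand-specified cost function and one learned from demonstrations can be made concrete with a toy sketch. Everything here is my own invented illustration, not anything from the interview: a one-dimensional world of positions 0–9, a designer-written cost (distance to a chosen goal), and a naive “learned” cost inferred from where demonstration trajectories end.

```python
# Toy contrast: a hand-written cost function versus one inferred from
# demonstrations, on a 1-D line of positions 0..9. (Illustrative only.)

def manual_cost(state, goal=5):
    """Hand-specified cost: distance to a goal the designer wrote down."""
    return abs(state - goal)

def learn_cost_from_demos(demos):
    """Naive inference: assume the states that demonstrations end in are
    desirable, and make cost grow with distance from their average endpoint."""
    endpoints = [traj[-1] for traj in demos]
    inferred_goal = sum(endpoints) / len(endpoints)
    return lambda state: abs(state - inferred_goal)

def greedy_step(cost, state, n_states=10):
    """One-step greedy agent: move to the neighboring state with lower cost."""
    neighbors = [s for s in (state - 1, state + 1) if 0 <= s < n_states]
    return min(neighbors, key=cost)

# Demonstrations that all end at position 5 (the demonstrator's implicit goal).
demos = [[0, 1, 2, 3, 4, 5], [9, 8, 7, 6, 5], [3, 4, 5]]
learned_cost = learn_cost_from_demos(demos)

# The learned cost drives the greedy agent from 0 toward 5, just as the
# manual one would...
state = 0
for _ in range(5):
    state = greedy_step(learned_cost, state)
print(state)  # 5
```

The hazard Sutskever points at lives in the learned case: the agent optimizes whatever the demonstrations implied, not what the designer intended. Here, demos ending at 2 and 8 would average to an inferred goal of 5 that no demonstration ever visited, and the agent would pursue it anyway.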

Applying economics to the law for the first time

Teles (2010) quotes Douglas Baird, a Stanford Law student in the 70s and later dean of the University of Chicago Law School:

In the early seventies, people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong [because they’d ignored economics]. And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… By the time I came along, and I wasn’t trained as [an] economist, it was clear that… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted [by] a major law review was four days. I’m not Richard Posner, and few of us are. I got out of law school, and I was interested in bankruptcy law, which was inhabited by intellectual midgets… It was a complete intellectual wasteland. I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.