Chomsky on the War on Drugs

More Chomsky, from Understanding Power (footnotes also reproduced):

So take a significant question you never hear asked despite this supposed “Drug War” which has been going on for years and years: how many bankers and chemical corporation executives are in prison in the United States for drug-related offenses? Well, there was recently an O.E.C.D. [Organization for Economic Cooperation and Development] study of the international drug racket, and they estimated that about a half-trillion dollars of drug money gets laundered internationally every year—more than half of it through American banks. I mean, everybody talks about Colombia as the center of drug-money laundering, but they’re a small player: they have about $10 billion going through, U.S. banks have about $260 billion.1

Okay, that’s serious crime—it’s not like robbing a grocery store. So American bankers are laundering huge amounts of drug money, everybody knows it: how many bankers are in jail? None. But if a black kid gets caught with a joint, he goes to jail.

…Or why not ask another question — how many U.S. chemical corporation executives are in jail? Well, in the 1980s, the C.I.A. was asked to do a study on chemical exports to Latin America, and what they estimated was that more than 90 percent of them are not being used for industrial production at all — and if you look at the kinds of chemicals they are, it’s obvious that what they’re really being used for is drug production.2 Okay, how many chemical corporation executives are in jail in the United States? Again, none — because social policy is not directed against the rich, it’s directed against the poor.

  1. On the O.E.C.D. study, see for example, Apolinar Díaz-Callejas [of the Andean Commission of Jurists and Latin American Association for Human Rights], “Violence in Colombia, its History,” Latin America News Update (Chicago, IL), Vol. 10, No. 12, December 1994, pp. 19-20 (from Excelsior of Mexico City, October 14, 1994). An excerpt: “According to the Organization for Economic Cooperation and Development, the money produced by drug trafficking throughout the world reached $460 billion in 1993, of which the U.S. received $260 billion, which is circulated through its financial system, as contraband, and through other ways. Colombia, as a producer-exporter, gets only $5 to $7 billion, or 2 to 3% of what remains in the U.S. The big business is, therefore, in that country.”

    See also, Alexander Cockburn and Jeffrey St. Clair, Whiteout: The C.I.A., Drugs and the Press, London: Verso, 1998, pp. 365-371.

  2. On the C.I.A.’s study, see for example, Nicholas C. McBride, “Bill would regulate chemical exports,” Christian Science Monitor, July 27, 1988, p. 3. An excerpt: “A report obtained from the Central Intelligence Agency says that since 1983, there has been a sharp increase in Latin American imports of chemicals used to manufacture illegal drugs, among other purposes. It concludes that the imports far exceed those necessary for legitimate uses. Most of the chemicals are produced in the U.S.… ‘Ninety-five percent of the chemicals necessary to manufacture cocaine in Latin America originate in the United States,’ says Gene R. Haislip, a deputy assistant administrator for the federal Drug Enforcement Administration.”

    Douglas Jehl, “Cocaine Has A Made In U.S.A. Label; American Firms Make Most Of The Solvents That Routinely Wind Up In Colombian Cocaine Labs — That Chemical Trail Is Surprisingly Easy To Follow,” Los Angeles Times, December 5, 1989, p. A1; Brook Larmer, “U.S., Mexico Try to Halt Chemical Flow to Cartels: Latin drug lords rely almost wholly on U.S.-made products to turn coca into cocaine,” Christian Science Monitor, October 23, 1989, p. 1.

Alex Ross on Mozart, and on the avant-garde

From The Storm of Style:

What Mozart might have done next [if he hadn’t died young] is anyone’s guess. The pieces that emerged from the suddenly productive year 1791 — The Magic Flute, the ultimate Leopoldian synthesis of high and low; La Clemenza di Tito, a robust revival of the aging art of opera seria; the silken lyricism of the Clarinet Concerto; the Requiem, at once cerebral and raw — form a garden of forking paths. Mozart was still a young man, discovering what he could do. In the unimaginable alternate universe in which he lived to the age of seventy, an anniversary-year essay might have contained a sentence such as this: “Opera houses focus on the great works of Mozart’s maturity — The Tempest, Hamlet, the two-part Faust — but it would be a good thing if we occasionally heard that flawed yet lively work of his youth, Don Giovanni.”

And, on a totally different topic, from Listen to This:

Picture music as a map, and musical genres as continents — classical music as Europe, jazz as America, rock as Asia. Each genre has its distinct culture of playing and listening. Between the genres are the cold oceans of taste, which can be cruel to musicians who try to cross over. There are always brave souls willing to make the attempt: Aretha Franklin sings “Nessun dorma” at the Grammys; Paul McCartney writes a symphony; violinists perform on British TV in punk regalia or lingerie. Such exploits get the kind of giddy attention that used to greet early aeronautical feats like Charles Lindbergh’s solo flight and the maiden voyage of the Hindenburg. There is another route between genres.

It’s the avant-garde path — a kind of icy Northern Passage that you can traverse on foot. Practitioners of free jazz, underground rock, and avant-garde classical music are, in fact, closer to one another than they are to their less radical colleagues. Listeners, too, can make unexpected connections in this territory. As I discovered in my college years, it is easy to go from the orchestral hurly-burly of Xenakis and Penderecki to the free-jazz piano of Cecil Taylor and the dissonant rock of Sonic Youth. For lack of a better term, call it the art of noise.

“Noise” is a tricky word that quickly slides into the pejorative. Often, it’s the word we use to describe a new kind of music that we don’t understand. Variations on the put-down “That’s just noise” were heard at the premiere of Stravinsky’s Rite of Spring, during Dylan’s first tours with a band, and on street corners when kids started blasting rap. But “noise” can also accurately describe an acoustical phenomenon, and it needn’t be negative. Human ears are attracted to certain euphonious chords based on the overtone series; when musicians pile on too many extraneous tones, the ear “maxes out.” This is the reason that free jazz, experimental rock, and experimental classical music seem to be speaking the same language: from the perspective of the panicking ear, they are. It’s a question not of volume but of density. There is, however, pleasure to be had in the kind of harmonic density that shatters into noise. The pleasure comes in the control of chaos, in the movement back and forth across the border of what is comprehensible.

Academic music criticism of popular music

Academic writing on music is often pretty amusing and interesting, especially when it discusses popular artists.

For example, here is Brad Osborn on Radiohead, from a recent issue of Perspectives of New Music:

The British rock group Radiohead has carved out a unique place in the post-millennial rock milieu by tempering their highly experimental idiolect with structures more commonly heard in Top Forty rock styles. In what I describe as a Goldilocks principle, much of their music after OK Computer (1997) inhabits a space between banal convention and sheer experimentation — a dichotomy which I have elsewhere dubbed the ‘Spears–Stockhausen Continuum.’ In the timbral domain, the band often introduces sounds rather foreign to rock music such as the ondes Martenot and highly processed lead vocals within textures otherwise dominated by guitar, bass, and drums (e.g., ‘The National Anthem,’ 2000), and song forms that begin with paradigmatic verse–chorus structures often end with new material instead of a recapitulated chorus (e.g., ‘All I Need,’ 2007). In this article I will demonstrate a particular rhythmic manifestation of this Goldilocks principle known as Euclidean rhythms. Euclidean rhythms inhabit a space between two rhythmic extremes, namely binary metrical structures with regular beat divisions and irregular, unpredictable groupings at multiple levels of structure.
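
The excerpt mentions Euclidean rhythms without defining them. For the curious, here is a minimal Python sketch of the standard construction (distributing k onsets as evenly as possible over n steps, per Toussaint's definition); the function name and 0/1 output convention are my own, and the formula yields each rhythm up to rotation:

```python
def euclidean_rhythm(onsets: int, steps: int) -> list[int]:
    """Spread `onsets` attacks as evenly as possible over `steps` slots.

    Returns a 0/1 list; equivalent to Toussaint's E(k, n) up to rotation.
    """
    if not 0 <= onsets <= steps:
        raise ValueError("need 0 <= onsets <= steps")
    # Step i carries an onset exactly when the running total i*k/n
    # crosses an integer boundary, i.e. when (i * onsets) % steps < onsets.
    return [1 if (i * onsets) % steps < onsets else 0 for i in range(steps)]

# E(3, 8) gives the familiar tresillo pattern: [1, 0, 0, 1, 0, 0, 1, 0]
```

Such patterns sit between the two extremes the abstract names: with k dividing n they collapse into regular beat divisions, while other (k, n) pairs give maximally even but irregular groupings.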

[Read more…]

A few bites from Superforecasting

Wish I could get my hands on this:

Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives.
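
The book gives no implementation details, so the following is a purely hypothetical sketch of what such a diversity-emphasizing selector could look like: tag each source along a few axes and pick whichever source is least like the recent reading history (all tag names and the tie-breaking rule are my invention):

```python
import random
from collections import Counter

def pick_next(sources, history, n_recent=20):
    """Return the source least similar to what was read recently.

    `sources`: dicts with "ideology", "subject", and "region" tags.
    `history`: previously read sources, most recent last.
    """
    recent = history[-n_recent:]
    # Count how often each tag value appeared among the recent reads.
    seen = {axis: Counter(s[axis] for s in recent)
            for axis in ("ideology", "subject", "region")}

    def familiarity(src):
        return sum(seen[axis][src[axis]] for axis in seen)

    lowest = min(familiarity(s) for s in sources)
    # Break ties randomly among the least-familiar candidates.
    return random.choice([s for s in sources if familiarity(s) == lowest])
```

A reader who had just finished five left-leaning U.S. economics sources would, under this sketch, be steered toward something with different tags on every axis.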

[Read more…]

Effective altruism definitions

Everyone has their own, it seems:

…the basic tenet of Effective Altruism: leading an ethical life involves using a portion of personal assets and resources to effectively alleviate the consequences of extreme poverty.

From the latest Life You Can Save newsletter.

Yudkowsky on science and programming

There are serious disanalogies, too, but I still like this, from Eliezer Yudkowsky:

Lots of superforecasters [link] are programmers, it turns out, presumably for the same reason lots of programmers are correct contrarians of any other stripe. (My hypothesis is a mix of a lawful thinking gear, real intellectual difficulty of daily practice, and the fact that the practice of debugging is the only profession that has a fast loop for hypothesis formulation, testing, and admission of error. Programming is vastly more scientific than academic science.)

And later:

You’d need to have lived the lives of Newton, Lavoisier, Einstein, Fermi, and Kahneman all put together to be proven wrong about as many facts as a programmer unlearns in one year of debugging, though admittedly they’d be deeper and more emotionally significant facts.

Operators sleeping at the wrong time

An interesting sentence from Monk (2012):

Perhaps the two most dramatic examples of operators being asleep when they should have been awake are the airliner full of passengers that overflew its U.S. West Coast airport and headed out over the Pacific with all of the flight crew asleep, and the Peach Bottom nuclear power plant in Pennsylvania, where regulators made an unexpected visit one night only to find everyone in the control room asleep (Lauber & Kayten, 1988).

Wiener on the AI control problem in 1960

Norbert Wiener in Science in 1960:

Similarly, when a machine constructed by us is capable of operating on its incoming data at a pace which we cannot keep, we may not know, until too late, when to turn it off. We all know the fable of the sorcerer’s apprentice, in which the boy makes the broom carry water in his master’s absence, so that it is on the point of drowning him when his master reappears. If the boy had had to seek a charm to stop the mischief in the grimoires of his master’s library, he might have been drowned before he had discovered the relevant incantation. Similarly, if a bottle factory is programmed on the basis of maximum productivity, the owner may be made bankrupt by the enormous inventory of unsalable bottles manufactured before he learns he should have stopped production six months earlier.

The “Sorcerer’s Apprentice” is only one of many tales based on the assumption that the agencies of magic are literal-minded. There is the story of the genie and the fisherman in the Arabian Nights, in which the fisherman breaks the seal of Solomon which has imprisoned the genie and finds the genie vowed to his own destruction; there is the tale of the “Monkey’s Paw,” by W. W. Jacobs, in which the sergeant major brings back from India a talisman which has the power to grant each of three people three wishes. Of the first recipient of this talisman we are told only that his third wish is for death. The sergeant major, the second person whose wishes are granted, finds his experiences too terrible to relate. His friend, who receives the talisman, wishes first for £200. Shortly thereafter, an official of the factory in which his son works comes to tell him that his son has been killed in the machinery and that, without any admission of responsibility, the company is sending him as consolation the sum of £200. His next wish is that his son should come back, and the ghost knocks at the door. His third wish is that the ghost should go away.

Disastrous results are to be expected not merely in the world of fairy tales but in the real world wherever two agencies essentially foreign to each other are coupled in the attempt to achieve a common purpose. If the communication between these two agencies as to the nature of this purpose is incomplete, it must only be expected that the results of this cooperation will be unsatisfactory. If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, because the action is so fast and irrevocable that we have not the data to intervene before the action is complete, then we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it.

Arthur Samuel replied in the same issue with “a refutation”:

A machine is not a genie, it does not work by magic, it does not possess a will… The “intentions” which the machine seems to manifest are the intentions of the human programmer, as specified in advance, or they are subsidiary intentions derived from these, following rules specified by the programmer… There is (and logically there must always remain) a complete hiatus between (i) any ultimate extension and elaboration in this process of carrying out man’s wishes and (ii) the development within the machine of a will of its own. To believe otherwise is either to believe in magic or to believe that the existence of man’s will is an illusion and that man’s actions are as mechanical as the machine’s. Perhaps Wiener’s article and my rebuttal have both been mechanistically determined, but this I refuse to believe.

An apparent exception to these conclusions might be claimed for projected machines of the so-called “neural net” type… Since the internal connections would be unknown, the precise behavior of the nets would be unpredictable and, therefore, potentially dangerous… If practical machines of this type become a reality we will have to take a much closer look at their implications than either Wiener or I have been able to do.

Rhodes on nuclear security

Some quotes from Rhodes’ Arsenals of Folly.

#1:

In the 1950s, when the RBMK [nuclear power reactor] design was developed and approved, Soviet industry had not yet mastered the technology necessary to manufacture steel pressure vessels capacious enough to surround such large reactor cores. For that reason, among others, scientists, engineers, and managers in the Soviet nuclear-power industry had pretended for years that a loss-of-coolant accident was unlikely to the point of impossibility in an RBMK. They knew better. The industry had been plagued with disasters and near-disasters since its earliest days. All of them had been covered up, treated as state secrets; information about them was denied not only to the Soviet public but even to the industry’s managers and operators. Engineering is based on experience, including operating experience; treating design flaws and accidents as state secrets meant that every other similar nuclear-power station remained vulnerable and unprepared.

Unknown to the Soviet public and the world, at least thirteen serious power-reactor accidents had occurred in the Soviet Union before the one at Chernobyl. Between 1964 and 1979, for example, repeated fuel-assembly fires plagued Reactor Number One at the Beloyarsk nuclear-power plant east of the Urals near Novosibirsk. In 1975, the core of an RBMK reactor at the Leningrad plant partly melted down; cooling the core by flooding it with liquid nitrogen led to a discharge of radiation into the environment equivalent to about one-twentieth the amount that was released at Chernobyl in 1986. In 1982, a rupture of the central fuel assembly of Chernobyl Reactor Number One released radioactivity over the nearby bedroom community of Pripyat, now in 1986 once again exposed and at risk. In 1985, a steam relief valve burst during a shaky startup of Reactor Number One at the Balakovo nuclear-power plant, on the Volga River about 150 miles southwest of Samara, jetting 500-degree steam that scalded to death fourteen members of the start-up staff; despite the accident, the responsible official, Balakovo’s plant director, Viktor Bryukhanov, was promoted to supervise construction at Chernobyl and direct its operation.

[Read more…]

Chomsky on the Peters-Finkelstein affair

More Chomsky, again from Understanding Power (footnotes also reproduced):

Here’s a story which is really tragic… There was this best-seller a few years ago [in 1984], it went through about ten printings, by a woman named Joan Peters… called From Time Immemorial.1 It was a big scholarly-looking book with lots of footnotes, which purported to show that the Palestinians were all recent immigrants [i.e. to the Jewish-settled areas of the former Palestine, during the British mandate years of 1920 to 1948]. And it was very popular — it got literally hundreds of rave reviews, and no negative reviews: the Washington Post, the New York Times, everybody was just raving about it.2 Here was this book which proved that there were really no Palestinians! Of course, the implicit message was, if Israel kicks them all out there’s no moral issue, because they’re just recent immigrants who came in because the Jews had built up the country. And there was all kinds of demographic analysis in it, and a big professor of demography at the University of Chicago [Philip M. Hauser] authenticated it.3 That was the big intellectual hit for that year: Saul Bellow, Barbara Tuchman, everybody was talking about it as the greatest thing since chocolate cake.4

[Read more…]

  1. For Peters’s book, see Joan Peters, From Time Immemorial: The Origins of the Arab-Jewish Conflict Over Palestine, New York: Harper and Row, 1984. Scarcely eight months after its publication, the book went into its seventh printing, and Joan Peters reportedly had 250 speaking engagements scheduled for the upcoming year.
  2. Some of the reviewers’ blurbs reprinted in the paperback edition of the Peters book include:

    • “This book is a historical event in itself.” (Barbara Tuchman)
    • “A superlative book… To understand what is happening in the Middle East, one must begin with its past, which Miss Peters traces to the present with unmatched skill.” (Theodore H. White)
    • “Every political issue claiming the attention of a world public has its ‘experts’ — news managers, anchor men, ax grinders, and anglers. The great merit of this book is to demonstrate that, on the Palestinian issue, these experts speak from utter ignorance. Millions of people the world over, smothered by false history and propaganda, will be grateful for this clear account of the origins of the Palestinians.” (Saul Bellow)
    • “Joan Peters’ book provides necessary demographic and historic perspectives which have been inexplicably and substantially ignored until now, but without which misconceptions and policy distortions are inevitable. The reader will be most impressed with the thoroughness and prodigious input this work entails, as I was.” (Philip M. Hauser, Director Emeritus, Population Research Center, The University of Chicago; former Acting Director of U.S. Census)
    • “Joan Peters strikes a heavy blow against the broad consensus about ‘the Palestinians’ and the assumption that Palestinian rights are at the heart of the Arab-Israeli conflict… From Time Immemorial supplies abundant justification for reversing the moral and legal presumptions that have cast Israel in the role of defendant before the court of world opinion.” (William V. O’Brien, Georgetown University)
    • “The massive research Ms. Peters did… would have daunted Hercules. In the course of it she turned up a great deal of interesting material from Ottoman records, the reports of Western consular officers and observant travelers and other sources.” (New York Times Book Review)
    • “A remarkable document in itself…The refugees are not the problem but the excuse.” (Washington Post Book World)
    • “Everything in this book reads like hard news… One woman walks in and scoops them all… The great service provided here by Mrs. Peters — if only attention is paid — is to lay a groundwork for peace by clearing away the farrago of lies.” (National Review)
    • “This book, if read, will change the mind of our generation. If understood, it could also affect the history of the future.” (New Republic)
    • “The reader comes away not only rethinking the Middle East refugee problem, but also the extent to which propaganda can be swallowed whole for lack of information.” (Los Angeles Times)
    • “From Time Immemorial is impressive, informative, absorbing. All those who are interested in the Arab-Israeli questions will benefit from Joan Peters’s insight and analysis.” (Elie Wiesel)
    • “From Time Immemorial will surely change the way we think about that still fiercely contested land once called Palestine. For Joan Peters has dug beneath a half-century’s accumulation of propaganda and brought into the light the historical truth about the Middle East. With a wealth of authoritative evidence, she exposes the tangle of lies and false claims by which the Arabs have tried to justify their unending violence. Everyone who hopes for peace in the Middle East between Jews and Arabs will want to read this book — will have to read this book.” (Lucy Dawidowicz)

  3. On professor Hauser, see footnotes 19 and 25 of this chapter.
  4. For Tuchman’s and others’ jubilation about the Peters book, see footnotes 19 and 25 of this chapter.

Car-hacking

From Wired:

[Remote-controlling a Jeep] is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone. Uconnect, an Internet-connected computer feature in hundreds of thousands of Fiat Chrysler cars, SUVs, and trucks, controls the vehicle’s entertainment and navigation, enables phone calls, and even offers a Wi-Fi hot spot. And thanks to one vulnerable element… Uconnect’s cellular connection also lets anyone who knows the car’s IP address gain access from anywhere in the country.

Schlosser on nuclear security

Some quotes from Schlosser’s Command and Control.

#1:

On January 23, 1961, a B-52 bomber took off from Seymour Johnson Air Force Base in Goldsboro, North Carolina, for an airborne alert… [Near] midnight… the boom operator of [a refueling] tanker noticed fuel leaking from the B-52’s right wing. Spray from the leak soon formed a wide plume, and within two minutes about forty thousand gallons of jet fuel had poured from the wing. The command post at Seymour Johnson told the pilot, Major Walter S. Tulloch, to dump the rest of the fuel in the ocean and prepare for an emergency landing. But fuel wouldn’t drain from the tank inside the left wing, creating a weight imbalance. At half past midnight, with the flaps down and the landing gear extended, the B-52 went into an uncontrolled spin…

The B-52 was carrying two Mark 39 hydrogen bombs, each with a yield of 4 megatons. As the aircraft spun downward, centrifugal forces pulled a lanyard in the cockpit. The lanyard was attached to the bomb release mechanism. When the lanyard was pulled, the locking pins were removed from one of the bombs. The Mark 39 fell from the plane. The arming wires were yanked out, and the bomb responded as though it had been deliberately released by the crew above a target. The pulse generator activated the low-voltage thermal batteries. The drogue parachute opened, and then the main chute. The barometric switches closed. The timer ran out, activating the high-voltage thermal batteries. The bomb hit the ground, and the piezoelectric crystals inside the nose crushed. They sent a firing signal. But the weapon didn’t detonate.

Every safety mechanism had failed, except one: the ready/safe switch in the cockpit. The switch was in the SAFE position when the bomb dropped. Had the switch been set to GROUND or AIR, the X-unit would’ve charged, the detonators would’ve triggered, and a thermonuclear weapon would have exploded in a field near Faro, North Carolina…

The other Mark 39 plummeted straight down and landed in a meadow just off Big Daddy’s Road, near the Nahunta Swamp. Its parachutes had failed to open. The high explosives did not detonate, and the primary was largely undamaged…

The Air Force assured the public that the two weapons had been unarmed and that there was never any risk of a nuclear explosion. Those statements were misleading. The T-249 control box and ready/safe switch, installed in every one of SAC’s bombers, had already raised concerns at Sandia. The switch required a low-voltage signal of brief duration to operate — and that kind of signal could easily be provided by a stray wire or a short circuit, as a B-52 full of electronic equipment disintegrated midair.

A year after the North Carolina accident, a SAC ground crew removed four Mark 28 bombs from a B-47 bomber and noticed that all of the weapons were armed. But the seal on the ready/safe switch in the cockpit was intact, and the knob hadn’t been turned to GROUND or AIR. The bombs had not been armed by the crew. A seven-month investigation by Sandia found that a tiny metal nut had come off a screw inside the plane and lodged against an unused radar-heating circuit. The nut had created a new electrical pathway, allowing current to reach an arming line — and bypass the ready/safe switch. A similar glitch on the B-52 that crashed near Goldsboro would have caused a 4-megaton thermonuclear explosion. “It would have been bad news — in spades,” Parker F. Jones, a safety engineer at Sandia, wrote in a memo about the accident. “One simple, dynamo-technology, low-voltage switch stood between the United States and a major catastrophe!”

[Read more…]

Chomsky on overthrowing third world governments

Noam Chomsky is worth reading because he’s an articulate, well-informed, sources-citing defender of unconventional views rarely encountered in mainstream venues. It’s hard for me to evaluate his views because he isn’t very systematic in his presentations of evidence for his core political theses — but then, hardly anybody is. But whether his views are fair or not, I think it’s good to stick my head outside the echo chamber regularly.1

Personally, I’m most interested in his perspectives on plutocracy, international relations, and state violence. On those topics, Understanding Power (+450 pages of footnotes) is a pretty good introduction to his views.

To give you a feel for the book, I’ll excerpt a passage from chapter 1 of Understanding Power about overthrowing third world governments. I’ve also reproduced the (renumbered) footnotes for this passage.

[Read more…]

  1. Just to get this out of the way: Yes, it seems Chomsky initially misread the evidence for the scale of the Cambodian genocide, and he was slow to admit his error, and that’s pretty bad. That doesn’t make him an “apologist” for the Khmer Rouge, and it doesn’t mean he’s wrong about everything else he has said.

Feinstein on the global arms trade

Some quotes from Feinstein’s The Shadow World: Inside the Global Arms Trade.

#1:

The £75m Airbus, painted in the colours of the [Saudi] Prince’s beloved Dallas Cowboys, was a gift from the British arms company BAE Systems. It was a token of gratitude for the Prince’s role, as son of the country’s Defence Minister, in the biggest arms deal the world has seen. The Al Yamamah – ‘the dove’ – deal signed between the United Kingdom and Saudi Arabia in 1985 was worth over £40bn. It was also arguably the most corrupt transaction in trading history. Over £1bn was paid into accounts controlled by Bandar. The Airbus – maintained and operated by BAE at least until 2007 – was a little extra, presented to Bandar on his birthday in 1988.

A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.

[Read more…]

Chomsky on Diaperology

An amusing excerpt from Chomsky’s Understanding Power (footnote also reproduced):

In the late 1940s, the United States just ran [the U.N.] completely — international relations of power were such that the U.S. just gave the orders and everybody followed, because the rest of the world was smashed up and starving after the Second World War. And at the time, everybody here [in the U.S.] loved the U.N., because it always went along with us: every way we told countries to vote, they voted. Actually, when I was a graduate student around 1950, major social scientists, people like Margaret Mead, were trying to explain why the Russians were always saying “no” at the U.N. — because here was the United States putting through these resolutions and everybody was voting “yes,” then the Russians would stand up and say “no.” So of course they went to the experts, the social scientists, to figure it out. And what they came up with was something we used to call “diaperology”; the conclusion was, the reason the Russians always say “no” at the U.N. is because they raise their infants with swaddling clothes… Literally — they raise their infants with swaddling clothes in Russia, so Russians end up very negative, and by the time they make it to the U.N. all they want to do is say “no” all the time. That was literally proposed, people took it seriously, there were articles in the journals about it, and so on.1

  1. For “diaperology,” see for example, Margaret Mead, “What Makes The Soviet Character?,” Natural History, September 1951, pp. 296f. An excerpt: “The Russian baby was swaddled, as were most of the infants of Eastern peoples and as Western European infants used to be, but they were swaddled tighter and longer than were, for example, their neighbors, the Poles… This early period seems to have left a stronger impression on Russian character than the same period of learning does for members of many other societies in which the parents are more preoccupied with teaching skills appropriate to later stages of development… So we find in traditional Russian character elaborated forms of these very early learnings. There is a tendency to confuse thought and action, a capacity for impersonal anger as at the constriction of the swaddling bands… We may expect everything we do to look different to them from the way it looks to us… In communicating with people who think as differently as this, successful plans either for limited co-operation in the attainment of partial world goals or for active opposition depend upon our getting an accurate estimate of what the Soviet people of today are like. We must know just what the differences in their thinking and feeling are.”

Morris’ thesis in Foragers, Farmers, and Fossil Fuels

From the opening of chapter 5:

I suggested that modern human values initially emerged somewhere around 100,000 years ago (±50,000 years) as a consequence of the biological evolution of our big, fast brains, and that once we had our big, fast brains, cultural evolution became a possibility too. Because of cultural evolution, human values have mutated rapidly in the last twenty thousand years, and the pace of change has accelerated in the last two hundred years.

I identified three major stages in human values, which I linked to foraging, farming, and fossil-fuel societies. My main point was that in each case, modes of energy capture determined population size and density, which in turn largely determined which forms of social organization worked best, which went on to make certain sets of values more successful and attractive than others.

Foragers, I observed, overwhelmingly lived in small, low-density groups, and generally saw political and wealth hierarchies as bad things. They were more tolerant of gender hierarchy, and (by modern lights) surprisingly tolerant of violence. Farmers lived in bigger, denser communities, and generally saw steep political, wealth, and gender hierarchies as fine. They had much less patience than foragers, though, for interpersonal violence, and restricted its range of legitimate uses more narrowly. Fossil-fuel folk live in bigger, denser communities still. They tend to see political and gender hierarchy as bad things, and violence as particularly evil, but they are generally more tolerant of wealth hierarchies than foragers, although not so tolerant as farmers.

Pinker still confused about AI risk

Ack! Steven Pinker still thinks AI risk worries are worries about malevolent AI, despite multiple attempts to correct his misimpression:

John Lily: Silicon Valley techies are divided about whether to be fearful or dismissive of the idea of new super intelligent AI… How would you approach this issue?

Steven Pinker: …I think it’s a fallacy to conflate the ability to reason and solve problems with the desire to dominate and destroy, which sci-fi dystopias and robots-run-amok plots inevitably do. It’s a projection of evolved alpha-male psychology onto the concept of intelligence… So I don’t think that malevolent robotics is one of the world’s pressing problems.

Will someone please tell him to read… gosh, anything on the issue that isn’t a news story? He could also watch this talk by Stuart Russell if that’s preferable.

Minsky on AI risk in the 80s and 90s

Follow-up to: AI researchers on AI risk; Fredkin on AI risk in 1979.

Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge’s novel True Names:1

The ultimate risk comes when our greedy, lazy masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson; or to protect us from an unsuspected enemy, as in Colossus by D.F. Jones…

And according to Eric Drexler (2015), Minsky was making the now-standard “dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal” argument at least as early as 1990:

My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, ca. 1977. I recall a later conversation with Marvin Minsky (then chairing my doctoral committee, ca. 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a subgoal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.

 

  1. An online copy of the afterword is available here, though it has been slightly modified from the original. I am quoting from the original, which was written in 1983. []

Fredkin on AI risk in 1979

Recently, Ramez Naam posted What Do AI Researchers Think of the Risks of AI? while guest-blogging at Marginal Revolution. Naam quoted several risk skeptics like Ng and Etzioni, while conspicuously neglecting to mention any prominent AI people who take the risk seriously, such as Russell, Horvitz, and Legg. Scott Alexander at Slate Star Codex replied by quoting several prominent AI scientists past and present who seem to have taken the risk seriously. And let’s not forget that the leading AI textbook, by Russell and Norvig, devotes 3.5 pages to potential existential catastrophe from advanced AI, and cites MIRI’s work specifically.

Luckily we can get a clearer picture of current expert opinion by looking at the results of a recent survey which asked the top 100 most-cited living AI scientists when they thought AGI would arrive, how soon after AGI we’d get superintelligence, and what the likely social impact of superintelligence would be.1

But at the moment, I just want to mention one additional computer scientist who seems to have been concerned about AI risk for a long time: Ed Fredkin.2

In Pamela McCorduck’s history of the first few decades of AI, Machines Who Think (1979), Fredkin is quoted extensively on AI risk. Fredkin said (ch. 14):

Eventually, no matter what we do there’ll be artificial intelligences with independent goals. I’m pretty much convinced of that. There may be a way to postpone it. There may even be a way to avoid it, I don’t know. But it’s very hard to have a machine that’s a million times smarter than you as your slave.

…And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn’t get pulled under any circumstances, regardless of its real motives — if it has any.

…I can’t persuade anyone else in the field to worry this way… They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, “Well yes, you’re right, but it would be a great disservice to the world to mention all this.”…my colleagues only tell me to wait, not to make my pitch until it’s more obvious that we’ll have artificial intelligences. I think by then it’ll be too late. Once artificial intelligences start getting smart, they’re going to be very smart very fast. What’s taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI lab may have immense power all of a sudden. It’s not that the United States might take over the world, it’s that Stanford AI Lab might.

…And so what I’m trying to do is take steps to see that… an international laboratory gets formed, and that these ideas get into the minds of enough people. McCarthy, for lots of reasons, resists this idea, because he thinks the Russians would be untrustworthy in such an enterprise, that they’d swallow as much of the technology as they could, contribute nothing, and meanwhile set up a shadow place of their own running at the exact limit of technology that they could get from the joint effort. And as soon as that made some progress, keep it secret from the rest of us so they could pull ahead… Yes, he might be right, but it doesn’t matter. The international laboratory is by far the best plan; I’ve heard of no better plan. I still would like to see it happen: let’s be active instead of passive…

…There are three events of equal importance, if you like. Event one is the creation of the universe. It’s a fairly important event. Event two is the appearance of life. Life is a kind of organizing principle which one might argue (if one didn’t understand enough) shouldn’t or couldn’t happen on thermodynamic grounds, or some such. And, third, there’s the appearance of artificial intelligence. It’s the question which deals with all questions… If there are any questions to be answered, this is how they’ll be answered. There can’t be anything of more consequence to happen on this planet.

Fredkin, now 80, continues to think about AI risk — about the relevance of certification to advanced AI systems, about the race between AI safety knowledge and AI capabilities knowledge, etc.3 I’d be very curious to learn what Fredkin thinks of the arguments in Superintelligence.

  1. The short story is:

    1. The median estimate was that there was a 50% chance of AGI by 2050, and a 90% chance of AGI by 2070.
    2. The median estimate on AGI-to-superintelligence timing was that there was a 10% chance of superintelligence within 2 years of AGI, and a 75% chance of superintelligence within 30 years of AGI.
    3. When asked whether the social impact of superintelligence would be “extremely bad” or “extremely good” or somewhere in-between, the experts tended to think good outcomes were more likely than bad outcomes, but not super-confidently. (See section 3.4 of the paper.)

    []

  2. This isn’t to say Fredkin had, in 1979, anything like the Bostrom-Yudkowsky view on AI risk. For example he seems to have thought that most of the risk is during a transition period, and that once machines are superintelligent they will be able to discern our true motives. The Bostrom-Yudkowsky school would reply that “the genie knows but doesn’t care” (also see Superintelligence, p. 121). []
  3. I learned this via Yudkowsky, who had some communication with Fredkin in 2013. []

How much recent investment in AI?

Stuart Russell:

Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s].

My guess is that Russell doesn’t have a source for this; it’s just his own estimate, based on his history in the field and his knowledge of what’s been happening lately. But it might very well be true; I’m not sure.

Also see How Big is the Field of Artificial Intelligence?