Funny or interesting Scaruffi Quotes (part 4)

Previously: 1, 2, 3.

On Zeni Geva (here):

Zeni Geva indulged in dissonant and gloomy orgies, in the tradition of early Swans and Big Black (but with no bass), on albums such as Maximum Money Monster (1990), Desire For Agony (1993), and especially Total Castration (1992). Null’s solo work, notably Absolute Heaven (1994) and Ultimate Material II (1995), continued to straddle the border between extreme noise and very extreme noise.

On Merzbow (here and here):

Merzbow, the brainchild of Masami Akita, one of the most prolific musicians of all time (not a compliment), was a theoretician of surrealism in music but practiced a form of savage violence that was more akin to a suicide bombing on non-musical works such as Rainbow Electronics (1990), Music For Bondage Performance (1991), Venereology (1994) and Tauromachine (1998).

Merzbox (1997) is a 50-CD box set that “summarizes” his career, arriving just as he passed the milestone of his 200th album. It includes 30 reprints of CDs, LPs and cassettes, as well as 20 unreleased albums.

…It is difficult to tell whether Dharma (2001) is a masterpiece or another Merzbow self-parody … but maybe that’s precisely what Merzbow is all about. One of their most savage noise recordings, it includes the massive (32 minutes), gargantuan, arcane musique concrète of “Frozen Guitars and Sunloop / 7E 802,” which after eight minutes turns into a maelstrom exuding a sense of desperation and after sixteen enters an endless free fall; the crescendo of “I’m Coming to the Garden No Sound No Memory,” which achieves a screeching intensity; the nuclear carpet bombing of “Akashiman”; and the eight-minute chamber composition “Space Plan For Marimo Kitty” for random piano notes and alien electronic interference.

By the same token, on Frog (2001), a sequence of variations on frogs, Masami Akita seems to make fun of the fans who take him seriously.

[Read more…]

William Rathbone, effective altruist?

William Rathbone (1819-1902), writing in 1867 about philanthropy:

It is true that there is among the rich much desultory and indolent goodwill towards the poor… which, if properly stimulated by a sense of positive and imperative obligation, and guided to a safe and effectual mode of action, might be made instrumental of much good at present left undone. It is true that a new hospital finds plenty of rich men willing to give money for its establishment and support; that any striking case of distress, calculated to touch the sympathies of the public, which may be recorded in the newspapers, generally attracts a superabundance of charitable donations… Probably, in by far the greater number of instances, the feeling that prompts them is one of genuine compassion. But it would be wrong to ascribe much merit to such emotional liberality; to look upon it as proof that the rich are properly sensible of their duties and responsibilities. The desultory nature of so much of our charity; the stimulus it requires from fancy-balls and bazaars; the greater facility with which a new institution obtains subscriptions for want of which an old one, equally meritorious, languishes; the amount of time and energy which the managers of a charity are so often forced to consume in drumming together the funds required for its support — time and energy which should be devoted to the mere task of efficient management — all these are significant evidence that the manifestations of generosity of which we hear so much proceed not from a strong and clear sense of duty, but from a vague sentiment of compassion; that people give less in obedience to principle than under a sudden impulse of feeling, less to fulfill an obligation than to relieve themselves of an uneasy though vague sensation of compunction. Few among the rich realize that charity is not a virtue of supererogation, but a divine charge upon their wealth, which they have no right to neglect.
They give to this or that family whose story interests them, to this or that institution for the relief of some form of distress which peculiarly touches their sympathies, with no idea that the matter is not one in which they have a right to indulge their caprice; that all the misery within their sphere is an evil with which it is their duty to grapple, to which they are bound to apply the remedial energies and resources at their command, not as suits their taste or fancy, but as may be most efficacious in the relief of suffering… In short, charity is with them a matter of sentiment, not of principle…

…Do the rich give as large a proportion of their incomes, even, as these poorer contributors? They should do much more, for they can afford much more. £50 represents a much larger deduction from the real comforts and enjoyments procurable with an income of £500, than does £500 taken from an income of £5000. As expenditure increases it is less on necessaries and more on luxuries; even its power of giving proportionate enjoyment to the possessor diminishes. The man who increases his expenditure from £1000 to £2000 may perhaps — though it is doubtful — get a thousand pounds worth of increased enjoyment from the addition. But if so, he certainly does not get an equal increase when he goes on from £2000 to £3000 or from £3000 to £4000. The larger the expenditure, the less the proportion of pleasure derived to money laid out. And therefore, both because the deduction involves a less sacrifice, and because it is just and reasonable to hold that money should be so spent as to produce a reasonable return of enjoyment to some one, it may fairly be urged that the larger the income, the larger should be the proportion spent in charity… Unhappily it is the fact that men of large means generally — for there are exceptions — spend a smaller percentage of those means in charity than do men of limited incomes…
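Rathbone is describing what economists now call diminishing marginal utility of income. His arithmetic can be sketched with a CRRA utility function (risk aversion of 2 is my illustrative choice here, not anything Rathbone specifies):

```python
def crra_utility(c, gamma=2.0):
    """CRRA utility of income c; gamma > 1 means returns diminish
    faster than logarithmically."""
    return (c ** (1 - gamma) - 1) / (1 - gamma)

def sacrifice(income, donation, gamma=2.0):
    """Utility lost by giving `donation` out of `income`."""
    return crra_utility(income, gamma) - crra_utility(income - donation, gamma)

# Rathbone's comparison: 50 pounds out of 500 vs. 500 pounds out of 5000.
poor_giver = sacrifice(500, 50)
rich_giver = sacrifice(5000, 500)
print(poor_giver / rich_giver)  # ~10: the same 10% costs the poorer giver far more
```

Under purely logarithmic utility (gamma = 1) the two sacrifices would be exactly equal, so Rathbone’s claim that the £50 gift is the larger sacrifice implicitly assumes utility even more concave than the logarithm.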

Rhodri Davies (around 22m) adds that Rathbone also grappled with the question of “earning to give” vs. “direct work”, saying:

Margaret Simey’s book… says that Rathbone was torn between… whether he should go into the ministry and help the poor directly or whether he should go into business, and eventually [she writes] “viewing the issue in the light of common sense, [Rathbone] decided that for him, an effective life of public service would depend on his possession of the influence and respect secured by success in business. Accordingly, he set himself doggedly to the task of building up the family fortunes, which had suffered from the devotion of his father and grandfather to public work.” So he took his own self interest out of it — because he probably would have preferred to work directly with the poor — but he thought that actually what [he] should do is go off and maximize the amount of money he could make and [maximize] his political influence and connections, and then use [those things] to do the maximum amount of good.

Funny or interesting Scaruffi Quotes (part 3)

Previously: 1, 2.

On Sonic Youth (Google translated):

Sonic Youth have embodied the figure of the musician who aims to transcend the stereotypes of his time and explore new musical forms while remaining faithful to a nihilistic, alienated ethic like that of the punks. In this sense Sonic Youth are heirs to both punk-rock and new-wave, although they have little in common with either, musically or sociologically. Their origins are in avant-garde classical music, their vocations (as their solo works have shown) are creative jazz and rock music, and their personalities belong to the art galleries and intellectual circles of New York. Contrary to what might seem on first listening, Sonic Youth have never repudiated the rock-song format. Their lineup is the typical guitar quartet of rock music. Their songs are almost always structured around a theme and contained within three or four minutes. Even in their most experimental moments, Sonic Youth have followed their rock-and-roll roots.

[Read more…]

Some funny or interesting Scaruffi quotes (part 2)

(Previously.)

On Black Sabbath (Google translated):

Rarely has an artist so technically ill-equipped and so unimaginative had such a great influence on subsequent generations…

Black Sabbath were a constant assault on the cultured tradition of Western civilization, and a continual exaltation of barbarism and primitivism. They were hated by almost everyone: by the hippies (whose morals they exactly inverted), by the rockers (who were horrified by their technical inadequacy), and by the singer-songwriters (who wrote much more meaningful lyrics). But the average teenager had neither the culture nor the vocation to judge Black Sabbath’s music and, all things considered, their harmonic simplicity offered a form of collective appeal far easier to grasp than King Crimson’s symphonic poems or Pink Floyd’s psychedelic scores. Black Sabbath fans were dirty and bad, but in fact they were listening to Black Sabbath for the same reason the previous generation of clean, good teenagers had listened to The Beatles: their music was the easiest to listen to. Listening to it was a simple act of collective ritual that required no culture and no intelligence. But, unlike the Beatles’ fans (who at most became light-music singers), the teenagers who identified with the “ease” of Black Sabbath’s music were precisely those who would go on to form rock bands: Black Sabbath were spreading an alien virus, that of heavy metal.

On Kanye West:

[In 2018] he released “Lift Yourself” that has perhaps his best lyrics ever: Poopy-di scoop / Scoop-diddy-whoop / Whoop-di-scoop-di-poop.

The album Ye… wasn’t even an album: at 23 minutes, it was just an EP. The songs are clumsy and goofy. The best one is “Ghost Town,” because it takes the melody from Shirley Ann Lee’s “Someday,” the organ from Vanilla Fudge’s “Take Me For A Little While,” and because of guest female vocalist Danielle Balbuena, aka 070 Shake. (The only reason that i mention this song is that, if i don’t mention any song, his fans will accuse me of not having listened to the album, but i refuse to publicize any other song).

[Read more…]

Excerpts from The Doomsday Machine

Daniel Ellsberg of Pentagon Papers fame recently published a book about his days as a nuclear war planner, The Doomsday Machine. Below are just a few of the bits I found interesting. (There were many others, but they were more difficult to excerpt.)

My first summer [at RAND] I worked seventy-hour weeks, devouring secret studies and analyses till late every night, to get up to speed on the problems and the possible solutions. I was looking for clues as to how we could frustrate the Soviet versions of RAND and SAC, and do it in time to avert a nuclear Pearl Harbor. Or postpone it. From the Air Force intelligence estimates I was newly privy to, and the dark view of the Soviets, which my colleagues shared with the whole national security community, I couldn’t believe that the world would long escape nuclear holocaust. Alain Enthoven and I were the youngest members of the department. Neither of us joined the extremely generous retirement plan RAND offered. Neither of us believed, in our late twenties, we had a chance of collecting on it.

Just one of many stories on how unreliable Ellsberg found command and control procedures to be:

To prevent unauthorized action by a single duty officer with access to Execute codes in any particular command post, there was a universal and supposedly ironclad rule that at least two such officers must be on duty at all times, day and night, and they must both be involved in, and agree on, the authentication of an order to execute nuclear war plans from a higher authority and on their decision to relay this order to subordinate commands… One way or another, each post purported to have arrangements so that one officer by himself could neither authenticate orders received nor send out authenticated Execute commands.

But in practice, not. As various duty officers explained to me, oftentimes only one man was on duty in the office. The personnel requirements for having two qualified officers sitting around in every such station at literally every moment of the night were just too stringent to be met. Duty rosters did provide for it, but not for backups when one officer “had” to be elsewhere—to get some food or for a medical emergency, his own or, on some bases, his wife’s. Did that mean that all subordinate commands would be paralyzed, unable to receive authenticated Execute orders, if the one remaining duty officer received what appeared to be an order to commence nuclear operations during that interval?

That couldn’t be permitted, in the eyes of the officers assigned to this duty, each of whom had faced up to the practical possibility of this situation. So each of them had provided for it “unofficially,” in his own mind or usually by agreement with his fellow duty officers. Each, in reality, had the combinations to both safes, after all, or some arrangement for acquiring them. If there was only one safe, each officer would, in reality, know the full combination to it. One officer would hold both envelopes when the other had to be away. Where there were more elaborate safeguards, the officers had always spent some of their idle hours late at night figuring out how to circumvent them, “if necessary.” They had always succeeded in doing so. I found this in every post I visited.
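The failure mode Ellsberg describes is a general one in security design: the two-officer rule verifies that two *codes* are presented, not that two *people* presented them. A schematic sketch of that distinction (hypothetical code values, purely illustrative):

```python
SAFE_A = "alpha-combination"  # hypothetical: combination held by officer A
SAFE_B = "bravo-combination"  # hypothetical: combination held by officer B

def authorize(codes_presented):
    """Intended rule: an Execute order is valid only when both codes
    are supplied -- which is *supposed* to require two officers."""
    return SAFE_A in codes_presented and SAFE_B in codes_presented

# Nothing in the mechanism binds each code to a distinct person, so once
# each officer informally learns both combinations -- as Ellsberg found
# they all had -- a single officer satisfies the check alone:
print(authorize({SAFE_A, SAFE_B}))  # True, with only one person present
```

The fix in modern two-person-integrity schemes is to authenticate the people (two physically separated keys, simultaneous actions) rather than merely counting credentials.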

[Read more…]

Pinker on implementing world peace

From Better Angels of Our Nature, ch. 5:

In “Perpetual Peace,” Kant envisioned a “federation of free states” that would fall well short of an international Leviathan. It would be a gradually expanding club of liberal republics rather than a global megagovernment, and it would rely on the soft power of moral legitimacy rather than on a monopoly on the use of force. The modern equivalent is the intergovernmental organization or IGO — a bureaucracy with a limited mandate to coordinate the policies of participating nations in some area in which they have a common interest. The international entity with the best track record for implementing world peace is probably not the United Nations, but the European Coal and Steel Community, an IGO founded in 1950 by France, West Germany, Belgium, the Netherlands, and Italy to oversee a common market and regulate the production of the two most important strategic commodities. The organization was specifically designed as a mechanism for submerging historic rivalries and ambitions — especially West Germany’s — in a shared commercial enterprise. The Coal and Steel Community set the stage for the European Economic Community, which in turn begot the European Union.

Many historians believe that these organizations helped keep war out of the collective consciousness of Western Europe. By making national borders porous to people, money, goods, and ideas, they weakened the temptation of nations to fall into militant rivalries, just as the existence of the United States weakens any temptation of, say, Minnesota and Wisconsin to fall into a militant rivalry. By throwing nations into a club whose leaders had to socialize and work together, they enforced certain norms of cooperation. By serving as an impartial judge, they could mediate disputes among member nations. And by holding out the carrot of a vast market, they could entice applicants to give up their empires (in the case of Portugal) or to commit themselves to liberal democracy (in the case of former Soviet satellites and, perhaps soon, Turkey).

Richard Clarke and R.P. Eddy on AI risk

Richard Clarke and R.P. Eddy recently published Warnings, a book in which they try to identify “those rare people who… have accurate visions of looming disasters.” The opening chapter explains the aims of the book:

…this book will seek to answer these questions: How can we detect a real Cassandra among the myriad of pundits? What methods, if any, can be employed to better identify and listen to these prophetic warnings? Is there perhaps a way to distill the direst predictions from the surrounding noise and focus our attention on them?

…As we proceeded through these Cassandra Event case studies in a variety of different fields, we began to notice common threads: characteristics of the Cassandras, of their audiences, and of the issues that, when applied to a modern controversial prediction of disaster, might suggest that we are seeing someone warning of a future Cassandra Event. By identifying those common elements and synthesizing them into a methodology, we create what we call our Cassandra Coefficient, a score that suggests to us the likelihood that an individual is indeed a Cassandra whose warning is likely accurate, but is at risk of being ignored.

Having established this process for developing a Cassandra Coefficient based on past Cassandra Events, we next listen for today’s Cassandras. Who now among us may be accurately warning us of something we are ignoring, perhaps at our own peril?

Of the risks covered in the book, Clarke says he’s most worried about sea level rise, and Eddy says he’s most worried about superintelligence.

Below is a sampling of what they say in the chapter on risks from advanced AI systems. Note that I’m merely quoting from their take, not necessarily agreeing with it. (Indeed, there are significant parts I disagree with.)

[Read more…]

Hillary Clinton on AI risk

From What Happened, p. 241:

Technologists like Elon Musk, Sam Altman, and Bill Gates, and physicists like Stephen Hawking have warned that artificial intelligence could one day pose an existential security threat. Musk has called it “the greatest risk we face as a civilization.” Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about “the rise of the robots” in some Iowa town hall. Maybe I should have. In any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.

Update 11/24/2017: Clinton said more about AI fears in an interview with Hugh Hewitt:

Bill Gates, Elon Musk, Stephen Hawking, a lot of really smart people are sounding an alarm that we’re not hearing. And their alarm is artificial intelligence is not our friend. It can assist us in many ways if it is properly understood and contained. But we are racing headfirst into a new era of artificial intelligence that is going to have dramatic effects on how we live, how we think, how we relate to each other. You know, what are we going to do when we get driverless cars? It sounds like a great idea. And how many millions of people, truck drivers and parcel delivery people and cab drivers and even Uber drivers, what do we do with the millions of people who will no longer have a job? We are totally unprepared for that. What do we do when we are connected to the internet of things and everything we know and everything we say and everything we write is, you know, recorded somewhere? And it can be manipulated against us. So I, you know, one thing I wanted to do if I had been president was to have a kind of blue ribbon commission with people from all kinds of expertise coming together to say what should America’s policy on artificial intelligence be?

But of course, the worries Gates & Musk & Hawking have expressed are not about self-driving cars.

Monkey classification errors

More Wynne & Udell (2013):

Michael D’Amato and Paul van Sant (1988) trained Cebus apella monkeys to discriminate slides containing people from those that did not. The monkeys readily learned to do this. Then the monkeys were presented with novel slides they had never seen before which contained either scenes with people or similar scenes with no people in them. Here also the monkeys spontaneously classified the majority of slides correctly. So far, so good – clear evidence that the monkeys had not just learned the particular slides they had been trained on but had abstracted a person concept from those slides that they then successfully applied to pictures they had never seen before.

Or had they? D’Amato and van Sant did not stop their analysis simply with the observation that the monkeys had successfully transferred their learning to novel slides – rather they went on to look carefully at the kinds of errors the monkeys had made. Although largely successful with the novel slides, the monkeys made some very puzzling mistakes. For example, one of the person slides that the monkeys had failed to recognize as a picture of a human being had been a head and shoulders portrait – which, to another human, is a classic image of a person. One of the slides that the monkeys had incorrectly classified as containing a human had actually been a shot of a jackal carrying a dead flamingo in its mouth; both the jackal and its prey were also reflected in the water beneath them. What person in her right mind could possibly confuse a jackal with a flamingo in its mouth with another human being?

The explanation for both these mistakes is the same: the monkeys had generalized on the basis of the particular features contained in the slides they had been trained with rather than learning the more abstract concept that the experimenters had intended. The head and shoulders portrait of a person lacked the head-torso-arms-legs body shape that had been most common among the images that the monkeys had been trained with, and consequently, they had rejected it as not similar enough to the positive image they were looking for. Similarly, during training, the only slides that had contained flashes of red happened to be those of people. Three of the training slides had contained people wearing a piece of red clothing, whereas none of the nonperson slides had contained the color red. Consequently, when the jackal with prey slide came along during testing, it contained the color red, and so the monkeys classified it as a person slide.
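The monkeys’ errors have a close analogue in machine learning, where a classifier can latch onto any feature that happens to separate the training set — “contains red” works just as well as “has a human body shape” until novel data arrives. A toy sketch with invented binary features:

```python
# Each "slide" is a pair of binary features: (human_body_shape, contains_red).
# Label 1 = person.  In training, the two features are perfectly correlated.
train = [
    ((1, 1), 1), ((1, 1), 1), ((1, 1), 1),  # people, some wearing red clothing
    ((0, 0), 0), ((0, 0), 0), ((0, 0), 0),  # scenes with no people and no red
]

# A learner free to choose either feature may settle on "contains red":
# it classifies the training set exactly as accurately as body shape does.
def predict_by_red(slide):
    _body_shape, contains_red = slide
    return contains_red

assert all(predict_by_red(x) == y for x, y in train)  # perfect on training data

portrait = (0, 0)  # head-and-shoulders shot: no full body shape, no red
jackal = (0, 1)    # jackal carrying a flamingo: a flash of red, no person

print(predict_by_red(portrait))  # 0 -- rejected as "not a person"
print(predict_by_red(jackal))    # 1 -- accepted as a "person"
```

The training data alone cannot distinguish the intended concept from the spurious one; only out-of-distribution test items (the portrait, the jackal) reveal which feature was actually learned.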

Adversarial examples for pigeons

From Wynne & Udell (2013):

Michael Young and colleagues carried out experiments that add to a sense that the pigeon’s perception of pictures of objects is not identical to our own. They trained pigeons to peck in different locations on a computer-controlled touch screen, depending on which of four different objects was presented: an arch, a barrel, a brick, and a triangular wedge (Young et al., 2001). The objects were initially presented to the pigeons as images shaded to suggest light shining on them from one direction. Next, Young and colleagues tested the pigeons with pictures of the same objects, but this time illuminated from a different direction… To the experimenters’ surprise, the pigeons’ ability to recognize the objects was disturbed by changes in lighting that human observers were barely able to perceive… [see below]

[Figure: results from the pigeon study]

How long does it take to identify, mitigate, and remediate a major problem?

Baiocchi & Welser (2010):

…we conducted a literature survey on each of the [problems comparable to the problem of space debris]. We then determined the length of time spent in each stage (problem identification, establishment of normative behaviors, mitigation, and remediation) based on research from periodical sources, legislative records, and court rulings… Finally, we inspected each timeline and made a judgment about the approximate year in which each problem entered a new stage… The result is shown in Figure 6.1, and it provides a notional comparison that shows how each of the problems progressed through the four stages.

Figure 6.1

Could be interesting to see this kind of analysis for a greater range of societal challenges, or sets of challenges chosen for how similar they are to a different target case. (The target case for this report was space debris.)

A utilitarian foundation?

The introduction to Jacobson (1984) makes it sound as though the John A. Hartford Foundation was roughly cause-neutral and utilitarian in its approach, at least for some of its history:

The 1958 annual report of the Hartford Foundation describes its starting point:

Neither John Hartford nor his brother George, in their bequests to the organization, expressed any wish as to how the funds they provided should be used… Our benefactors’ one common request was that the Foundation strive always to do the greatest good for the greatest number.

…If available funds are to be used effectively, it is necessary to carve from the whole vast spectrum of human needs one small band that the heart and mind together tell you is the area in which you can make your best contribution.

The first task of the Foundation was thus to define the greatest good. Basing its decision on the pattern of John Hartford’s previous giving, the Foundation chose to support biomedical, largely clinical, research. Between 1954 and 1979, the Hartford Foundation participated in some of the most important advances in modern medicine, supplied hospitals and medical centers with equipment that reflected those advances, provided for the training of a generation of researchers, saved countless lives, and involved itself deeply in the burgeoning of the current health care crisis. In that period, the Foundation spent close to $175 million [presumably this is 1984 dollars, i.e. $408 million in 2016 dollars].

…Many modern research-supporting institutions have chosen to bear the costs of close supervision and peer review in order to ensure the quality of projects supported either directly or indirectly by the public. But both the trustees and the staff of the Hartford Foundation came from a background that stressed minimizing administrative costs so as to maximize benefits to the public. During the Foundation’s first seven years as a leading source of funds for biomedical research, the full-time staff consisted of one person. To achieve quality control at low cost, the Foundation adopted a policy of hiring consultants as they were needed to review particular grant applications.

As a matter of policy, too, the Foundation tried to fund projects and types of research that could not obtain funding from other sources. For example, the Hartford Foundation was the first to pay for the patient-bed costs of clinical research. Filling this gap was clearly desirable. But the Foundation also supported some researchers whose theories or personalities inspired skepticism in their colleagues. These grants were calculated risks. Many of the projects thus supported were unsuccessful; a few have produced major advances in clinical medicine.

When these successes occurred, the Hartford Foundation could have chosen to publicize its role in them. But John and George Hartford disliked publicity. The trustees and staff made this family trait a matter of policy. They believed that being in the public eye was tasteless, a waste of time, and likely to produce an excess of grant requests unmanageable by a small staff. As a result, the pool of grant applicants was limited largely to those who heard about the Foundation by word of mouth — from past grantees or consultants.

Probably the truth is more complicated; I haven’t investigated the foundation’s history closely. Note also that the foundation seems to have cared a lot about the overhead ratio, whereas today’s effective altruists tend to think overhead ratio considerations should be subordinate to impact per dollar.

Have any of my readers heard of any other charitable foundations aspiring to be (roughly) cause-neutral and utilitarian in their approach?

Bill Koch, romancer

Pretty sure my friends’ nerdy-romantic messages are cleverer than Bill Koch’s:

[Bill Koch’s lover] referred to herself in a separate fax as a “wet orchid” who yearned for warm honey to be drizzled on her body. In another, she wrote: “My poor nerve endings are already hungry. You are creating such a wanton woman. I can feel those kisses, and every inch of my body misses you.”

Bill’s far-less-sensuous facsimiles displayed the MIT-trained engineer’s geeky side: “I cannot describe how much I look forward to seeing you again,” he wrote. “It is beyond calculation by the largest computers.” In another fax, he jotted an equation to express his devotion, ending with a hand-drawn heart and, within it, the mathematical symbol for infinity.

Friedman on economics chairs

Funny comment in a 1990 letter penned by Milton Friedman, quoted in Blundell (2007), p. 47:

I have personally been impressed by the extent to which the growing acceptability of free private-market ideas has produced a lowering of the average intellectual quality of those who espouse those ideas. This is inevitable, but I believe it has been fostered by… the creation of free-enterprise chairs of economics. I believe that they are counterproductive.

Scott Aaronson on order and chaos

Yup:

One of my first ideas was to write about the Second Law of Thermodynamics [in response to Edge.org’s Annual Question], and to muse about how one of humanity’s tragic flaws is to take for granted the gargantuan effort needed to create and maintain even little temporary pockets of order. Again and again, people imagine that, if their local pocket of order isn’t working how they want, then they should smash it to pieces, since while admittedly that might make things even worse, there’s also at least 50/50 odds that they’ll magically improve. In reasoning thus, people fail to appreciate just how exponentially more numerous are the paths downhill, into barbarism and chaos, than are the few paths further up. So thrashing about randomly, with no knowledge or understanding, is statistically certain to make things worse: on this point thermodynamics, common sense, and human history are all in total agreement. The implications of these musings for the present would be left as exercises for the reader.
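Aaronson’s “exponentially more numerous paths downhill” is literal combinatorics. A minimal sketch, using 100 binary sites as a stand-in for any system where “ordered” means few sites out of place:

```python
from math import comb

n = 100  # binary sites; the all-zeros string is the maximally ordered state

near_order = comb(n, 2)     # arrangements with just 2 sites out of place: 4,950
max_disorder = comb(n, 50)  # arrangements with 50 sites out of place: ~1.01e29

# A change made "with no knowledge or understanding" is a random jump in
# this space, and so lands among the astronomically more numerous
# disordered arrangements almost surely.
print(max_disorder / near_order)  # ~2e25 times more ways to be disordered
```

This is the standard microstate-counting argument behind the Second Law: entropy rises not because disorder is favored by any force, but because there are overwhelmingly more ways to be disordered than ordered.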

Or, in cartoon form:

[cartoon: “different”]

So apparently this is why we have positive psychology but not evidence-based psychological treatment

Here’s Marty Seligman, past president of the American Psychological Association (APA):1

APA presidents are supposed to have an initiative and… I thought mine could be “evidence-based treatment and prevention.” So I went to my friend, Steve Hyman, the director of [National Institute of Mental Health]. He was thrilled and told me he would chip in $40 million dollars if I could get APA working on evidence-based treatment.

So I told CAPP [which owns the APA] about my plan and about NIMH’s willingness. I felt the room get chillier and chillier. I rattled on. Finally, the chair of CAPP memorably said, “What if the evidence doesn’t come out in our favor?”

…I limped my way to [my friend’s] office for some fatherly advice.

“Marty,” he opined, “you are trying to be a transactional president. But you cannot out-transact these people…”

And so I proposed that Psychology turn its… attention away from pathology and victimology and more toward what makes life worth living: positive emotion, positive character, and positive institutions. I never looked back and this became my mission for the next fifteen years. The endeavor… caught on.

My post title is sort-of joking. Others have pushed on evidence-based psychology while Seligman focused on positive psychology, and Seligman certainly wouldn’t say that we “don’t have” evidence-based psychological treatment. But I do maintain that evidence-based psychology is not yet as well-developed as evidence-based medicine, even given EBM’s many problems.

  1. From his chapter in Sternberg et al. (2016).

Karpathy on nukes

OpenAI deep learning researcher Andrej Karpathy on Rhodes’ Making of the Atomic Bomb:

Unfortunately, we live in a universe where the laws of physics feature a strong asymmetry in how difficult it is to create and to destroy. This observation is also not reserved to nuclear weapons – more generally, technology monotonically increases the possible destructive damage per person per dollar. This is my favorite resolution to the Fermi paradox.

As I am a scientist myself, I was particularly curious about the extent to which the nuclear scientists who conceived and designed the bomb influenced the ethical/political discussions. Unfortunately, it is clearly the case that the scientists were quickly marginalized and, in effect, told to shut up and just help build the bomb. From the very start, Roosevelt explicitly wanted policy considerations restricted to a small group that excluded any scientists. As some of the more prominent examples of scientists trying to influence policy, Bohr advocated for establishing an “Open World Consortium” and sharing information about the bomb with the Soviet Union, but this idea was promptly shut down by Churchill. In this case it’s not clear what effect it would have had and, in any case, the Soviets already knew a lot through espionage. Bohr also held the seemingly naive notion that scientists should continue publishing all nuclear research during the second world war as he felt that science should be completely open and rise above national disputes. Szilard strongly opposed this openness internationally, but advocated for more openness within the Manhattan project for sake of efficiency. This outraged Groves who was obsessed with secrecy. In fact, Szilard was almost arrested, suspected to be a spy, and placed under a comical surveillance that mostly uncovered his frequent visits to a chocolate store.

Henry Kissinger on smarter-than-human AI

Henry Kissinger, speaking with The Economist:1

It is undoubtedly the case that modern technology poses challenges to world order and world order stability that are absolutely unprecedented. Climate change is one of them. I personally believe that artificial intelligence is a crucial one, lest we wind up… creating instruments in relation to which we are like the Incas to the Spanish, [such that] our own creations have a better capacity to calculate than we do. It’s a problem we need to understand on a global basis.

For reference, here is Wikipedia on the Spanish conquest of the Inca empire.

Henry Kissinger also addressed artificial intelligence in a recent interview with The Atlantic, though in this case he probably was not referring to smarter-than-human AI:

A military conflict between [China and the USA], given the technologies they possess, would be calamitous. Such a conflict would force the world to divide itself. And it would end in destruction, but not necessarily in victory, which would likely prove too difficult to define. Even if we could define victory, what in the wake of utter destruction could the victor demand of the loser? I am speaking of not merely the force of our weapons, but the unknowability of the consequences of some of them, such as cyberweapons. Traditional arms-control negotiations necessitated that each side tell the other what its capabilities were as a prelude to limiting those capacities. Yet with cyber, each country will be extremely reluctant to let others know its capabilities. Thus, there is no self-evident negotiated way to contain cyberwarfare. And artificial intelligence compounds this problem. Machines that can learn from their own experience and communicate with one another on their own raise both a practical and a moral imperative to find a way to keep mankind from destroying itself. The United States and China must strive to come to an understanding about the nature of their co-evolution.

  1. See here, from 25:10-25:50. []

How German nuclear scientists reacted to the news of Hiroshima

As part of Operation Epsilon, captured German nuclear physicists were secretly recorded at Farm Hall, a house in England where they were interned. Here’s how the German scientists reacted to the news (on August 6th, 1945) that an atomic bomb had been dropped on Hiroshima, taken from the now-declassified transcripts (pp. 116-122 of this copy):

Otto Hahn (co-discoverer of nuclear fission): I don’t believe it… They are 50 years further advanced than we.

Werner Heisenberg (leading figure of the German atomic bomb effort): I don’t believe a word of the whole thing. They must have spent the whole of their £500,000,000 in separating isotopes: and then it is possible.

In a margin note, the editor points out: “Heisenberg’s figure of £500 million is accurate. At the then-official exchange rate it is equal to $2 billion. President Truman’s account of the expense, released on August 6, stated: ‘We spent $2,000,000,000 on the greatest scientific gamble in history — and won.’ …Isotope separation accounted for a large share but by no means the whole of that…”

Hahn: I didn’t think it would be possible for another 20 years.

Karl Wirtz (head of reactor construction at a German physics institute): I’m glad we didn’t have it.

Carl Friedrich von Weizsäcker (theoretical physicist): I think it is dreadful of the Americans to have done it. I think it is madness on their part.

Heisenberg: One can’t say that. One could equally well say “That’s the quickest way of ending the war.”

Hahn: That’s what consoles me.

Heisenberg: I don’t believe a word about the bomb but I may be wrong…

Hahn: Once I wanted to suggest that all uranium should be sunk to the bottom of the ocean. I always thought that one could only make a bomb of such a size that a whole province would be blown up.

Weizsäcker: How many people were working on V1 and V2?

Kurt Diebner (physicist and organizer of the German Army’s fission project): Thousands worked on that.

Heisenberg: We wouldn’t have had the moral courage to recommend to the government in the spring of 1942 that they should employ 120,000 men just for building the thing up.

Weizsäcker: I believe the reason we didn’t do it was because all the physicists didn’t want to do it, on principle. If we had all wanted Germany to win the war we would have succeeded.

Hahn: I don’t believe that but I am thankful we didn’t succeed.

There is much more of interest in these transcripts. It is fascinating to eavesdrop on leading scientists' unfiltered comments as they realize how badly their team had been beaten to the finish line, and that the whole world had stepped from one era into another.

Hanson on intelligence explosion, from Age of Em

Economist Robin Hanson is among the best-informed critics of what he calls a "local" intelligence explosion. He's written on the topic many times before (most of it collected here), but here's one more take, from Age of Em:

…some people foresee a rapid local “intelligence explosion” happening soon after a smart AI system can usefully modify its own mental architecture…

In a prototypical local explosion scenario, a single AI system with a supporting small team starts with resources that are tiny on a global scale. This team finds and then applies a big innovation in AI software architecture to its AI system, which allows this team plus AI combination to quickly find several related innovations. Together this innovation set allows this AI to quickly become more effective than the entire rest of the world put together at key tasks of theft or innovation.

That is, even though an entire world economy outside of this team, including other AIs, works to innovate, steal, and protect itself from theft, this one small AI team becomes vastly better at some combination of (1) stealing resources from others, and (2) innovating to make this AI “smarter,” in the sense of being better able to do a wide range of mental tasks given fixed resources. As a result of being better at these things, this AI quickly grows the resources that it controls and becomes more powerful than the entire rest of the world economy put together, and so it takes over the world. And all this happens within a space of days to months.

Advocates of this explosion scenario believe that there exists an as-yet-undiscovered but very powerful architectural innovation set for AI system design, a set that one team could find first and then keep secret from others for long enough. In support of this belief, advocates point out that humans (1) can do many mental tasks, (2) beat out other primates, (3) have a common IQ factor explaining correlated abilities across tasks, and (4) display many reasoning biases. Advocates also often assume that innovation is vastly underfunded today, that most economic progress comes from basic research progress produced by a few key geniuses, and that the modest wage gains that smarter people earn today vastly underestimate their productivity in key tasks of theft and AI innovation. In support, advocates often point to familiar myths of geniuses revolutionizing research areas and weapons.

Honestly, to me this local intelligence explosion scenario looks suspiciously like a super-villain comic book plot. A flash of insight by a lone genius lets him create a genius AI. Hidden in its super-villain research lab lair, this genius villain AI works out unprecedented revolutions in AI design, turns itself into a super-genius, which then invents super-weapons and takes over the world. Bwa-ha-ha.

Many arguments suggest that this scenario is unlikely (Hanson and Yudkowsky 2013). Specifically, (1) in 60 years of AI research high-level architecture has only mattered modestly for system performance, (2) new AI architecture proposals are increasingly rare, (3) algorithm progress seems driven by hardware progress (Grace 2013), (4) brains seem like ecosystems, bacteria, cities, and economies in being very complex systems where architecture matters less than a mass of capable detail, (5) human and primate brains seem to differ only modestly, (6) the human primate difference initially only allowed faster innovation, not better performance directly, (7) humans seem to have beat other primates mainly via culture sharing, which has a plausible threshold effect and so doesn’t need much brain difference, (8) humans are bad at most mental tasks irrelevant for our ancestors, (9) many human “biases” are useful adaptations to social complexity, (10) human brain structure and task performance suggest that many distinct modules contribute on each task, explaining a common IQ factor (Hampshire et al. 2012), (11) we expect very smart AI to still display many biases, (12) research today may be underfunded, but not vastly so (Alston et al. 2011; Ulku 2004), (13) most economic progress does not come from basic research, (14) most research progress does not come from a few geniuses, and (15) intelligence is not vastly more productive for research than for other tasks.

(And yes, the entire book is roughly this succinct and dense with ideas.)