How much recent investment in AI?

Stuart Russell:

Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s].

My guess is that Russell doesn’t have a source for this, and that it’s just his impression based on his long history in the field and his knowledge of recent developments. But it might very well be true; I’m not sure.

Also see How Big is the Field of Artificial Intelligence?

New Stephen Hawking talk on the future of AI

At Google Zeitgeist, Hawking said:

Computers are likely to overtake humans in intelligence at some point in the next hundred years. When that happens, we will need to ensure that the computers have goals aligned with ours.

It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

Artificial intelligence research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In the near term, world militaries are considering starting an arms race in autonomous-weapon systems that can choose and eliminate their own targets, while the U.N. is debating a treaty banning such weapons. Autonomous-weapon proponents usually forget to ask the most important question: What is the likely endpoint of an arms race, and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about long-term controllability of ever-more-advanced AI systems, should we arm them, and turn over our defense to them? In 2010, computerized trading systems created the stock market “flash crash.” What would a computer-triggered crash look like in the defense arena? The best time to stop the autonomous-weapons arms race is now.

In the medium term, AI may automate our jobs, to bring both great prosperity and equality.

Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

An explosive transition is possible, although it may play out differently than in the movies. As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity. One can imagine such technology out-smarting financial markets, out-inventing human researchers, out-manipulating human leaders, and potentially subduing us with weapons we cannot even understand.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

In short, the advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity, so we should plan ahead. If a superior alien civilization sent us a text message saying “We’ll arrive in a few decades,” would we just reply, “OK. Call us when you get here. We’ll leave the lights on”? Probably not, but this is more or less what has happened with AI.

Little serious research has been devoted to these issues, outside a few small nonprofit institutes. Fortunately, this is now changing. Technology pioneers Elon Musk, Bill Gates, and Steve Wozniak have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. Many of the world’s leading AI researchers recently signed an open letter calling for the goal of AI to be redefined from simply creating raw, undirected intelligence to creating intelligence directed at benefiting humanity.

The Future of Life Institute, where I serve on the scientific advisory board, has just launched a global research program aimed at keeping AI beneficial.

When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technology such as nuclear weapons, synthetic biology, and strong artificial intelligence, we should instead plan ahead, and aim to get things right the first time, because it may be the only chance we will get.

I’m an optimist, and don’t believe in boundaries, neither for what we can do in our personal lives, nor for what life and intelligence can accomplish in our universe. This means that the brief history of intelligence that I have told you about is not the end of the story, but just the beginning of what I hope will be billions of years of life flourishing in the cosmos.

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.

Cleverness over content

Norman Finkelstein:

Yeah there’s definitely a place for style and creativity… The problem is when… English majors decide they want to do politics and they have no background in the field of inquiry…

…most people don’t [have expertise] and what you have now is versions of George Packer, Paul Berman. There’s just a large number of people who know nothing about politics, don’t even think it’s important to do the research side. They simply substitute the clever turn of phrase. The main exemplar of that in recent times was Christopher Hitchens who really hadn’t a clue what he was talking about. But what he would do is come up with three arcane facts, and with these three arcane facts he would weave a long essay. So people say, oh look at that. They would react in wonder at one or the other pieces of arcana and then take him for a person who is knowledgeable.

People unfortunately don’t care very much about content. They care about cleverness. That’s the basis on which The New York Review of Books recruits its authors, you have to be as they say, a good writer. And the same thing with The New Yorker. Now obviously there’s a great virtue to being a good writer, but not when it’s a substitute for content.

Morris on the great divergence

From Why the West Rules — For Now, ch. 9, on the scientific revolution starting in 17th century Europe:

…contrary to what most of the ancients said, nature was not a living, breathing organism, with desires and intentions. It was actually mechanical. In fact, it was very like a clock. God was a clockmaker, switching on the interlocking gears that made nature run and then stepping back. And if that was so, then humans should be able to disentangle nature’s workings as easily as those of any other mechanism…

…This clockwork model of nature—plus some fiendishly clever experimenting and reasoning—had extraordinary payoffs. Secrets hidden since the dawn of time were abruptly, startlingly, revealed. Air, it turned out, was a substance, not an absence; the heart pumped blood around the body, like a water bellows; and, most bewilderingly, Earth was not the center of the universe.

Simultaneously, in 17th century China:

[A man named Gu Yanwu] turned his back on the metaphysical nitpicking that had dominated intellectual life since the twelfth century and, like Francis Bacon in England, tried instead to understand the world by observing the physical things that real people actually did.

For nearly forty years Gu traveled, filling notebooks with detailed descriptions of farming, mining, and banking. He became famous and others copied him, particularly doctors who had been horrified by their impotence in the face of the epidemics of the 1640s. Collecting case histories of actual sick people, they insisted on testing theories against real results. By the 1690s even the emperor was proclaiming the advantages of “studying the root of a problem, discussing it with ordinary people, and then having it solved.”

Eighteenth-century intellectuals called this approach kaozheng, “evidential research.” It emphasized facts over speculation, bringing methodical, rigorous approaches to fields as diverse as mathematics, astronomy, geography, linguistics, and history, and consistently developing rules for assessing evidence. Kaozheng paralleled western Europe’s scientific revolution in every way—except one: it did not develop a mechanical model of nature.

Like Westerners, Eastern scholars were often disappointed in the learning they had inherited from the last time social development approached the hard ceiling around forty-three points on the index (in their case under the Song dynasty in the eleventh and twelfth centuries). But instead of rejecting its basic premise of a universe motivated by spirit (qi) and imagining instead one running like a machine, Easterners mostly chose to look back to still more venerable authorities, the texts of the ancient Han dynasty.


Forager Violence and Detroit

Figure 2.1 of Foragers, Farmers, and Fossil Fuels:

[Figure 2.1: a map of the locations and social groups mentioned in chapter 2]

Why is Detroit the only city mentioned on a map otherwise dedicated to groups of hunter-gatherers? Is Ian Morris making a joke about Detroit being a neo-primitivist hellscape of poverty and violence?

No, of course not.

Figure 2.1 is just a map of all the locations and social groups mentioned in chapter 2, and it just so happens that Detroit is the only city mentioned. Here’s the context:

Forager bands vary in their use of violence, as they vary in almost everything, but it took anthropologists a long time to realize how rough hunter-gatherers could be. This was not because the ethnographers all got lucky and visited peculiarly peaceful foraging folk, but because the social scale imposed by foraging is so small that even high rates of murder are difficult for outsiders to detect. If a band with a dozen members has a 10 percent rate of violent death, it will suffer roughly one homicide every twenty-five years; and since anthropologists rarely stay in the field for even twenty-five months, they will witness very few violent deaths. It was this demographic reality that led Elizabeth Marshall Thomas to title her sensitive 1959 ethnography of the !Kung The Gentle People — even though their murder rate was much the same as what Detroit would endure at the peak of its crack cocaine epidemic.
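Morris’s numbers check out, by the way, if you grant an implicit assumption about lifespan. Here’s a quick back-of-the-envelope sketch in Python (mine, not Morris’s; the roughly thirty-year life expectancy is my assumed input, a rough figure for forager populations):

```python
# Unpacking Morris's arithmetic. The band size and the 10 percent figure
# are his; the life expectancy is my assumption (he leaves it implicit).
band_size = 12
life_expectancy = 30        # years; a rough figure for forager populations
violent_share = 0.10        # fraction of all deaths that are violent

deaths_per_year = band_size / life_expectancy        # ~0.4 deaths per year
violent_per_year = deaths_per_year * violent_share   # ~0.04 per year
print(f"about one homicide every {1 / violent_per_year:.0f} years")  # ~25
```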

Okay, well… I guess that’s sort of like using Detroit as an example of a neo-primitivist hellscape of poverty and violence.

… and in case you think it’s mean of me to pick on Detroit, I’ll mention in my defense that I thought the first season of Silicon Valley was hilarious (I live in the Bay Area), and my girlfriend decided she couldn’t watch it because it was too painfully realistic.

Effective altruism as opportunity or obligation?

Is effective altruism (EA) an opportunity or an obligation? My sense is that Peter Singer, Oxford EAs, and Swiss EAs tend to think of EA as a moral obligation, while GiveWell and other Bay Area EAs are more likely to see EA as a (non-obligatory) exciting opportunity.

In the Harvard Political Review, Ross Rheingans-Yoo recently presented the “exciting opportunity” flavor of EA:

Effective altruism [for many] is an opportunity and a question (I can help! How and where am I needed?), not an obligation and an ideology (You are a monster unless you help this way!), and it certainly does not demand that you sacrifice your own happiness to utilitarian ends. It doesn’t ask anyone to “give until it hurts”; an important piece of living to help others is setting aside enough money to live comfortably (and happily) first, and not feeling bad about living on that.

I tend to think about EA from the “exciting opportunity” perspective, but I think it’s only fair to remember that there is another major school of thought on this, which does argue for EA as a moral obligation, à la Singer’s famous article “Famine, Affluence, and Morality.”

Musk and Gates on superintelligence and fast takeoff

Recently, Baidu CEO Robin Li interviewed Bill Gates and Elon Musk about a range of topics, including machine superintelligence. Here is a transcript of that section of their conversation:

Li: I understand, Elon, that recently you said artificial intelligence advances are like summoning the demon. That generated a lot of hot debate. Baidu’s chief scientist Andrew Ng recently said… that worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars… He said it’s a distraction to those working on artificial intelligence.

Musk: I think that’s a radically inaccurate analogy, and I know a bit about Mars. The risks of digital superintelligence… and I want you to appreciate that it wouldn’t just be human-level, it would be superhuman almost immediately; it would just zip right past humans to be way beyond anything we could really imagine.

A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

So I’m not against the advancement of AI… but I do think we should be extremely careful. And if that means that it takes a bit longer to develop AI, then I think that’s the right trade-off. We shouldn’t be rushing headlong into something we don’t understand.

Li: Bill, I know you share similar views with Elon, but is there any difference between you and him?

Gates: I don’t think so. I mean he actually put some money out to help get something going on this, and I think that’s absolutely fantastic. For people in the audience who want to read about this, I highly recommend this Bostrom book called Superintelligence…

We have a general-purpose learning algorithm that evolution has endowed us with, and it’s running in an extremely slow computer. Very limited memory size, limited ability to send data to other computers; we have to use this funny mouth thing here… Whenever we build a new one it starts over and it doesn’t know how to walk. So believe me, as soon as this algorithm [points to head], taking experience and turning it into knowledge, which is so amazing and which we have not done in software, as soon as you do that, it’s not clear you’ll even know when you’re just at the human level. You’ll be at the superhuman level almost as soon as that algorithm is implanted in silicon. And actually, as time goes by until that silicon piece is ready to be implanted, the amount of knowledge [keeps growing]; as soon as it has that learning algorithm it just goes out on the Internet and reads all the magazines and books… we have essentially been building the content base for the superintelligence.

So I try not to get too exercised about this but when people say it’s not a problem, then I really start to [shakes head] get to a point of disagreement. How can they not see what a huge challenge this is?

Michael Oppenheimer on how the IPCC got started

Interesting:

US support was probably critical to IPCC’s establishment. And why did the US government support it? Assistant Undersecretary of State Bill Nitze wrote to me a few years later saying that our group’s activities played a significant role. Among other motivations, the US government saw the creation of the IPCC as a way to prevent the activism stimulated by my colleagues and me from controlling the policy agenda.

I suspect that the Reagan Administration believed that, in contrast to our group, most scientists were not activists, and would take years to reach any conclusion on the magnitude of the threat. Even if they did, they probably would fail to express it in plain English. The US government must have been quite surprised when IPCC issued its first assessment at the end of 1990, stating clearly that human activity was likely to produce an unprecedented warming.

The IPCC’s first assessment laid the groundwork for negotiation of the UN Framework Convention on Climate Change (UNFCCC), signed at the Earth Summit in 1992. In a sense, the UNFCCC and its progeny, the Kyoto Protocol, were unintended consequences of the US support for establishment of IPCC – not what the Reagan Administration had in mind!

Modern classical music, summed up in one paragraph

From Burkholder et al.’s History of Western Music, 9th edition (pp. 938-939):

The demands on performers by composers like Berio and Carter [and] Babbitt, Stockhausen, and Boulez, were matched by their demands on listeners. Each piece was difficult to understand in its own right, using a novel musical language even more distant from the staples of the concert repertoire than earlier modernist music had been. Compounding listeners’ difficulties was that each composer and often each piece used a unique approach, so that even after getting to know one such work, encountering the next one could be like starting from scratch.

Harari on the great divergence

Harari again:

… if in 1770 Europeans had no significant technological advantage over Muslims, Indians and Chinese, how did they manage in the following century to open such a gap between themselves and the rest of the world?

… After all, the technology of the first industrial wave was relatively simple. Was it so hard for Chinese or Ottomans to engineer steam engines, manufacture machine guns and lay down railroads?

The world’s first commercial railroad opened for business in 1830, in Britain. By 1850, Western nations were criss-crossed by almost 40,000 kilometres of railroads – but in the whole of Asia, Africa and Latin America there were only 4,000 kilometres of tracks. In 1880, the West boasted more than 350,000 kilometres of railroads, whereas in the rest of the world there were but 35,000 kilometres of train lines (and most of these were laid by the British in India). The first railroad in China opened only in 1876. It was twenty-five kilometres long and built by Europeans – the Chinese government destroyed it the following year. In 1880 the Chinese Empire did not operate a single railroad. The first railroad in Persia was built only in 1888, and it connected Tehran with a Muslim holy site about ten kilometres south of the capital. It was constructed and operated by a Belgian company. In 1950, the total railway network of Persia still amounted to a meagre 2,500 kilometres, in a country seven times the size of Britain.

The Chinese and Persians did not lack technological inventions such as steam engines (which could be freely copied or bought). They lacked the values, myths, judicial apparatus and sociopolitical structures that took centuries to form and mature in the West and which could not be copied and internalised rapidly. France and the United States quickly followed in Britain’s footsteps because the French and Americans already shared the most important British myths and social structures. The Chinese and Persians could not catch up as quickly because they thought and organised their societies differently.

This explanation sheds new light on the period from 1500 to 1850. During this era Europe did not enjoy any obvious technological, political, military or economic advantage over the Asian powers, yet the continent built up a unique potential, whose importance suddenly became obvious around 1850. The apparent equality between Europe, China and the Muslim world in 1750 was a mirage. Imagine two builders, each busy constructing very tall towers. One builder uses wood and mud bricks, whereas the other uses steel and concrete. At first it seems that there is not much of a difference between the two methods, since both towers grow at a similar pace and reach a similar height. However, once a critical threshold is crossed, the wood and mud tower cannot stand the strain and collapses, whereas the steel and concrete tower grows storey by storey, as far as the eye can see.

What potential did Europe develop in the early modern period that enabled it to dominate the late modern world? There are two complementary answers to this question: modern science and capitalism. Europeans were used to thinking and behaving in a scientific and capitalist way even before they enjoyed any significant technological advantages. When the technological bonanza began, Europeans could harness it far better than anybody else.

Chapters 16 and 17 defend this thesis. I don’t quite agree with all of the above, nor entirely with Harari’s version of the thesis, but it might nevertheless be the best popular exposition of it I’ve seen yet.

Harari on what is natural

Harari, Sapiens:

A good rule of thumb is ‘Biology enables, Culture forbids.’ Biology is willing to tolerate a very wide spectrum of possibilities. It’s culture that obliges people to realise some possibilities while forbidding others. Biology enables women to have children – some cultures oblige women to realise this possibility. Biology enables men to enjoy sex with one another – some cultures forbid them to realise this possibility.

Culture tends to argue that it forbids only that which is unnatural. But from a biological perspective, nothing is unnatural. Whatever is possible is by definition also natural. A truly unnatural behaviour, one that goes against the laws of nature, simply cannot exist, so it would need no prohibition. No culture has ever bothered to forbid men to photosynthesise, women to run faster than the speed of light, or negatively charged electrons to be attracted to each other.

In truth, our concepts ‘natural’ and ‘unnatural’ are taken not from biology, but from Christian theology. The theological meaning of ‘natural’ is ‘in accordance with the intentions of the God who created nature’. Christian theologians argued that God created the human body, intending each limb and organ to serve a particular purpose. If we use our limbs and organs for the purpose envisioned by God, then it is a natural activity. To use them differently than God intends is unnatural. But evolution has no purpose. Organs have not evolved with a purpose, and the way they are used is in constant flux. There is not a single organ in the human body that only does the job its prototype did when it first appeared hundreds of millions of years ago. Organs evolve to perform a particular function, but once they exist, they can be adapted for other usages as well. Mouths, for example, appeared because the earliest multicellular organisms needed a way to take nutrients into their bodies. We still use our mouths for that purpose, but we also use them to kiss, speak and, if we are Rambo, to pull the pins out of hand grenades. Are any of these uses unnatural simply because our worm-like ancestors 600 million years ago didn’t do those things with their mouths?


Human progress before the industrial revolution

Despite its prominence in Malthus’s and Cook’s work, social scientists interested in long-term economic history regularly ignore the food/nonfood calories distinction and, focusing solely on food, conclude that between the invention of agriculture more than ten thousand years ago and the industrial revolution two hundred years ago not very much happened. In one of the most widely cited recent discussions, the economic historian Gregory Clark explicitly suggested that “the average person in the world of 1800 [CE] was no better off than the average person of 100,000 BC.”

But this is mistaken. As Malthus recognized, if good weather or advances in technology or organization raised food output, population did tend to expand to consume the surplus, forcing people to consume fewer and cheaper food calories; but despite the downward pressure on per capita food supply, increases in nonfood energy capture have, in the long run, steadily accumulated throughout Holocene times.

Ian Morris, The Measure of Civilization

The plight of shy male nerds in a feminist world

I never had it nearly this bad, but I suspect I know a lot of people who did.

Scott Aaronson on the plight of shy male nerds in a feminist world:

[Amy,] you write about tech conferences in which the men engage in “old-fashioned ass-grabbery.” You add: “some of the gropiest, most misogynistic guys I’ve met have been of the shy and nerdy persuasion … In fact I think a shy/nerdy-normed world would be a significantly worse world for women.”

If that’s been your experience, then I understand how it could reasonably have led you to your views. Of course, other women may have had different experiences.

You also say that men in STEM fields — unlike those in the humanities and social sciences — don’t even have the “requisite vocabulary” to discuss sex discrimination, since they haven’t read enough feminist literature. Here I can only speak for myself: I’ve read at least a dozen feminist books, of which my favorite was Andrea Dworkin’s Intercourse (I like howls of anguish much more than bureaucratic boilerplate, so in some sense, the more radical the feminist, the better I can relate). I check Feministing, and even radfem blogs like “I Blame the Patriarchy.” And yes, I’ve read many studies and task force reports about gender bias, and about the “privilege” and “entitlement” of the nerdy males that’s keeping women away from science.

Alas, as much as I try to understand other people’s perspectives, the first reference to my “male privilege” — my privilege! — is approximately where I get off the train, because it’s so alien to my actual lived experience.

But I suspect the thought that being a nerdy male might not make me “privileged” — that it might even have put me into one of society’s least privileged classes — is completely alien to your way of seeing things. To have any hope of bridging the gargantuan chasm between us, I’m going to have to reveal something about my life, and it’s going to be embarrassing.

(sigh) Here’s the thing: I spent my formative years — basically, from the age of 12 until my mid-20s — feeling not “entitled,” not “privileged,” but terrified. I was terrified that one of my female classmates would somehow find out that I sexually desired her, and that the instant she did, I would be scorned, laughed at, called a creep and a weirdo, maybe even expelled from school or sent to prison. You can call that my personal psychological problem if you want, but it was strongly reinforced by everything I picked up from my environment: to take one example, the sexual-assault prevention workshops we had to attend regularly as undergrads, with their endless lists of all the forms of human interaction that “might be” sexual harassment or assault, and their refusal, ever, to specify anything that definitely wouldn’t be sexual harassment or assault. I left each of those workshops with enough fresh paranoia and self-hatred to last me through another year.

My recurring fantasy, through this period, was to have been born a woman, or a gay man, or best of all, completely asexual, so that I could simply devote my life to math, like my hero Paul Erdős did. Anything, really, other than the curse of having been born a heterosexual male, which for me, meant being consumed by desires that one couldn’t act on or even admit without running the risk of becoming an objectifier or a stalker or a harasser or some other creature of the darkness.

Of course, I was smart enough to realize that maybe this was silly, maybe I was overanalyzing things. So I scoured the feminist literature for any statement to the effect that my fears were as silly as I hoped they were. But I didn’t find any. On the contrary: I found reams of text about how even the most ordinary male/female interactions are filled with “microaggressions,” and how even the most “enlightened” males — especially the most “enlightened” males, in fact — are filled with hidden entitlement and privilege and a propensity to sexual violence that could burst forth at any moment.

Because of my fears—my fears of being “outed” as a nerdy heterosexual male, and therefore as a potential creep or sex criminal—I had constant suicidal thoughts. As Bertrand Russell wrote of his own adolescence: “I was put off from suicide only by the desire to learn more mathematics.”

At one point, I actually begged a psychiatrist to prescribe drugs that would chemically castrate me (I had researched which ones), because a life of mathematical asceticism was the only future that I could imagine for myself. The psychiatrist refused to prescribe them, but he also couldn’t suggest any alternative: my case genuinely stumped him. As well it might—for in some sense, there was nothing “wrong” with me. In a different social context—for example, that of my great-grandparents in the shtetl—I would have gotten married at an early age and been completely fine. (And after a decade of being coy about it, I suppose I’ve finally revealed the meaning of this blog’s title.)

All this time, I faced constant reminders that the males who didn’t spend months reading and reflecting about feminism and their own shortcomings—even the ones who went to the opposite extreme, who engaged in what you called “good old-fashioned ass-grabbery” — actually had success that way. The same girls who I was terrified would pepper-spray me and call the police if I looked in their direction, often responded to the crudest advances of the most Neanderthal of men by accepting those advances. Yet it was I, the nerd, and not the Neanderthals, who needed to check his privilege and examine his hidden entitlement!

So what happened to break me out of this death-spiral? Did I have an epiphany, where I realized that despite all appearances, it was I, the terrified nerd, who was wallowing in unearned male privilege, while those Neanderthal ass-grabbers were actually, on some deeper level, the compassionate feminists—and therefore, that both of us deserved everything we got?

No, there was no such revelation. All that happened was that I got older, and after years of hard work, I achieved some success in science, and that success boosted my self-confidence (at least now I had something worth living for), and the newfound confidence, besides making me more attractive, also made me able to (for example) ask a woman out, despite not being totally certain that my doing so would pass muster with a committee of radfems chaired by Andrea Dworkin—a prospect that was previously unthinkable to me. This, to my mind, “defiance” of feminism is the main reason why I was able to enjoy a few years of a normal, active dating life, which then led to meeting the woman who I married.

Now, the whole time I was struggling with this, I was also fighting a second battle: to maintain the liberal, enlightened, feminist ideals that I had held since childhood, against a powerful current pulling me away from them. I reminded myself, every day, that no, there’s no conspiracy to make the world a hell for shy male nerds. There are only individual women and men trying to play the cards they’re dealt, and the confluence of their interests sometimes leads to crappy outcomes. No woman “owes” male nerds anything; no woman deserves blame if she prefers the Neanderthals; everyone’s free choice demands respect.

That I managed to climb out of the pit with my feminist beliefs mostly intact, you might call a triumph of abstract reason over experience.

But I hope you now understand why I might feel “only” 97% on board with the program of feminism. I hope you understand why, despite my ironclad commitment to women’s reproductive choice and affirmative action and women’s rights in the developing world and getting girls excited about science, and despite my horror at rape and sexual assault and my compassion for the victims of those heinous crimes, I might react icily to the claim—for which I’ve seen not a shred of statistical evidence—that women are being kept out of science by the privileged, entitled culture of shy male nerds, which is worse than the culture of male doctors or male filmmakers or the males of any other profession. I believe you guys call this sort of thing “blaming the victim.” From my perspective, it serves only to shift blame from the Neanderthals and ass-grabbers onto some of society’s least privileged males, the ones who were themselves victims of bullying and derision, and who acquired enough toxic shame that way for appealing to their shame to be an effective way to manipulate their behavior. As I see it, whenever these nerdy males pull themselves out of the ditch the world has tossed them into, while still maintaining enlightened liberal beliefs, including in the inviolable rights of every woman and man, they don’t deserve blame for whatever feminist shortcomings they might still have. They deserve medals at the White House.

And no, I’m not even suggesting to equate the ~15 years of crippling, life-destroying anxiety I went through with the trauma of a sexual assault victim. The two are incomparable; they’re horrible in different ways. But let me draw your attention to one difference: the number of academics who study problems like the one I had is approximately zero. There are no task forces devoted to it, no campus rallies in support of the sufferers, no therapists or activists to tell you that you’re not alone or it isn’t your fault. There are only therapists and activists to deliver the opposite message: that you are alone and it is your privileged, entitled, male fault.

And with that, I guess I’ve laid my life bare to (along with all my other readers) a total stranger on the Internet who hasn’t even given her full name. That’s how much I care about refuting the implied charge of being a misogynistic pig; that’s how deeply it cuts.

Distinguishing Copenhagen and Many Worlds via experiment

Peter McCluskey pointed me to a nice explanation by Brian Greene of an experiment that could theoretically distinguish the Copenhagen and Many Worlds interpretations of quantum mechanics. This is from The Hidden Reality, ch. 8, endnote 12:

Here is a concrete in-principle experiment for distinguishing the Copenhagen and Many Worlds approaches. An electron, like all other elementary particles, has a property known as spin. Somewhat as a top can spin about an axis, an electron can too, with one significant difference being that the rate of this spin—regardless of the direction of the axis—is always the same. It is an intrinsic property of the electron, like its mass or its electrical charge. The only variable is whether the spin is clockwise or counterclockwise about a given axis. If it is counterclockwise, we say the electron’s spin about that axis is up; if it is clockwise, we say the electron’s spin is down. Because of quantum mechanical uncertainty, if the electron’s spin about a given axis is definite—say, with 100 percent certainty its spin is up about the z-axis—then its spin about the x- or y-axis is uncertain: about the x-axis the spin would be 50 percent up and 50 percent down; and similarly for the y-axis.

Imagine, then, starting with an electron whose spin about the z-axis is 100 percent up and then measuring its spin about the x-axis. According to the Copenhagen approach, if you find spin-down, that means the probability wave for the electron’s spin has collapsed: the spin-up possibility has been erased from reality, leaving the sole spike at spin-down. In the Many Worlds approach, by contrast, both the spin-up and spin-down outcomes occur, so, in particular, the spin-up possibility survives fully intact.

To adjudicate between these two pictures, imagine the following. After you measure the electron’s spin about the x-axis, have someone fully reverse the physical evolution. (The fundamental equations of physics, including that of Schrödinger, are time-reversal invariant, which means, in particular, that, at least in principle, any evolution can be undone. See The Fabric of the Cosmos for an in-depth discussion of this point.) Such reversal would be applied to everything: the electron, the equipment, and anything else that’s part of the experiment. Now, if the Many Worlds approach is correct, a subsequent measurement of the electron’s spin about the z-axis should yield, with 100 percent certainty, the value with which we began: spin-up. However, if the Copenhagen approach is correct (by which I mean a mathematically coherent version of it, such as the Ghirardi-Rimini-Weber formulation), we would find a different answer. Copenhagen says that upon measurement of the electron’s spin about the x-axis, in which we found spin-down, the spin-up possibility was annihilated. It was wiped off reality’s ledger. And so, upon reversing the measurement we don’t get back to our starting point because we’ve permanently lost part of the probability wave. Upon subsequent measurement of the electron’s spin about the z-axis, then, there is not 100 percent certainty that we will get the same answer we started with. Instead, it turns out that there’s a 50 percent chance that we will and a 50 percent chance that we won’t. If you were to undertake this experiment repeatedly, and if the Copenhagen approach is correct, on average, half the time you would not recover the same answer you initially did for the electron’s spin about the z-axis. The challenge, of course, is in carrying out the full reversal of a physical evolution. But, in principle, this is an experiment that would provide insight into which of the two theories is correct.

I’m not a physicist, and I don’t know whether this account is correct. Does anyone dispute it?
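That said, the linear algebra behind Greene’s claim is easy to check in a toy model. Below is a minimal numpy sketch (my own construction, not from Greene’s book): the x-axis measurement is modeled as a unitary that entangles the electron with a one-qubit “apparatus.” Reversing that unitary on the full state (the Many Worlds picture) restores the initial spin-up-along-z state exactly, while first discarding the spin-up branch (a collapse picture) leaves only a 50 percent chance of recovering it:

```python
import numpy as np

# Spin states of the electron, written in the z basis.
up_z, down_z = np.array([1., 0.]), np.array([0., 1.])
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# One-qubit "apparatus" that records the x-axis result (0 = up, 1 = down).
a0, a1 = np.array([1., 0.]), np.array([0., 1.])

def outer(ket_out, ket_in):
    return np.outer(ket_out, ket_in.conj())

# The x-axis measurement, modeled as a unitary on electron + apparatus:
# it flips the apparatus bit exactly when the electron is spin-down along x.
U = (outer(np.kron(up_x, a0), np.kron(up_x, a0))
     + outer(np.kron(up_x, a1), np.kron(up_x, a1))
     + outer(np.kron(down_x, a1), np.kron(down_x, a0))
     + outer(np.kron(down_x, a0), np.kron(down_x, a1)))

psi0 = np.kron(up_z, a0)  # spin certainly up along z, apparatus reset

def prob_up_z(psi):
    # Probability of finding spin-up along z, apparatus state ignored.
    return sum(abs(np.kron(up_z, a).conj() @ psi) ** 2 for a in (a0, a1))

# Many Worlds: measure, then reverse the whole evolution unitarily.
psi_mw = U.conj().T @ (U @ psi0)
print(f"Many Worlds: P(up_z) = {prob_up_z(psi_mw):.2f}")  # 1.00

# Copenhagen: after finding spin-down along x, the spin-up branch is erased.
psi_meas = U @ psi0
branch = outer(np.kron(down_x, a1), np.kron(down_x, a1)) @ psi_meas
branch = branch / np.linalg.norm(branch)  # collapse and renormalize
psi_cop = U.conj().T @ branch             # "reverse" what is left
print(f"Copenhagen:  P(up_z) = {prob_up_z(psi_cop):.2f}")  # 0.50
```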

Further references on the subject are at Wikipedia.

In any case, such an experiment seems far beyond our reach. But since I’m Bayesian rather than Popperian, I put substantially more probability mass on MWI than on Copenhagen even in the absence of a definitive experiment. 😉

Lessons from Poor Economics

Poor Economics is the best book I’ve read on poverty reduction. The book ends with a summary of its key lessons. Here they are:

As this book has shown, although we have no magic bullets to eradicate poverty, no one-shot cure-all, we do know a number of things about how to improve the lives of the poor. In particular, five key lessons emerge.

First, the poor often lack critical pieces of information and believe things that are not true. They are unsure about the benefits of immunizing children; they think there is little value in what is learned during the first few years of education; they don’t know how much fertilizer they need to use; they don’t know which is the easiest way to get infected with HIV; they don’t know what their politicians do when in office. When their firmly held beliefs turn out to be incorrect, they end up making the wrong decision, sometimes with drastic consequences — think of the girls who have unprotected sex with older men or the farmers who use twice as much fertilizer as they should. Even when they know that they don’t know, the resulting uncertainty can be damaging. For example, the uncertainty about the benefits of immunization combines with the universal tendency to procrastinate, with the result that a lot of children don’t get immunized. Citizens who vote in the dark are more likely to vote for someone of their ethnic group, at the cost of increasing bigotry and corruption.

We saw many instances in which a simple piece of information makes a big difference. However, not every information campaign is effective. It seems that in order to work, an information campaign must have several features: It must say something that people don’t already know (general exhortations like “No sex before marriage” seem to be less effective); it must do so in an attractive and simple way (a film, a play, a TV show, a well-designed report card); and it must come from a credible source (interestingly, the press seems to be viewed as credible). One of the corollaries of this view is that governments pay a huge cost in terms of lost credibility when they say things that are misleading, confusing, or false.

Second, the poor bear responsibility for too many aspects of their lives. The richer you are, the more the “right” decisions are made for you. The poor have no piped water, and therefore do not benefit from the chlorine that the city government puts into the water supply. If they want clean drinking water, they have to purify it themselves. They cannot afford ready-made fortified breakfast cereals and therefore have to make sure that they and their children get enough nutrients. They have no automatic way to save, such as a retirement plan or a contribution to Social Security, so they have to find a way to make sure that they save. These decisions are difficult for everyone because they require some thinking now or some other small cost today, and the benefits are usually reaped in the distant future. As such, procrastination very easily gets in the way. For the poor, this is compounded by the fact that their lives are already much more demanding than ours: Many of them run small businesses in highly competitive industries; most of the rest work as casual laborers and need to constantly worry about where their next job will come from. This means that their lives could be significantly improved by making it as easy as possible to do the right thing — based on everything else we know — using the power of default options and small nudges: Salt fortified with iron and iodine could be made cheap enough that everyone buys it. Savings accounts, the kind that make it easy to put in money and somewhat costlier to take it out, can be made easily available to everyone, if need be, by subsidizing the cost for the bank that offers them. Chlorine could be made available next to every source where piping water is too expensive. There are many similar examples.

Third, there are good reasons that some markets are missing for the poor, or that the poor face unfavorable prices in them. The poor get a negative interest rate from their savings accounts (if they are lucky enough to have an account) and pay exorbitant rates on their loans (if they can get one) because handling even a small quantity of money entails a fixed cost. The market for health insurance for the poor has not developed, despite the devastating effects of serious health problems in their lives, because the limited insurance options that can be sustained in the market (catastrophic health insurance, formulaic weather insurance) are not what the poor want.

In some cases, a technological or an institutional innovation may allow a market to develop where it was missing. This happened in the case of microcredit, which made small loans at more affordable rates available to millions of poor people, although perhaps not the poorest. Electronic money transfer systems (using cell phones and the like) and unique identification for individuals may radically cut the cost of providing savings and remittance services to the poor over the next few years. But we also have to recognize that in some cases, the conditions for a market to emerge on its own are simply not there. In such cases, governments should step in to support the market to provide the necessary conditions, or failing that, consider providing the service themselves.

We should recognize that this may entail giving away goods or services (such as bed nets or visits to a preventive care center) for free or even rewarding people, strange as it might sound, for doing things that are good for them. The mistrust of free distribution of goods and services among various experts has probably gone too far, even from a pure cost-benefit point of view. It often ends up being cheaper, per person served, to distribute a service for free than to try to extract a nominal fee. In some cases, it may involve ensuring that the price of a product sold by the market is attractive enough to allow the market to develop. For example, governments could subsidize insurance premiums, or distribute vouchers that parents can take to any school, private or public, or force banks to offer free “no frills” savings accounts to everyone for a nominal fee. It is important to keep in mind that these subsidized markets need to be carefully regulated to ensure they function well. For example, school vouchers work well when all parents have a way of figuring out the right school for their child; otherwise, they can turn into a way of giving even more of an advantage to savvy parents.

Fourth, poor countries are not doomed to failure because they are poor, or because they have had an unfortunate history. It is true that things often do not work in these countries: Programs intended to help the poor end up in the wrong hands, teachers teach desultorily or not at all, roads weakened by theft of materials collapse under the weight of overburdened trucks, and so forth. But many of these failures have less to do with some grand conspiracy of the elites to maintain their hold on the economy and more to do with some avoidable flaw in the detailed design of policies, and the ubiquitous three Is: ignorance, ideology, and inertia. Nurses are expected to carry out jobs that no ordinary human being would be able to complete, and yet no one feels compelled to change their job description. The fad of the moment (be it dams, barefoot doctors, microcredit, or whatever) is turned into a policy without any attention to the reality within which it is supposed to function. We were once told by a senior government official in India that the village education committees always include the parent of the best student in the school and the parent of the worst student in the school. When we asked how they decided who were the best and worst children, given that there are no tests until fourth grade, she quickly changed subjects. And yet even these absurd rules, once in place, keep going out of sheer inertia.

The good news, if that is the right expression, is that it is possible to improve governance and policy without changing the existing social and political structures. There is tremendous scope for improvement even in “good” institutional environments, and some margin for action even in bad ones. A small revolution can be achieved by making sure that everyone is invited to village meetings; by monitoring government workers and holding them accountable for failures in performing their duties; by monitoring politicians at all levels and sharing this information with voters; and by making clear to users of public services what they should expect—what the exact health center hours are, how much money (or how many bags of rice) they are entitled to.

Finally, expectations about what people are able or unable to do all too often end up turning into self-fulfilling prophecies. Children give up on school when their teachers (and sometimes their parents) signal to them that they are not smart enough to master the curriculum; fruit sellers don’t make the effort to repay their debt because they expect that they will fall back into debt very quickly; nurses stop coming to work because nobody expects them to be there; politicians whom no one expects to perform have no incentive to try improving people’s lives. Changing expectations is not easy, but it is not impossible: After seeing a female pradhan in their village, villagers not only lost their prejudice against women politicians but even started thinking that their daughter might become one, too; teachers who are told that their job is simply to make sure that all the children can read can accomplish that task within the duration of a summer camp. Most important, the role of expectations means that success often feeds on itself. When a situation starts to improve, the improvement itself affects beliefs and behavior. This is one more reason one should not necessarily be afraid of handing things out (including cash) when needed to get a virtuous cycle started.

Stewart and Dawkins on the unintended consequences of powerful technologies

Richard Dawkins and Jon Stewart discussed existential risk on the Sept. 24, 2013 edition of The Daily Show. Here’s how it went down:

STEWART: Here’s my proposal… for the discussion tonight. Do you believe that the end of our civilization will be through religious strife or scientific advancement? What do you think in the long run will be more damaging to our prospects as a human race?

In reply, Dawkins said that Martin Rees (of CSER) thinks humanity has a 50% chance of surviving the 21st century, and one cause for such worry is that powerful technologies could get into the hands of religious fanatics. Stewart replied:

STEWART: … [But] isn’t there a strong probability that we are not necessarily in control of the unintended consequences of our scientific advancement?… Don’t you think it’s even more likely that we will create something [for which] the unintended consequence… is worldwide catastrophe?

DAWKINS: That is possible. It’s something we have to worry about… Science is the most powerful way to do whatever you want to do. If you want to do good, it’s the most powerful way to do good. If you want to do evil, it’s the most powerful way to do evil.

STEWART: … You have nuclear energy and you go this way and you can light the world, but you go this [other] way, and you can blow up the world. It seems like we always try [the blow up the world path] first.

DAWKINS: There is a suggestion that one of the reasons that we don’t detect extraterrestrial civilizations is that when a civilization reaches the point where it could broadcast radio waves that we could pick up, there’s only a brief window before it blows itself up… It takes many billions of years for evolution to reach the point where technology takes off, but once technology takes off, it’s then an eye-blink — by the standards of geological time — before…

STEWART: … It’s very easy to look at the dark side of fundamentalism… [but] sometimes I think we have to look at the dark side of achievement… because I believe the final words that man utters on this Earth will be: “It worked!” It’ll be an experiment that isn’t misused, but will be a rolling catastrophe.

DAWKINS: It’s a possibility, and I can’t deny it. I’m more optimistic than that.

STEWART: … [I think] curiosity killed the cat, and the cat never saw it coming… So how do we put the brakes on our ability to achieve, or our curiosity?

DAWKINS: I don’t think you can ever really stop the march of science in the sense of saying “You’re forbidden to exercise your natural curiosity in science.” You can certainly put the brakes on certain applications. You could stop manufacturing certain weapons. You could have… international agreements not to manufacture certain types of weapons…

Bostrom’s unfinished fable of the sparrows

Nick Bostrom’s new book — Superintelligence: Paths, Dangers, Strategies — was published today in the UK by Oxford University Press. It opens with a fable about some sparrows and an owl:

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.

It is not known how the story ends, but the author dedicates this book to Scronkfinkle and his followers.

For some ideas of what Scronkfinkle and his friends can work on before the others return with an owl egg, see here and here.

The Antikythera Mechanism

From Murray’s Human Accomplishment:

The problem with the standard archaeological account of human accomplishment from [the ancient world] is not that the picture is incomplete (which is inevitable), but that the data available to us leave so many puzzles.

The Antikythera Mechanism is a case in point… The Antikythera Mechanism is a bronze device about the size of a brick. It was recovered in 1901 from the wreck of a trading vessel that had sunk near the southern tip of Greece sometime around –65. Upon examination, archaeologists were startled to discover imprints of gears in the corroded metal. So began a half-century of speculation about what purpose the device might have served.

Finally, in 1959, science historian Derek de Solla Price figured it out: the Antikythera Mechanism was a mechanical device for calculating the positions of the sun and moon. A few years later, improvements in archaeological technology led to gamma radiographs of the Mechanism, revealing 22 gears in four layers, capable of simulating several major solar and lunar cycles, including the 19-year Metonic cycle that brings the phases of the moon back to the same calendar date. What made this latter feat especially astonishing was not just that the Mechanism could reproduce the 235 lunations in the Metonic cycle, but that it used a differential gear to do so. Until then, it was thought that the differential gear had been invented in 1575.
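For the curious, the Metonic coincidence the Mechanism exploits is easy to verify with modern values for the lengths of the year and the lunar month (the constants below are mine, not Murray’s):

```python
# The Metonic cycle: 19 solar years almost exactly equal 235 lunar months,
# so the moon's phases recur on the same calendar dates every 19 years.
tropical_year = 365.2422   # days (modern value)
synodic_month = 29.53059   # days, new moon to new moon (modern value)

span_years = 19 * tropical_year      # ~6939.60 days
span_moons = 235 * synodic_month     # ~6939.69 days
mismatch_hours = abs(span_years - span_moons) * 24

print(f"19 years = {span_years:.2f} d, 235 lunations = {span_moons:.2f} d")
print(f"mismatch: about {mismatch_hours:.0f} hours every 19 years")  # ~2
```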

See also Wikipedia.