Feinstein on the global arms trade

Some quotes from Feinstein’s The Shadow World: Inside the Global Arms Trade.

#1:

The £75m Airbus, painted in the colours of the [Saudi] Prince’s beloved Dallas Cowboys, was a gift from the British arms company BAE Systems. It was a token of gratitude for the Prince’s role, as son of the country’s Defence Minister, in the biggest arms deal the world has seen. The Al Yamamah – ‘the dove’ – deal signed between the United Kingdom and Saudi Arabia in 1985 was worth over £40bn. It was also arguably the most corrupt transaction in trading history. Over £1bn was paid into accounts controlled by Bandar. The Airbus – maintained and operated by BAE at least until 2007 – was a little extra, presented to Bandar on his birthday in 1988.

A significant portion of the more than £1bn was paid into personal and Saudi embassy accounts at the venerable Riggs Bank opposite the White House on Pennsylvania Avenue, Washington DC. The bank of choice for Presidents, ambassadors and embassies had close ties to the CIA, with several bank officers holding full agency security clearance. Jonathan Bush, uncle of the President, was a senior executive of the bank at the time. But Riggs and the White House were stunned by the revelation that from 1999 money had inadvertently flowed from the account of Prince Bandar’s wife to two of the fifteen Saudis among the 9/11 hijackers.


Chomsky on Diaperology

An amusing excerpt from Chomsky’s Understanding Power (footnote also reproduced):

In the late 1940s, the United States just ran [the U.N.] completely — international relations of power were such that the U.S. just gave the orders and everybody followed, because the rest of the world was smashed up and starving after the Second World War. And at the time, everybody here [in the U.S.] loved the U.N., because it always went along with us: every way we told countries to vote, they voted. Actually, when I was a graduate student around 1950, major social scientists, people like Margaret Mead, were trying to explain why the Russians were always saying “no” at the U.N. — because here was the United States putting through these resolutions and everybody was voting “yes,” then the Russians would stand up and say “no.” So of course they went to the experts, the social scientists, to figure it out. And what they came up with was something we used to call “diaperology”; the conclusion was, the reason the Russians always say “no” at the U.N. is because they raise their infants with swaddling clothes… Literally — they raise their infants with swaddling clothes in Russia, so Russians end up very negative, and by the time they make it to the U.N. all they want to do is say “no” all the time. That was literally proposed, people took it seriously, there were articles in the journals about it, and so on.

Morris’ thesis in Foragers, Farmers, and Fossil Fuels

From the opening of chapter 5:

I suggested that modern human values initially emerged somewhere around 100,000 years ago (±50,000 years) as a consequence of the biological evolution of our big, fast brains, and that once we had our big, fast brains, cultural evolution became a possibility too. Because of cultural evolution, human values have mutated rapidly in the last twenty thousand years, and the pace of change has accelerated in the last two hundred years.

I identified three major stages in human values, which I linked to foraging, farming, and fossil-fuel societies. My main point was that in each case, modes of energy capture determined population size and density, which in turn largely determined which forms of social organization worked best, which went on to make certain sets of values more successful and attractive than others.

Foragers, I observed, overwhelmingly lived in small, low-density groups, and generally saw political and wealth hierarchies as bad things. They were more tolerant of gender hierarchy, and (by modern lights) surprisingly tolerant of violence. Farmers lived in bigger, denser communities, and generally saw steep political, wealth, and gender hierarchies as fine. They had much less patience than foragers, though, for interpersonal violence, and restricted its range of legitimate uses more narrowly. Fossil-fuel folk live in bigger, denser communities still. They tend to see political and gender hierarchy as bad things, and violence as particularly evil, but they are generally more tolerant of wealth hierarchies than foragers, although not so tolerant as farmers.

Pinker still confused about AI risk

Ack! Steven Pinker still thinks AI risk worries are worries about malevolent AI, despite multiple attempts to correct his misimpression:

John Lily: Silicon Valley techies are divided about whether to be fearful or dismissive of the idea of new super intelligent AI… How would you approach this issue?

Steven Pinker: …I think it’s a fallacy to conflate the ability to reason and solve problems with the desire to dominate and destroy, which sci-fi dystopias and robots-run-amok plots inevitably do. It’s a projection of evolved alpha-male psychology onto the concept of intelligence… So I don’t think that malevolent robotics is one of the world’s pressing problems.

Will someone please tell him to read… gosh, anything on the issue that isn’t a news story? He could also watch this talk by Stuart Russell if that’s preferable.

Minsky on AI risk in the 80s and 90s

Follow-up to: AI researchers on AI risk; Fredkin on AI risk in 1979.

Marvin Minsky is another AI scientist who has been thinking about AI risk for a long time, at least since the 1980s. Here he is in a 1983 afterword to Vinge’s novel True Names:

The ultimate risk comes when our greedy, lazy, masterminds are able at last to take that final step: to design goal-achieving programs which are programmed to make themselves grow increasingly powerful… It will be tempting to do this, not just for the gain in power, but just to decrease our own human effort in the consideration and formulation of our own desires. If some genie offered you three wishes, would not your first one be, “Tell me, please, what is it that I want the most!” The problem is that, with such powerful machines, it would require but the slightest accident of careless design for them to place their goals ahead of ours, perhaps the well-meaning purpose of protecting us from ourselves, as in With Folded Hands, by Jack Williamson; or to protect us from an unsuspected enemy, as in Colossus by D. F. Jones…

And according to Eric Drexler (2015), Minsky was making the now-standard “dangerous-to-humans resource acquisition is a natural subgoal of almost any final goal” argument at least as early as 1990:

My concerns regarding AI risk, which center on the challenges of long-term AI governance, date from the inception of my studies of advanced molecular technologies, ca. 1977. I recall a later conversation with Marvin Minsky (then chairing my doctoral committee, ca. 1990) that sharpened my understanding of some of the crucial considerations: Regarding goal hierarchies, Marvin remarked that the high-level task of learning language is, for an infant, a subgoal of getting a drink of water, and that converting the resources of the universe into computers is a potential subgoal of a machine attempting to play perfect chess.

 

Fredkin on AI risk in 1979

Recently, Ramez Naam posted What Do AI Researchers Think of the Risks of AI? while guest-blogging at Marginal Revolution. Naam quoted several risk skeptics like Ng and Etzioni, while conspicuously neglecting to mention any prominent AI people who take the risk seriously, such as Russell, Horvitz, and Legg. Scott Alexander at Slate Star Codex replied by quoting several prominent AI scientists past and present who seem to have taken the risk seriously. And let’s not forget that the leading AI textbook, by Russell and Norvig, devotes 3.5 pages to potential existential catastrophe from advanced AI, and cites MIRI’s work specifically.

Luckily we can get a clearer picture of current expert opinion by looking at the results of a recent survey which asked the top 100 most-cited living AI scientists when they thought AGI would arrive, how soon after AGI we’d get superintelligence, and what the likely social impact of superintelligence would be.

But at the moment, I just want to mention one additional computer scientist who seems to have been concerned about AI risk for a long time: Ed Fredkin.

In Pamela McCorduck’s history of the first few decades of AI, Machines Who Think (1979), Fredkin is quoted extensively on AI risk. Fredkin said (ch. 14):

Eventually, no matter what we do there’ll be artificial intelligences with independent goals. I’m pretty much convinced of that. There may be a way to postpone it. There may even be a way to avoid it, I don’t know. But it’s very hard to have a machine that’s a million times smarter than you as your slave.

…And pulling the plug is no way out. A machine that smart could act in ways that would guarantee that the plug doesn’t get pulled under any circumstances, regardless of its real motives — if it has any.

…I can’t persuade anyone else in the field to worry this way… They get annoyed when I mention these things. They have lots of attitudes, of course, but one of them is, “Well yes, you’re right, but it would be a great disservice to the world to mention all this.”…my colleagues only tell me to wait, not to make my pitch until it’s more obvious that we’ll have artificial intelligences. I think by then it’ll be too late. Once artificial intelligences start getting smart, they’re going to be very smart very fast. What’s taken humans and their society tens of thousands of years is going to be a matter of hours with artificial intelligences. If that happens at Stanford, say, the Stanford AI lab may have immense power all of a sudden. It’s not that the United States might take over the world, it’s that Stanford AI Lab might.

…And so what I’m trying to do is take steps to see that… an international laboratory gets formed, and that these ideas get into the minds of enough people. McCarthy, for lots of reasons, resists this idea, because he thinks the Russians would be untrustworthy in such an enterprise, that they’d swallow as much of the technology as they could, contribute nothing, and meanwhile set up a shadow place of their own running at the exact limit of technology that they could get from the joint effort. And as soon as that made some progress, keep it secret from the rest of us so they could pull ahead… Yes, he might be right, but it doesn’t matter. The international laboratory is by far the best plan; I’ve heard of no better plan. I still would like to see it happen: let’s be active instead of passive…

…There are three events of equal importance, if you like. Event one is the creation of the universe. It’s a fairly important event. Event two is the appearance of life. Life is a kind of organizing principle which one might argue against if one didn’t understand enough — shouldn’t or couldn’t happen on thermodynamic grounds, or some such. And, third, there’s the appearance of artificial intelligence. It’s the question which deals with all questions… If there are any questions to be answered, this is how they’ll be answered. There can’t be anything of more consequence to happen on this planet.

Fredkin, now 80, continues to think about AI risk — about the relevance of certification to advanced AI systems, about the race between AI safety knowledge and AI capabilities knowledge, etc. I’d be very curious to learn what Fredkin thinks of the arguments in Superintelligence.

How much recent investment in AI?

Stuart Russell:

Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s].

My guess is that Russell doesn’t have a source for this, and this is just his guess based on his history in the field and his knowledge of what’s been happening lately. But it might very well be true; I’m not sure.

Also see How Big is the Field of Artificial Intelligence?

New Stephen Hawking talk on the future of AI

At Google Zeitgeist, Hawking said:

Computers are likely to overtake humans in intelligence at some point in the next hundred years. When that happens, we will need to ensure that the computers have goals aligned with ours.

It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.

Artificial intelligence research is now progressing rapidly. Recent landmarks such as self-driving cars, a computer winning at Jeopardy!, and the digital personal assistants Siri, Google Now, and Cortana are merely symptoms of an IT arms race fueled by unprecedented investments and building on an increasingly mature theoretical foundation. Such achievements will probably pale against what the coming decades will bring.

The potential benefits are huge; everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list. Success in creating AI would be the biggest event in human history.

Unfortunately, it might also be the last, unless we learn how to avoid the risks.

In the near term, world militaries are considering starting an arms race in autonomous-weapon systems that can choose and eliminate their own targets, while the U.N. is debating a treaty banning such weapons. Autonomous-weapon proponents usually forget to ask the most important question: What is the likely endpoint of an arms race, and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about long-term controllability of ever-more-advanced AI systems, should we arm them, and turn over our defense to them? In 2010, computerized trading systems created the stock market “flash crash.” What would a computer-triggered crash look like in the defense arena? The best time to stop the autonomous-weapons arms race is now.

In the medium-term, AI may automate our jobs, to bring both great prosperity and equality.

Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organized in ways that perform even more advanced computations than the arrangements of particles in human brains.

An explosive transition is possible, although it may play out differently than in the movies. As Irving Good realized in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a singularity. One can imagine such technology out-smarting financial markets, out-inventing human researchers, out-manipulating human leaders, and potentially subduing us with weapons we cannot even understand.

Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

In short, the advent of superintelligent AI would be either the best or the worst thing ever to happen to humanity, so we should plan ahead. If a superior alien civilization sent us a text message saying “We’ll arrive in a few decades,” would we just reply, “OK. Call us when you get here. We’ll leave the lights on”? Probably not, but this is more or less what has happened with AI.

Little serious research has been devoted to these issues, outside a few small nonprofit institutes. Fortunately, this is now changing. Technology pioneers Elon Musk, Bill Gates, and Steve Wozniak have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. Many of the world’s leading AI researchers recently signed an open letter calling for the goal of AI to be redefined from simply creating raw, undirected intelligence to creating intelligence directed at benefiting humanity.

The Future of Life Institute, where I serve on the scientific advisory board, has just launched a global research program aimed at keeping AI beneficial.

When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technology such as nuclear weapons, synthetic biology, and strong artificial intelligence, we should instead plan ahead, and aim to get things right the first time, because it may be the only chance we will get.

I’m an optimist, and don’t believe in boundaries, neither for what we can do in our personal lives, nor for what life and intelligence can accomplish in our universe. This means that the brief history of intelligence that I have told you about is not the end of the story, but just the beginning of what I hope will be billions of years of life flourishing in the cosmos.

Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.

Cleverness over content

Norman Finkelstein:

Yeah there’s definitely a place for style and creativity… The problem is when… English majors decide they want to do politics and they have no background in the field of inquiry…

…most people don’t [have expertise] and what you have now is versions of George Packer, Paul Berman. There’s just a large number of people who know nothing about politics, don’t even think it’s important to do the research side. They simply substitute the clever turn of phrase. The main exemplar of that in recent times was Christopher Hitchens who really hadn’t a clue what he was talking about. But what he would do is come up with three arcane facts, and with these three arcane facts he would weave a long essay. So people say, oh look at that. They would react in wonder at one or the other pieces of arcana and then take him for a person who is knowledgeable.

People unfortunately don’t care very much about content. They care about cleverness. That’s the basis on which The New York Review of Books recruits its authors, you have to be as they say, a good writer. And the same thing with The New Yorker. Now obviously there’s a great virtue to being a good writer, but not when it’s a substitute for content.

Morris on the great divergence

From Why the West Rules — For Now, ch. 9, on the scientific revolution starting in 17th century Europe:

…contrary to what most of the ancients said, nature was not a living, breathing organism, with desires and intentions. It was actually mechanical. In fact, it was very like a clock. God was a clockmaker, switching on the interlocking gears that made nature run and then stepping back. And if that was so, then humans should be able to disentangle nature’s workings as easily as those of any other mechanism…

…This clockwork model of nature—plus some fiendishly clever experimenting and reasoning—had extraordinary payoffs. Secrets hidden since the dawn of time were abruptly, startlingly, revealed. Air, it turned out, was a substance, not an absence; the heart pumped blood around the body, like a water bellows; and, most bewilderingly, Earth was not the center of the universe.

Simultaneously, in 17th century China:

[A man named Gu Yanwu] turned his back on the metaphysical nitpicking that had dominated intellectual life since the twelfth century and, like Francis Bacon in England, tried instead to understand the world by observing the physical things that real people actually did.

For nearly forty years Gu traveled, filling notebooks with detailed descriptions of farming, mining, and banking. He became famous and others copied him, particularly doctors who had been horrified by their impotence in the face of the epidemics of the 1640s. Collecting case histories of actual sick people, they insisted on testing theories against real results. By the 1690s even the emperor was proclaiming the advantages of “studying the root of a problem, discussing it with ordinary people, and then having it solved.”

Eighteenth-century intellectuals called this approach kaozheng, “evidential research.” It emphasized facts over speculation, bringing methodical, rigorous approaches to fields as diverse as mathematics, astronomy, geography, linguistics, and history, and consistently developing rules for assessing evidence. Kaozheng paralleled western Europe’s scientific revolution in every way—except one: it did not develop a mechanical model of nature.

Like Westerners, Eastern scholars were often disappointed in the learning they had inherited from the last time social development approached the hard ceiling around forty-three points on the index (in their case under the Song dynasty in the eleventh and twelfth centuries). But instead of rejecting its basic premise of a universe motivated by spirit (qi) and imagining instead one running like a machine, Easterners mostly chose to look back to still more venerable authorities, the texts of the ancient Han dynasty.


Forager Violence and Detroit

Figure 2.1 of Foragers, Farmers, and Fossil Fuels:

[Figure 2.1 — map not reproduced]

Why is Detroit the only city mentioned on a map otherwise dedicated to groups of hunter-gatherers? Is Ian Morris making a joke about Detroit being a neo-primitivist hellscape of poverty and violence?

No, of course not.

Figure 2.1 is just a map of all the locations and social groups mentioned in chapter 2, and it just so happens that Detroit is the only city mentioned. Here’s the context:

Forager bands vary in their use of violence, as they vary in almost everything, but it took anthropologists a long time to realize how rough hunter-gatherers could be. This was not because the ethnographers all got lucky and visited peculiarly peaceful foraging folk, but because the social scale imposed by foraging is so small that even high rates of murder are difficult for outsiders to detect. If a band with a dozen members has a 10 percent rate of violent death, it will suffer roughly one homicide every twenty-five years; and since anthropologists rarely stay in the field for even twenty-five months, they will witness very few violent deaths. It was this demographic reality that led Elizabeth Marshall Thomas to title her sensitive 1959 ethnography of the !Kung The Harmless People — even though their murder rate was much the same as what Detroit would endure at the peak of its crack cocaine epidemic.
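Morris’s “one homicide every twenty-five years” figure is easy to sanity-check. Here is a minimal back-of-the-envelope sketch of the arithmetic; the thirty-year average lifespan is my assumption, not something Morris states:

```python
# Back-of-the-envelope check of Morris's figure (assumptions are mine, not Morris's):
# a band of 12 foragers, an assumed average lifespan of ~30 years, and a
# "10 percent rate of violent death" read as one death in ten being a homicide.

band_size = 12
avg_lifespan_years = 30        # assumed; the quote doesn't give a death rate
violent_death_share = 0.10     # "a 10 percent rate of violent death"

deaths_per_year = band_size / avg_lifespan_years            # ~0.4 deaths per year
homicides_per_year = deaths_per_year * violent_death_share  # ~0.04 per year
years_per_homicide = 1 / homicides_per_year                 # ~25 years

print(f"Roughly one homicide every {years_per_homicide:.0f} years")
# -> Roughly one homicide every 25 years
```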

Okay, well… I guess that’s sort of like using Detroit as an example of a neo-primitivist hellscape of poverty and violence.

… and in case you think it’s mean of me to pick on Detroit, I’ll mention in my defense that I thought the first season of Silicon Valley was hilarious (I live in the Bay Area), and my girlfriend decided she couldn’t watch it because it was too painfully realistic.

Effective altruism as opportunity or obligation?

Is effective altruism (EA) an opportunity or an obligation? My sense is that Peter Singer, Oxford EAs, and Swiss EAs tend to think of EA as a moral obligation, while GiveWell and other Bay Area EAs are more likely to see EA as a (non-obligatory) exciting opportunity.

In the Harvard Political Review, Ross Rheingans-Yoo recently presented the “exciting opportunity” flavor of EA:

Effective altruism [for many] is an opportunity and a question (I can help! How and where am I needed?), not an obligation and an ideology (You are a monster unless you help this way!), and it certainly does not demand that you sacrifice your own happiness to utilitarian ends. It doesn’t ask anyone to “give until it hurts”; an important piece of living to help others is setting aside enough money to live comfortably (and happily) first, and not feeling bad about living on that.

I tend to think about EA from the “exciting opportunity” perspective, but I think it’s only fair to remember that there is another major school of thought on this, which does argue for EA as a moral obligation, à la Singer’s famous article “Famine, Affluence, and Morality.”

Musk and Gates on superintelligence and fast takeoff

Recently, Baidu CEO Robin Li interviewed Bill Gates and Elon Musk about a range of topics, including machine superintelligence. Here is a transcript of that section of their conversation:

Li: I understand, Elon, that recently you said artificial intelligence advances are like summoning the demon. That generated a lot of hot debate. Baidu’s chief scientist Andrew Ng recently said… that worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars… He said it’s a distraction to those working on artificial intelligence.

Musk: I think that’s a radically inaccurate analogy, and I know a bit about Mars. The risks of digital superintelligence… and I want you to appreciate that it wouldn’t just be human-level, it would be superhuman almost immediately; it would just zip right past humans to be way beyond anything we could really imagine.

A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

So I’m not against the advancement of AI… but I do think we should be extremely careful. And if that means that it takes a bit longer to develop AI, then I think that’s the right trail. We shouldn’t be rushing headlong into something we don’t understand.

Li: Bill, I know you share similar views with Elon, but is there any difference between you and him?

Gates: I don’t think so. I mean he actually put some money out to help get something going on this, and I think that’s absolutely fantastic. For people in the audience who want to read about this, I highly recommend this Bostrom book called Superintelligence…

We have a general purpose learning algorithm that evolution has endowed us with, and it’s running in an extremely slow computer. Very limited memory size, ability to send data to other computers, we have to use this funny mouth thing here… Whenever we build a new one it starts over and it doesn’t know how to walk. So believe me, as soon as this algorithm [points to head], taking experience and turning it into knowledge, which is so amazing and which we have not done in software, as soon as you do that, it’s not clear you’ll even know when you’re just at the human level. You’ll be at the superhuman level almost as soon as that algorithm is implanted, in silicon. And actually as time goes by that silicon piece is ready to be implanted, the amount of knowledge, as soon as it has that learning algorithm it just goes out on the internet and reads all the magazines and books… we have essentially been building the content base for the superintelligence.

So I try not to get too exercised about this but when people say it’s not a problem, then I really start to [shakes head] get to a point of disagreement. How can they not see what a huge challenge this is?

Michael Oppenheimer on how the IPCC got started

Interesting:

US support was probably critical to IPCC’s establishment. And why did the US government support it? Assistant Undersecretary of State Bill Nitze wrote to me a few years later saying that our group’s activities played a significant role. Among other motivations, the US government saw the creation of the IPCC as a way to prevent the activism stimulated by my colleagues and me from controlling the policy agenda.

I suspect that the Reagan Administration believed that, in contrast to our group, most scientists were not activists, and would take years to reach any conclusion on the magnitude of the threat. Even if they did, they probably would fail to express it in plain English. The US government must have been quite surprised when IPCC issued its first assessment at the end of 1990, stating clearly that human activity was likely to produce an unprecedented warming.

The IPCC’s first assessment laid the groundwork for negotiation of the UN Framework Convention on Climate Change (UNFCCC), signed at the Earth Summit in 1992. In a sense, the UNFCCC and its progeny, the Kyoto Protocol, were unintended consequences of the US support for establishment of IPCC – not what the Reagan Administration had in mind!

Modern classical music, summed up in one paragraph

From Burkholder et al.’s History of Western Music, 9th edition (pp. 938-939):

The demands on performers by composers like Berio and Carter [and] Babbitt, Stockhausen, and Boulez, were matched by their demands on listeners. Each piece was difficult to understand in its own right, using a novel musical language even more distant from the staples of the concert repertoire than earlier modernist music had been. Compounding listeners’ difficulties was that each composer and often each piece used a unique approach, so that even after getting to know one such work, encountering the next one could be like starting from scratch.

Harari on the great divergence

Harari again:

… if in 1770 Europeans had no significant technological advantage over Muslims, Indians and Chinese, how did they manage in the following century to open such a gap between themselves and the rest of the world?

… After all, the technology of the first industrial wave was relatively simple. Was it so hard for Chinese or Ottomans to engineer steam engines, manufacture machine guns and lay down railroads?

The world’s first commercial railroad opened for business in 1830, in Britain. By 1850, Western nations were criss-crossed by almost 40,000 kilometres of railroads – but in the whole of Asia, Africa and Latin America there were only 4,000 kilometres of tracks. In 1880, the West boasted more than 350,000 kilometres of railroads, whereas in the rest of the world there were but 35,000 kilometres of train lines (and most of these were laid by the British in India). The first railroad in China opened only in 1876. It was twenty-five kilometres long and built by Europeans – the Chinese government destroyed it the following year. In 1880 the Chinese Empire did not operate a single railroad. The first railroad in Persia was built only in 1888, and it connected Tehran with a Muslim holy site about ten kilometres south of the capital. It was constructed and operated by a Belgian company. In 1950, the total railway network of Persia still amounted to a meagre 2,500 kilometres, in a country seven times the size of Britain.

The Chinese and Persians did not lack technological inventions such as steam engines (which could be freely copied or bought). They lacked the values, myths, judicial apparatus and sociopolitical structures that took centuries to form and mature in the West and which could not be copied and internalised rapidly. France and the United States quickly followed in Britain’s footsteps because the French and Americans already shared the most important British myths and social structures. The Chinese and Persians could not catch up as quickly because they thought and organised their societies differently.

This explanation sheds new light on the period from 1500 to 1850. During this era Europe did not enjoy any obvious technological, political, military or economic advantage over the Asian powers, yet the continent built up a unique potential, whose importance suddenly became obvious around 1850. The apparent equality between Europe, China and the Muslim world in 1750 was a mirage. Imagine two builders, each busy constructing very tall towers. One builder uses wood and mud bricks, whereas the other uses steel and concrete. At first it seems that there is not much of a difference between the two methods, since both towers grow at a similar pace and reach a similar height. However, once a critical threshold is crossed, the wood and mud tower cannot stand the strain and collapses, whereas the steel and concrete tower grows storey by storey, as far as the eye can see.

What potential did Europe develop in the early modern period that enabled it to dominate the late modern world? There are two complementary answers to this question: modern science and capitalism. Europeans were used to thinking and behaving in a scientific and capitalist way even before they enjoyed any significant technological advantages. When the technological bonanza began, Europeans could harness it far better than anybody else.

Chapters 16 and 17 defend this thesis. I don’t quite agree with all of the above, nor do I agree entirely with his version of this thesis, but nevertheless it might be the best popular exposition of the thesis I’ve seen yet.

Harari on what is natural

Harari, Sapiens:

A good rule of thumb is ‘Biology enables, Culture forbids.’ Biology is willing to tolerate a very wide spectrum of possibilities. It’s culture that obliges people to realise some possibilities while forbidding others. Biology enables women to have children – some cultures oblige women to realise this possibility. Biology enables men to enjoy sex with one another – some cultures forbid them to realise this possibility.

Culture tends to argue that it forbids only that which is unnatural. But from a biological perspective, nothing is unnatural. Whatever is possible is by definition also natural. A truly unnatural behaviour, one that goes against the laws of nature, simply cannot exist, so it would need no prohibition. No culture has ever bothered to forbid men to photosynthesise, women to run faster than the speed of light, or negatively charged electrons to be attracted to each other.

In truth, our concepts ‘natural’ and ‘unnatural’ are taken not from biology, but from Christian theology. The theological meaning of ‘natural’ is ‘in accordance with the intentions of the God who created nature’. Christian theologians argued that God created the human body, intending each limb and organ to serve a particular purpose. If we use our limbs and organs for the purpose envisioned by God, then it is a natural activity. To use them differently than God intends is unnatural. But evolution has no purpose. Organs have not evolved with a purpose, and the way they are used is in constant flux. There is not a single organ in the human body that only does the job its prototype did when it first appeared hundreds of millions of years ago. Organs evolve to perform a particular function, but once they exist, they can be adapted for other usages as well. Mouths, for example, appeared because the earliest multicellular organisms needed a way to take nutrients into their bodies. We still use our mouths for that purpose, but we also use them to kiss, speak and, if we are Rambo, to pull the pins out of hand grenades. Are any of these uses unnatural simply because our worm-like ancestors 600 million years ago didn’t do those things with their mouths?

 

Human progress before the industrial revolution

Despite its prominence in Malthus’s and Cook’s work, social scientists interested in long-term economic history regularly ignore the food/nonfood calories distinction and, focusing solely on food, conclude that between the invention of agriculture more than ten thousand years ago and the industrial revolution two hundred years ago not very much happened. In one of the most widely cited recent discussions, the economic historian Gregory Clark explicitly suggested that “the average person in the world of 1800 [CE] was no better off than the average person of 100,000 BC.”

But this is mistaken. As Malthus recognized, if good weather or advances in technology or organization raised food output, population did tend to expand to consume the surplus, forcing people to consume fewer and cheaper food calories; but despite the downward pressure on per capita food supply, increases in nonfood energy capture have, in the long run, steadily accumulated throughout Holocene times.

Ian Morris, The Measure of Civilization