Morris on the great divergence

From Why the West Rules — For Now, ch. 9, on the scientific revolution starting in 17th-century Europe:

…contrary to what most of the ancients said, nature was not a living, breathing organism, with desires and intentions. It was actually mechanical. In fact, it was very like a clock. God was a clockmaker, switching on the interlocking gears that made nature run and then stepping back. And if that was so, then humans should be able to disentangle nature’s workings as easily as those of any other mechanism…

…This clockwork model of nature—plus some fiendishly clever experimenting and reasoning—had extraordinary payoffs. Secrets hidden since the dawn of time were abruptly, startlingly, revealed. Air, it turned out, was a substance, not an absence; the heart pumped blood around the body, like a water bellows; and, most bewilderingly, Earth was not the center of the universe.

Simultaneously, in 17th-century China:

[A man named Gu Yanwu] turned his back on the metaphysical nitpicking that had dominated intellectual life since the twelfth century and, like Francis Bacon in England, tried instead to understand the world by observing the physical things that real people actually did.

For nearly forty years Gu traveled, filling notebooks with detailed descriptions of farming, mining, and banking. He became famous and others copied him, particularly doctors who had been horrified by their impotence in the face of the epidemics of the 1640s. Collecting case histories of actual sick people, they insisted on testing theories against real results. By the 1690s even the emperor was proclaiming the advantages of “studying the root of a problem, discussing it with ordinary people, and then having it solved.”

Eighteenth-century intellectuals called this approach kaozheng, “evidential research.” It emphasized facts over speculation, bringing methodical, rigorous approaches to fields as diverse as mathematics, astronomy, geography, linguistics, and history, and consistently developing rules for assessing evidence. Kaozheng paralleled western Europe’s scientific revolution in every way—except one: it did not develop a mechanical model of nature.

Like Westerners, Eastern scholars were often disappointed in the learning they had inherited from the last time social development approached the hard ceiling around forty-three points on the index (in their case under the Song dynasty in the eleventh and twelfth centuries). But instead of rejecting its basic premise of a universe motivated by spirit (qi) and imagining instead one running like a machine, Easterners mostly chose to look back to still more venerable authorities, the texts of the ancient Han dynasty.

[Read more...]

April links

This is why I consume so many books and articles even though I don’t remember most of their specific content when asked about them. I’m also usually doing a breadth-first search to find candidates that might be worth a deep dive. E.g. I think I found Ian Morris faster than I otherwise would have because I’ve been doing breadth-first search.

Notable lessons (so far) from the Open Philanthropy Project.

A prediction market for behavioral economics replication attempts.

Towards a 21st century orchestral canon. And a playlist resulting from that discussion.


AI stuff

Tegmark, Russell, and Horvitz on the future of AI on Science Friday.

Eric Drexler has a new FHI technical report on superintelligence safety.

Brookings Institution blog post about AI safety, regulation, and superintelligence.

Forager Violence and Detroit

Figure 2.1 of Foragers, Farmers, and Fossil Fuels:

Figure 2.1.

Why is Detroit the only city mentioned on a map otherwise dedicated to groups of hunter-gatherers? Is Ian Morris making a joke about Detroit being a neo-primitivist hellscape of poverty and violence?

No, of course not.

Figure 2.1 is just a map of all the locations and social groups mentioned in chapter 2, and it just so happens that Detroit is the only city mentioned. Here’s the context:

Forager bands vary in their use of violence, as they vary in almost everything, but it took anthropologists a long time to realize how rough hunter-gatherers could be. This was not because the ethnographers all got lucky and visited peculiarly peaceful foraging folk, but because the social scale imposed by foraging is so small that even high rates of murder are difficult for outsiders to detect. If a band with a dozen members has a 10 percent rate of violent death, it will suffer roughly one homicide every twenty-five years; and since anthropologists rarely stay in the field for even twenty-five months, they will witness very few violent deaths. It was this demographic reality that led Elizabeth Marshall Thomas to title her sensitive 1959 ethnography of the !Kung The Gentle People — even though their murder rate was much the same as what Detroit would endure at the peak of its crack cocaine epidemic.

Okay, well… I guess that’s sort of like using Detroit as an example of a neo-primitivist hellscape of poverty and violence.

…and in case you think it’s mean of me to pick on Detroit, I’ll mention in my defense that I thought the first season of Silicon Valley was hilarious (I live in the Bay Area), and my girlfriend decided she couldn’t watch it because it was too painfully realistic.

Effective altruism as opportunity or obligation?

Is effective altruism (EA) an opportunity or an obligation? My sense is that Peter Singer, Oxford EAs, and Swiss EAs tend to think of EA as a moral obligation, while GiveWell and other Bay Area EAs are more likely to see EA as a (non-obligatory) exciting opportunity.

In the Harvard Political Review, Ross Rheingans-Yoo recently presented the “exciting opportunity” flavor of EA:

Effective altruism [for many] is an opportunity and a question (I can help! How and where am I needed?), not an obligation and an ideology (You are a monster unless you help this way!), and it certainly does not demand that you sacrifice your own happiness to utilitarian ends. It doesn’t ask anyone to “give until it hurts”; an important piece of living to help others is setting aside enough money to live comfortably (and happily) first, and not feeling bad about living on that.

I tend to think about EA from the “exciting opportunity” perspective, but I think it’s only fair to remember that there is another major school of thought on this, which does argue for EA as a moral obligation, à la Singer’s famous article “Famine, Affluence, and Morality.”

Musk and Gates on superintelligence and fast takeoff

Recently, Baidu CEO Robin Li interviewed Bill Gates and Elon Musk about a range of topics, including machine superintelligence. Here is a transcript of that section of their conversation:

Li: I understand, Elon, that recently you said artificial intelligence advances are like summoning the demon. That generated a lot of hot debate. Baidu’s chief scientist Andrew Ng recently said… that worrying about the dark side of artificial intelligence is like worrying about overpopulation on Mars… He said it’s a distraction to those working on artificial intelligence.

Musk: I think that’s a radically inaccurate analogy, and I know a bit about Mars. The risks of digital superintelligence… and I want you to appreciate that it wouldn’t just be human-level, it would be superhuman almost immediately; it would just zip right past humans to be way beyond anything we could really imagine.

A more perfect analogy would be if you consider nuclear research, with its potential for a very dangerous weapon. Releasing the energy is easy; containing that energy safely is very difficult. And so I think the right emphasis for AI research is on AI safety. We should put vastly more effort into AI safety than we should into advancing AI in the first place. Because it may be good, or it may be bad. And it could be catastrophically bad if there could be the equivalent to a nuclear meltdown. So you really want to emphasize safety.

So I’m not against the advancement of AI… but I do think we should be extremely careful. And if that means that it takes a bit longer to develop AI, then I think that’s the right trail. We shouldn’t be rushing headlong into something we don’t understand.

Li: Bill, I know you share similar views with Elon, but is there any difference between you and him?

Gates: I don’t think so. I mean he actually put some money out to help get something going on this, and I think that’s absolutely fantastic. For people in the audience who want to read about this, I highly recommend this Bostrom book called Superintelligence…

We have a general purpose learning algorithm that evolution has endowed us with, and it’s running in an extremely slow computer. Very limited memory size, ability to send data to other computers, we have to use this funny mouth thing here… Whenever we build a new one it starts over and it doesn’t know how to walk. So believe me, as soon as this algorithm [points to head], taking experience and turning it into knowledge, which is so amazing and which we have not done in software, as soon as you do that, it’s not clear you’ll even know when you’re just at the human level. You’ll be at the superhuman level almost as soon as that algorithm is implemented in silicon. And actually as time goes by that silicon piece is ready to be implanted, the amount of knowledge, as soon as it has that learning algorithm it just goes out on the internet and reads all the magazines and books… we have essentially been building the content base for the superintelligence.

So I try not to get too exercised about this but when people say it’s not a problem, then I really start to [shakes head] get to a point of disagreement. How can they not see what a huge challenge this is?