Rapoport’s First Rule and Efficient Reading

Philosopher Daniel Dennett advocates following “Rapoport’s Rules” when writing critical commentary. He summarizes the first of Rapoport’s Rules this way:

You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”

If you’ve read many scientific and philosophical debates, you’re aware that this rule is almost never followed. And in many cases it may be inappropriate, or not worth the cost, to follow it. But for someone like me, who spends a lot of time trying to quickly form initial impressions about the state of various scientific or philosophical debates, it can be incredibly valuable and time-saving to find a writer who follows Rapoport’s First Rule, even if I end up disagreeing with that writer’s conclusions.

One writer who, in my opinion, seems to follow Rapoport’s First Rule unusually well is Dennett’s “arch-nemesis” on the topic of consciousness, the philosopher David Chalmers. Amazingly, even Dennett seems to think that Chalmers embodies Rapoport’s First Rule. Dennett writes:

Chalmers manifestly understands the arguments [for and against type-A materialism, which is Dennett’s view]; he has put them as well and as carefully as anybody ever has… he has presented excellent versions of [the arguments for type-A materialism] himself, and failed to convince himself. I do not mind conceding that I could not have done as good a job, let alone a better job, of marshaling the grounds for type-A materialism. So why does he cling like a limpet to his property dualism?

As far as I can tell, Dennett is saying “Thanks, Chalmers, I wish I’d thought of putting the arguments for my view that way.”

And because of Chalmers’ clarity and fairness, I have found Chalmers’ writings on consciousness to be more efficiently informative than Dennett’s, even though my own current best-guesses about the nature of consciousness are much closer to Dennett’s than to Chalmers’.

Contrast this with what I find to be more typical in the consciousness literature (and in many other literatures), which is for an article’s author(s) to present as many arguments as they can think of for their own view, and downplay or mischaracterize or not-even-mention the arguments against their view.

I’ll describe one example, without naming names. I recently read two papers, each of which had a section discussing the evidence for or against the “cortex-required view,” which is the view that a cortex is required for phenomenal consciousness. (I’ll abbreviate it as “CRV.”)

The pro-CRV paper is written as though it’s a closed case that a cortex is required for consciousness, and it doesn’t cite any of the literature suggesting the opposite. Meanwhile, the anti-CRV paper is written as though it’s a closed case that a cortex isn’t required for consciousness, and it doesn’t cite any literature suggesting that it is required. Their respective passages on CRV cite literally zero of the same sources. Each paper proceeds as though the entire body of literature cited by the other paper just doesn’t exist.

If you happened to read only one of these papers, you’d come away with a very skewed view of the likelihood of the cortex-required view. You might realize later how skewed that view is, but if you’re reading only a few papers on the topic so as to form an impression quickly, you might not.

So here’s one tip for digging through some literature quickly: try to find out which expert(s) on that topic, if any, seem to follow Rapoport’s First Rule — even if you don’t find their conclusions compelling.

Seeking case studies in scientific reduction and conceptual evolution

Tim Minchin once said “Every mystery ever solved has turned out to be not magic.” One thing I want to understand better is “How, exactly, has that happened in history? In particular, how have our naive pre-scientific concepts evolved in response to, or been eliminated by, scientific progress?”

Examples: What is the detailed story of how “water” came to be identified with H2O? How did our concept of “heat” evolve over time, including e.g. when we split it off from our concept of “temperature”? What is the detailed story of how “life” came to be identified with a large set of interacting processes, with unclear edge cases such as viruses decided only by convention? What is the detailed story of how “soul” was eliminated from our scientific ontology, rather than being remapped onto something “conceptually close” to our earlier conception of it but which actually exists?

I wish there were a handbook of detailed case studies in scientific reductionism from a variety of scientific disciplines, but I haven’t found any such book yet. The documents I’ve found that are closest to what I want are perhaps:

Some semi-detailed case studies also show up in Kuhn, Feyerabend, etc., but they are typically buried in a mass of more theoretical discussion. I’d prefer histories that focus on the developments themselves.

Got any such case studies, or collections of case studies, to recommend?

Books, music, etc. from June 2016

Books

  • Mukherjee, The Gene [meh]
  • Balcombe, What a Fish Knows [meh]

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • Haigh, 45 Years (2015)
  • Strong & Lyn, Broadchurch Season 1 (2013)
  • Eggers, The Witch (2015)
  • Haynes, Carol (2015)
  • Nichols, Midnight Special (2016)
  • Various, Game of Thrones, Season 6 (2016)
  • Nemes, Son of Saul (2015)

Media I’m looking forward to, July 2016 edition

Books

* = added this round

Movies

(only including movies which AFAIK have at least started principal photography)

  • Liu, Batman: The Killing Joke (Jul 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • *Malick, Voyage of Time (Oct 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)

Stich on conceptual analysis

Stich (1990), p. 3:

On the few occasions when I have taught the “analysis of knowledge” literature to undergraduates, it has been painfully clear that most of my students had a hard time taking the project seriously. The better students were clever enough to play fill-in-the-blank with ‘S knows that p if and only if _____’ … But they could not, for the life of them, see why anybody would want to do this. It was a source of ill-concealed amazement to these students that grown men and women would indulge in this exercise and think it important — and of still greater amazement that others would pay them to do it! This sort of discontent was all the more disquieting because deep down I agreed with my students. Surely something had gone very wrong somewhere when clever philosophers, the heirs to the tradition of Hume and Kant, devoted their time to constructing baroque counterexamples about the weird ways in which a man might fail to own a Ford… for about as long as I can remember I have had deep…misgivings about the project of analyzing epistemic notions.

Books, music, etc. from May 2016

Books

  • Christian & Griffiths, Algorithms to Live By
  • Dennett & LaScola, Caught in the Pulpit
  • Carroll, The Big Picture [my comments]
  • de Waal, Are We Smart Enough to Know How Smart Animals Are?
  • Dreger, Galileo’s Middle Finger [a quoted passage]

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • Lanthimos, The Lobster (2015)
  • Jones, The Homesman (2014)
  • Sciamma, Girlhood (2014)
  • Dumont, Li’l Quinquin (2014)
  • Aubier & Patar, Ernest & Celestine (2012)

Media I’m looking forward to, June 2016 edition

Books

* = added this round

Movies

(only including movies which AFAIK have at least started principal photography)

  • Stanton, Finding Dory (Jun 2016)
  • Liu, Batman: The Killing Joke (Jul 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • *Nolan, Dunkirk (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)

Social justice and evidence

Galileo’s Middle Finger has some good coverage of several case studies in politicized science, and ends with a sermon on the importance of evidence to social justice:


When I joined the early intersex-rights movement, although identity activists inside and outside academia were a dime a dozen, it was pretty uncommon to run into evidence-based activists… Today, all over the place, one finds activist groups collecting and understanding data, whether they’re working on climate change or the rights of patients, voters, the poor, LGBT people, or the wrongly imprisoned…

The bad news is that today advocacy and scholarship both face serious threats. As for social activism, while the Internet has made it cheaper and easier than ever to organize and agitate, it also produces distraction and false senses of success. People tweet, blog, post messages on walls, and sign online petitions, thinking somehow that noise is change. Meanwhile, the people in power just wait it out, knowing that the attention deficit caused by Internet overload will mean the mob will move on to the next house tomorrow, sure as the sun comes up in the morning. And the economic collapse of the investigative press caused by that noisy Internet means no one on the outside will follow through to sort it out, to tell us what is real and what is illusory. The press is no longer around to investigate, spread stories beyond the already aware, and put pressure on those in power to do the right thing.

The threats to scholars, meanwhile, are enormous and growing. Today over half of American university faculty are in non-tenure-track jobs. (Most have not consciously chosen to live without tenure, as I have.) Not only are these people easy to get rid of if they make trouble, but they are also typically loaded with enough teaching and committee work to make original scholarship almost impossible… Add to this the often unfair Internet-based attacks on researchers who are perceived as promoting dangerous messages, and what you end up with is regression to the safe — a recipe for service of those already in power.

Perhaps most troubling is the tendency within some branches of the humanities to portray scholarly quests to understand reality as quaint or naive, even colonialist and dangerous. Sure, I know: Objectivity is easily desired and impossible to perfectly achieve, and some forms of scholarship will feed oppression, but to treat those who seek a more objective understanding of a problem as fools or de facto criminals is to betray the very idea of an academy of learners. When I run into such academics — people who will ignore and, if necessary, outright reject any fact that might challenge their ideology, who declare scientific methodologies “just another way of knowing” — I feel this crazy desire to institute a purge… Call me ideological for wanting us all to share a belief in the importance of seeking reliable, verifiable knowledge, but surely that is supposed to be the common value of the learned.

…I want to say to activists: If you want justice, support the search for truth. Engage in searches for truth. If you really want meaningful progress and not just temporary self-righteousness, carpe datum. You can begin with principles, yes, but to pursue a principle effectively, you have to know if your route will lead to your destination. If you must criticize scholars whose work challenges yours, do so on the evidence, not by poisoning the land on which we all live.

…Here’s the one thing I now know for sure after this very long trip: Evidence really is an ethical issue, the most important ethical issue in a modern democracy. If you want justice, you must work for truth.

Naturally, the sermon is more potent if you’ve read the case studies in the book.

The Big Picture

Sean Carroll’s The Big Picture is a pretty decent “worldview naturalism 101” book.

In case there’s a 2nd edition in the future, and in case Carroll cares about the opinions of a professional dilettante (aka a generalist research analyst without even a bachelor’s degree), here are my requests for the 2nd edition:

  • I think Carroll is too quick to say which physicalist approach to phenomenal consciousness is correct, and doesn’t present alternate approaches as compellingly as he could (before explaining why he rejects them). (See especially chs. 41-42.)
  • In the chapter on death, I wish Carroll had acknowledged that neither physics nor naturalism requires that we live lives as short as we now do, and that there are speculative future technological capabilities that might allow future humans (or perhaps some now living) to live very long lives (albeit not infinitely long lives).
  • I wish Carroll had mentioned Tegmark levels, maybe in chs. 25 or 36.

Check the original source

From Segerstrale (2000), p. 27:

In 1984 I was able to shock my class of well-intended liberal students at Smith College by giving them the assignment to compare [Stephan] Chorover’s [critical] representation of passages of [E.O. Wilson’s] Sociobiology with Wilson’s original text. The students, who were deeply suspicious of Wilson and spontaneous champions of his critics, embarked on this homework with gusto. Many students were quite dismayed at their own findings and angry with Chorover. This surely says something, too, about these educated laymen’s relative innocence regarding what can and cannot be done in academia.

I wish this kind of exercise were more common. Another I would suggest is to compare critics’ representations of Dreyfus’ “Alchemy and Artificial Intelligence” with the original text (see here).

The first AI textbook, on the control problem

The earliest introductory AI textbook I know about — excluding mere “paper collections” like Computers and Thought (1963) — is Jackson’s Introduction to Artificial Intelligence (1974).

It discusses AGI and the control problem starting on page 394:

If [AI] research is unsuccessful at producing a general artificial intelligence, over a period of more than a hundred years, then its failure may raise some serious doubt among many scientists as to the finite describability of man and his universe. However, the evidence presented in this book makes it seem likely that artificial intelligence research will be successful, that a technology will be developed which is capable of producing machines that can demonstrate most, if not all, of the mental abilities of human beings. Let us therefore assume that this will happen, and imagine two worlds that might result.

[First,] …It is not difficult to envision actualities in which an artificial intelligence would exert control over human beings, yet be out of their control.

Given that intelligent machines are to be used, the question of their control and noncontrol must be answered. If a machine is programmed to seek certain ends, how are we to insure that the means it chooses to employ are agreeable to people? A preliminary solution to the problem is given by the fact that we can specify state-space problems to require that their solution paths shall not pass through certain states (see Chapter 3). However, the task of giving machines more sophisticated value systems, and especially of making them ‘ethical,’ has not yet been investigated by AI researchers…

The question of control should be coupled with the ‘lack of understanding’ question; that is, the possibility exists that intelligent machines might be too complicated for us to understand in situations that require real-time analyses (see the discussion of evolutionary programs in Chapter 8). We could conceivably always demand that a machine give a complete output of its reasoning on a problem; nevertheless that reasoning might not be effectively understandable to us if the problem itself were to determine a time limit for producing a solution. In such a case, if we were to act rationally, we might have to follow the machine’s advice without understanding its ‘motives’…

It has been suggested that an intelligent machine might arise accidentally, without our knowledge, through some fortuitous interconnection of smaller machines (see Heinlein, 1966). If the smaller machines each helped to control some aspect of our economy or defense, the accidental intelligence might well act as a dictator… It seems highly unlikely that this will happen, especially if we devote sufficient time to studying the non-accidental systems we implement.

A more significant danger is that artificial intelligence might be used to further the interests of human dictators. A limited supply of intelligent machines in the hands of a human dictator might greatly increase his power over other human beings, perhaps to the extent of giving him complete censorship and supervision of the public…

Let us now paint another, more positive picture of the world that might result from artificial intelligence research… It is a world in which man and his machines have reached a state of symbiosis…

The benefits humanity might gain from achieving such a symbiosis are enormous. As mentioned [earlier], it may be possible for artificial intelligence to greatly reduce the amount of human labor necessary to operate the economy of the world… Computers and AI research may play an important part in helping to overcome the food, population, housing, and other crises that currently grip the earth… Artificial intelligence may eventually be used to… partially automate the development of science itself… Perhaps artificial intelligence will someday be used in automatic teachers… and perhaps mechanical translators will someday be developed which will fluently translate human languages. And (very perhaps) the day may eventually come when the ‘household robot’ and the ‘robot chauffeur’ will be a reality…

In some ways it is reassuring that the progress in artificial intelligence research is proceeding at a relatively slow but regular pace. It should be at least a decade before any of these possibilities becomes an actuality, which will give us some time to consider in more detail the issues involved.
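Jackson’s “preliminary solution” — specifying state-space problems so that solution paths cannot pass through certain states — can be sketched in a few lines. This is my own illustrative sketch, not code from the textbook; the graph, state names, and function name are invented. It’s just a breadth-first search that treats forbidden states as if they were absent from the graph, so any plan it returns is guaranteed not to pass through a disallowed state:

```python
from collections import deque

def search_avoiding(graph, start, goal, forbidden):
    """Breadth-first search that never visits states in `forbidden`.

    Returns a shortest path from start to goal that avoids all
    forbidden states, or None if no such path exists.
    """
    if start in forbidden:
        return None
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            # Skipping forbidden states here is the whole "constraint":
            # no solution path can ever include them.
            if nxt not in visited and nxt not in forbidden:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no acceptable path exists

# A toy state space: the shortest route A->D goes through C,
# but C is disallowed, so the search detours through B.
graph = {"A": ["C", "B"], "B": ["D"], "C": ["D"]}
print(search_avoiding(graph, "A", "D", forbidden={"C"}))  # ['A', 'B', 'D']
```

Of course, as Jackson notes, ruling out individual states is a far cry from giving machines “sophisticated value systems” — the constraint only works when you can enumerate the bad states in advance.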

“Beyond the scope of this paper”

From AI scientist Drew McDermott, in 1976:

In this paper I have criticized AI researchers very harshly. Let me express my faith that people in other fields would, on inspection, be found to suffer from equally bad faults. Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.

Books, music, etc. from April 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

Media I’m looking forward to, May 2016 edition

Books

* = added this round

Movies

(only including movies which AFAIK have at least started principal photography)

  • Russo & Russo, Captain America: Civil War (May 2016)
  • Stanton, Finding Dory (Jun 2016)
  • Liu, Batman: The Killing Joke (Jul 2016)
  • Cianfrance, The Light Between Oceans (Sep 2016)
  • Lonergan, Manchester by the Sea (Nov 2016)
  • Nichols, Loving (Nov 2016)
  • Scorsese, Silence (Nov 2016)
  • Edwards, Rogue One (Dec 2016)
  • Villeneuve, Story of Your Life (TBD 2016)
  • Dardenne brothers, The Unknown Girl (TBD 2016)
  • Swanberg, Win It All (TBD 2016)
  • Farhadi, The Salesman (TBD 2016)
  • Reeves, War for the Planet of the Apes (Jul 2017)
  • Unkrich, Coco (Nov 2017)
  • Johnson, Star Wars: Episode VIII (Dec 2017)
  • Payne, Downsizing (Dec 2017)

Scaruffi on art music

From the preface to his in-progress history of avant-garde music:

Art Music (or Sound Art) differs from Commercial Music the way a Monet painting differs from IKEA furniture. Although the border is frequently fuzzy, there are obvious differences in the lifestyles and careers of the practitioners. Given that Art Music represents (at best) 3% of all music revenues, the question is why anyone would want to be an art musician at all. It is like asking why anyone would want to be a scientist instead of joining a technology startup. There are pros that are not obvious if one only looks at the macroscopic numbers. To start with, not many commercial musicians benefit from that potentially very lucrative market. In fact, the vast majority live a rather miserable existence. Secondly, commercial music frequently implies a lifestyle of time-consuming gigs in unattractive establishments. But fundamentally being an art musician is a different kind of job, more similar to the job of the scientific laboratory researcher (and of the old-fashioned inventor) than to the job of the popular entertainer. The art musician is pursuing a research program that will be appreciated mainly by his peers and by the “critics” (who function as historians of music), not by the public. The art musician is not a product to be sold in supermarkets but an auteur. The goal of an art musician is, first and foremost, to do what s/he feels is important and, secondly, to secure a place in the history of human civilization. Commercial musicians live to earn a good life. Art musicians live to earn immortality. (Ironically, now that we entered the age of the mass market, a pop star may be more likely to earn immortality than the next Beethoven, but that’s another story). 
Art music knows no stylistic boundaries: the division in classical, jazz, rock, hip hop and so forth still makes sense for commercial music (it basically identifies the sales channel) but ever less sense for art music whose production, distribution and appreciation methods are roughly the same regardless of whether the musician studied in a Conservatory, practiced in a loft or recorded at home using a laptop.

Medical ghostwriting

From Mushak & Elliott (2015):

Pharmaceutical companies hire “medical education and communication companies” (MECCs) to create sets of journal articles (and even new journals) designed to place their drugs in a favorable light and to assist in their marketing efforts (Sismondo 2007, 2009; Elliott 2010). These articles are frequently submitted to journals under the names of prominent academic researchers, but the articles are actually written by employees of the MECCs (Sismondo 2007, 2009). While it is obviously difficult to determine what proportion of the medical literature is produced in this fashion, one study used information uncovered in litigation to determine that more than half of the articles published on the antidepressant Zoloft between 1998 and 2000 were ghostwritten (Healy and Cattell 2003). These articles were published in more prestigious journals than the non-ghostwritten articles and were cited five times more often. Significantly, they also painted a rosier picture of Zoloft than the others.

Books, music, etc. from March 2016

Books

Music

Music I most enjoyed discovering this month:

Movies/TV

Ones I really liked, or loved:

  • [none this month]

Some books I’m looking forward to, April 2016 edition

* = added this round

CGP Grey on Superintelligence

CGP Grey recommends Nick Bostrom’s Superintelligence:

The reason this book [Superintelligence]… has stuck with me is because I have found my mind changed on this topic, somewhat against my will.

…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.

But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.

…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…

…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just follows them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”

Another interesting section begins at 1:46:35 and runs through about 1:52:00.