A comic about Herman Kahn, near the height of his celebrity:
Galileo’s Middle Finger has some good coverage of several case studies in politicized science, and ends with a sermon on the importance of evidence to social justice:
When I joined the early intersex-rights movement, although identity activists inside and outside academia were a dime a dozen, it was pretty uncommon to run into evidence-based activists… Today, all over the place, one finds activist groups collecting and understanding data, whether they’re working on climate change or the rights of patients, voters, the poor, LGBT people, or the wrongly imprisoned…
The bad news is that today advocacy and scholarship both face serious threats. As for social activism, while the Internet has made it cheaper and easier than ever to organize and agitate, it also produces distraction and false senses of success. People tweet, blog, post messages on walls, and sign online petitions, thinking somehow that noise is change. Meanwhile, the people in power just wait it out, knowing that the attention deficit caused by Internet overload will mean the mob will move on to the next house tomorrow, sure as the sun comes up in the morning. And the economic collapse of the investigative press caused by that noisy Internet means no one on the outside will follow through to sort it out, to tell us what is real and what is illusory. The press is no longer around to investigate, spread stories beyond the already aware, and put pressure on those in power to do the right thing.
The threats to scholars, meanwhile, are enormous and growing. Today over half of American university faculty are in non-tenure-track jobs. (Most have not consciously chosen to live without tenure, as I have.) Not only are these people easy to get rid of if they make trouble, but they are also typically loaded with enough teaching and committee work to make original scholarship almost impossible… Add to this the often unfair Internet-based attacks on researchers who are perceived as promoting dangerous messages, and what you end up with is regression to the safe — a recipe for service of those already in power.
Perhaps most troubling is the tendency within some branches of the humanities to portray scholarly quests to understand reality as quaint or naive, even colonialist and dangerous. Sure, I know: Objectivity is easily desired and impossible to perfectly achieve, and some forms of scholarship will feed oppression, but to treat those who seek a more objective understanding of a problem as fools or de facto criminals is to betray the very idea of an academy of learners. When I run into such academics — people who will ignore and, if necessary, outright reject any fact that might challenge their ideology, who declare scientific methodologies “just another way of knowing” — I feel this crazy desire to institute a purge… Call me ideological for wanting us all to share a belief in the importance of seeking reliable, verifiable knowledge, but surely that is supposed to be the common value of the learned.
…I want to say to activists: If you want justice, support the search for truth. Engage in searches for truth. If you really want meaningful progress and not just temporary self-righteousness, carpe datum. You can begin with principles, yes, but to pursue a principle effectively, you have to know if your route will lead to your destination. If you must criticize scholars whose work challenges yours, do so on the evidence, not by poisoning the land on which we all live.
…Here’s the one thing I now know for sure after this very long trip: Evidence really is an ethical issue, the most important ethical issue in a modern democracy. If you want justice, you must work for truth.
Naturally, the sermon is more potent if you’ve read the case studies in the book.
From Segerstrale (2000), p. 27:
In 1984 I was able to shock my class of well-intended liberal students at Smith College by giving them the assignment to compare [Stephan] Chorover’s [critical] representation of passages of [E.O. Wilson’s] Sociobiology with Wilson’s original text. The students, who were deeply suspicious of Wilson and spontaneous champions of his critics, embarked on this homework with gusto. Many students were quite dismayed at their own findings and angry with Chorover. This surely says something, too, about these educated laymen’s relative innocence regarding what can and cannot be done in academia.
I wish this kind of exercise were more common. Another I would suggest is to compare critics’ representations of Dreyfus’ “Alchemy and Artificial Intelligence” with the original text (see here).
The earliest introductory AI textbook I know about — excluding mere “paper collections” like Computers and Thought (1963) — is Jackson’s Introduction to Artificial Intelligence (1974).
It discusses AGI and the control problem starting on page 394:
If [AI] research is unsuccessful at producing a general artificial intelligence, over a period of more than a hundred years, then its failure may raise some serious doubt among many scientists as to the finite describability of man and his universe. However, the evidence presented in this book makes it seem likely that artificial intelligence research will be successful, that a technology will be developed which is capable of producing machines that can demonstrate most, if not all, of the mental abilities of human beings. Let us therefore assume that this will happen, and imagine two worlds that might result.
[First,] …It is not difficult to envision actualities in which an artificial intelligence would exert control over human beings, yet be out of their control.
Given that intelligent machines are to be used, the question of their control and noncontrol must be answered. If a machine is programmed to seek certain ends, how are we to insure that the means it chooses to employ are agreeable to people? A preliminary solution to the problem is given by the fact that we can specify state-space problems to require that their solution paths shall not pass through certain states (see Chapter 3). However, the task of giving machines more sophisticated value systems, and especially of making them ‘ethical,’ has not yet been investigated by AI researchers…
The question of control should be coupled with the ‘lack of understanding’ question; that is, the possibility exists that intelligent machines might be too complicated for us to understand in situations that require real-time analyses (see the discussion of evolutionary programs in Chapter 8). We could conceivably always demand that a machine give a complete output of its reasoning on a problem; nevertheless that reasoning might not be effectively understandable to us if the problem itself were to determine a time limit for producing a solution. In such a case, if we were to act rationally, we might have to follow the machine’s advice without understanding its ‘motives’…
It has been suggested that an intelligent machine might arise accidentally, without our knowledge, through some fortuitous interconnection of smaller machines (see Heinlein, 1966). If the smaller machines each helped to control some aspect of our economy or defense, the accidental intelligence might well act as a dictator… It seems highly unlikely that this will happen, especially if we devote sufficient time to studying the non-accidental systems we implement.
A more significant danger is that artificial intelligence might be used to further the interests of human dictators. A limited supply of intelligent machines in the hands of a human dictator might greatly increase his power over other human beings, perhaps to the extent of giving him complete censorship and supervision of the public…
Let us now paint another, more positive picture of the world that might result from artificial intelligence research… It is a world in which man and his machines have reached a state of symbiosis…
The benefits humanity might gain from achieving such a symbiosis are enormous. As mentioned [earlier], it may be possible for artificial intelligence to greatly reduce the amount of human labor necessary to operate the economy of the world… Computers and AI research may play an important part in helping to overcome the food, population, housing, and other crises that currently grip the earth… Artificial intelligence may eventually be used to… partially automate the development of science itself… Perhaps artificial intelligence will someday be used in automatic teachers… and perhaps mechanical translators will someday be developed which will fluently translate human languages. And (very perhaps) the day may eventually come when the ‘household robot’ and the ‘robot chauffeur’ will be a reality…
In some ways it is reassuring that the progress in artificial intelligence research is proceeding at a relatively slow but regular pace. It should be at least a decade before any of these possibilities becomes an actuality, which will give us some time to consider in more detail the issues involved.
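Jackson’s “preliminary solution” above, specifying that solution paths must not pass through certain states, is essentially just search with pruning. As a minimal sketch (the graph, names, and forbidden set here are my own illustration, not from the book), a breadth-first search that refuses to expand forbidden states will route around them, or report that no acceptable path exists:

```python
from collections import deque

def search_avoiding(graph, start, goal, forbidden):
    """Breadth-first search that never passes through a forbidden state."""
    if start in forbidden:
        return None
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in visited and nxt not in forbidden:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no path that avoids the forbidden states

# Toy state space: the shortest route to D runs through B, but B is vetoed,
# so the search returns the longer path through C instead.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(search_avoiding(graph, "A", "D", forbidden={"B"}))  # ['A', 'C', 'D']
```

As Jackson notes, this only handles values we can state as explicit prohibitions in advance; giving machines “more sophisticated value systems” is exactly the part this trick doesn’t cover.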
From AI scientist Drew McDermott, in 1976:
In this paper I have criticized AI researchers very harshly. Let me express my faith that people in other fields would, on inspection, be found to suffer from equally bad faults. Most AI workers are responsible people who are aware of the pitfalls of a difficult field and produce good work in spite of them. However, to say anything good about anyone is beyond the scope of this paper.
From the preface to his in-progress history of avant-garde music:
Art Music (or Sound Art) differs from Commercial Music the way a Monet painting differs from IKEA furniture. Although the border is frequently fuzzy, there are obvious differences in the lifestyles and careers of the practitioners. Given that Art Music represents (at best) 3% of all music revenues, the question is why anyone would want to be an art musician at all. It is like asking why anyone would want to be a scientist instead of joining a technology startup. There are pros that are not obvious if one only looks at the macroscopic numbers. To start with, not many commercial musicians benefit from that potentially very lucrative market. In fact, the vast majority live a rather miserable existence. Secondly, commercial music frequently implies a lifestyle of time-consuming gigs in unattractive establishments. But fundamentally being an art musician is a different kind of job, more similar to the job of the scientific laboratory researcher (and of the old-fashioned inventor) than to the job of the popular entertainer. The art musician is pursuing a research program that will be appreciated mainly by his peers and by the “critics” (who function as historians of music), not by the public. The art musician is not a product to be sold in supermarkets but an auteur. The goal of an art musician is, first and foremost, to do what s/he feels is important and, secondly, to secure a place in the history of human civilization. Commercial musicians live to earn a good life. Art musicians live to earn immortality. (Ironically, now that we entered the age of the mass market, a pop star may be more likely to earn immortality than the next Beethoven, but that’s another story). 
Art music knows no stylistic boundaries: the division in classical, jazz, rock, hip hop and so forth still makes sense for commercial music (it basically identifies the sales channel) but ever less sense for art music whose production, distribution and appreciation methods are roughly the same regardless of whether the musician studied in a Conservatory, practiced in a loft or recorded at home using a laptop.
From Mushak & Elliott (2015):
Pharmaceutical companies hire “medical education and communication companies” (MECCs) to create sets of journal articles (and even new journals) designed to place their drugs in a favorable light and to assist in their marketing efforts (Sismondo 2007, 2009; Elliott 2010). These articles are frequently submitted to journals under the names of prominent academic researchers, but the articles are actually written by employees of the MECCs (Sismondo 2007, 2009). While it is obviously difficult to determine what proportion of the medical literature is produced in this fashion, one study used information uncovered in litigation to determine that more than half of the articles published on the antidepressant Zoloft between 1998 and 2000 were ghostwritten (Healy and Cattell 2003). These articles were published in more prestigious journals than the non-ghostwritten articles and were cited five times more often. Significantly, they also painted a rosier picture of Zoloft than the others.
The reason this book [Superintelligence]… has stuck with me is because I have found my mind changed on this topic, somewhat against my will.
…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.
But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.
…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…
…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just followed them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”
Another interesting section begins at 1:46:35 and runs through about 1:52:00.
From Arbesman’s The Half-Life of Facts:
One of the strangest examples of the spread of error is related to an article in the British Medical Journal from 1981. In it, the immunohematologist Terry Hamblin discusses incorrect medical information, including a wonderful story about spinach. He details how, due to a typo, the amount of iron in spinach was thought to be ten times higher than it actually is. While there are only 3.5 milligrams of iron in a 100-gram serving of spinach, the accepted fact became that spinach contained 35 milligrams of iron. Hamblin argues that German scientists debunked this in the 1930s, but the misinformation continued to spread far and wide.
According to Hamblin, the spread of this mistake even led to spinach becoming Popeye the Sailor’s food choice. When Popeye was created, it was recommended he eat spinach for his strength, due to its vaunted iron-based health properties.
This wonderful case of a typo that led to so much incorrect thinking was taken up for decades as a delightful, and somewhat paradigmatic, example of how wrong information could spread widely. The trouble is, the story itself isn’t correct.
While the amount of iron in spinach did seem to be incorrectly reported in the nineteenth century, it was likely due to a confusion between iron oxide—a related chemical—and iron, or contamination in the experiments, rather than a typographical error. The error was corrected relatively rapidly, over the course of years, rather than over many decades.
Mike Sutton, a reader in criminology at Nottingham Trent University, debunked the entire original story several years ago through a careful examination of the literature. He even discovered that Popeye seems to have eaten spinach not for its supposed high quantities of iron, but rather due to vitamin A. While the truth behind the myth is still being excavated, this misinformation — the myth of the error — from over thirty years ago continues to spread.
Geoff Hinton on a show called The Agenda (starting around 9:40):
Interviewer: How many years away do you think we are from a neural network being able to do anything that a brain can do?
Hinton: …I don’t think it will happen in the next five years but beyond that it’s all a kind of fog.
Interviewer: Is there anything about this that makes you nervous?
Hinton: In the very long run, yes. I mean obviously having… [AIs] more intelligent than us is something to be nervous about. It’s not gonna happen for a long time but it is something to be nervous about.
Interviewer: What aspect of it makes you nervous?
Hinton: Will they be nice to us?
On the latest episode of The Ezra Klein Show, Bill Gates elaborated a bit on his views about AI timelines (starting around 24:40):
Klein: I know you take… the risk of creating artificial intelligence that… ends up turning against us pretty seriously. I’m curious where you think we are in terms of creating an artificial intelligence…
Gates: Well, with robotics you have to think of three different milestones.
One is… not-highly-trained labor substitution. Driving, security guard, warehouse work, waiter, maid — things that are largely visual and physical manipulation… [for] that threshold I don’t think you’d get much disagreement that over the next 15 years that the robotic equivalents in terms of cost [and] reliability will become a substitute to those activities…
Then there’s the point at which what we think of as intelligent activities, like writing contracts or doing diagnosis or writing software code, when will the computer start to… have the capacity to work in those areas? There you’d get more disagreement… some would say 30 years, I’d be there. Some would say 60 years. Some might not even see that [happening].
Then there’s a third threshold where the intelligence involved is dramatically better than humanity as a whole, what Bostrom called a “superintelligence.” There you’re gonna get a huge range of views including people who say it won’t ever happen. Ray Kurzweil says it will happen at midnight on July 13, 2045 or something like that and that it’ll all be good. Then you have other people who say it can never happen. Then… there’s a group that I’m more among where you say… we’re not able to predict it, but it’s something we should start thinking about. We shouldn’t restrict activities or slow things down… [but] the potential that that exists even in a 50-year timeframe [means] it’s something to be taken seriously.
But those are different thresholds, and the responses are different.
See Gates’ previous comments on AI timelines and AI risk, here.
UPDATE 07/01/2016: In this video, Gates says that achieving “human-level” AI will take “at least 5 times as long as what Ray Kurzweil says.”
Interviewer: There’s a part of [OpenAI’s introductory blog post] that I found particularly interesting, which says “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” So what are the reasonable questions that we should be thinking about in terms of safety now? …
Sutskever: … I think, and many people think, that full human-level AI … might perhaps be invented in some number of decades … [and] will obviously have a huge, inconceivable impact on society. That’s obvious. And when a technology will predictably have as much impact, there is nothing to lose from starting to think about the nature of this impact … and also whether there is any research that can be done today that will make this impact be more like the kind of impact we want.
The question of safety really boils down to this: …If you look at our neural networks that for example recognize images, they’re doing a pretty good job but once in a while they make errors [and it’s] hard to understand where they come from.
For example I use Google photo search to index my own photos… and it’s really accurate almost all the time, but sometimes I’ll search for a photo of a dog, let’s say, and it will find a photo [that is] clearly not a dog. Why does it make this mistake? You could say “Who cares? It’s just object recognition,” and I agree. But if you look down the line, what you’ll see is that right now we are [just beginning to] create agents, for example the Atari work of DeepMind or the robotics work of Berkeley, where you’re building a neural network that learns to control something which interacts with the world. At present, their cost functions [i.e. goal functions] are manually specified. But it… seems likely that eventually we will be building robots whose cost functions will be learned from demonstration, or from watching a YouTube video, or from the interpretation of natural text…
So now you have these really complicated cost functions that are difficult to understand, and you have a physical robot or some kind of software system which tries to optimize this cost function, and I think these are the kinds of scenarios that could be relevant for AI safety questions. Once you have a system like this, what do you need to do to be reasonably certain that it will do what you want it to do?
…because we don’t work on such systems [today], these questions may seem a bit premature, but once we start building reinforcement learning systems [which] do learn the cost function, I think this question will come much more sharply into focus. Of course it would also be nice to do theoretical research, but it’s not clear to me how it could be done.
Interviewer: So right now we have the opportunity to understand the fundamentals… and then apply them later as the research continues and grows and is able to create more powerful systems?
Sutskever: That would be the ideal case, definitely. I think it’s worth trying to do that. I think it may also be hard to do because it seems like we have such a hard time imagining [what] these future systems will look like. We can speak in general terms: Yes, there will be a cost function most likely. But how, exactly, will it be optimized? It’s a little hard to predict because if you could predict it we could just go ahead and build the systems already.
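Sutskever’s worry about learned cost functions can be made concrete with a toy sketch. This is entirely my own illustration with made-up numbers, not anything from the interview: an agent that picks whichever action scores highest under an imperfect, learned estimate of reward will sometimes pick the action the true reward disfavors, and the noisier the learned model, the more often its choices silently diverge from what we actually wanted.

```python
import random

random.seed(0)

# True (intended) reward for two actions; the gap between them is small.
true_reward = {"safe": 1.0, "risky": 0.9}

def learned_reward(action, noise=0.2):
    # Stand-in for a learned cost/reward model: the true value plus
    # estimation error (assumed uniform here purely for illustration).
    return true_reward[action] + random.uniform(-noise, noise)

best_action = max(true_reward, key=true_reward.get)  # "safe"

flips = 0
trials = 1000
for _ in range(trials):
    pick = max(true_reward, key=learned_reward)  # agent trusts the learned model
    if pick != best_action:
        flips += 1

print(f"agent chose the worse action in {flips}/{trials} trials")
```

The point of the sketch is just that with a manually specified cost function the error term is zero by construction, whereas with a learned one the agent’s behavior inherits every mistake of the model, which is why Sutskever frames safety as understanding where those errors come from.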
Teles (2010) quotes Douglas Baird, a Stanford Law student in the 70s and later dean of the University of Chicago Law School:
In the early seventies, people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong [because they’d ignored economics].1 And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… By the time I came along, and I wasn’t trained as an economist, it was clear that… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted from a major law review was four days. I’m not Richard Posner, and few of us are. I got out of law school, and I was interested in bankruptcy law, which was inhabited by intellectual midgets… It was a complete intellectual wasteland. I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.
Eric Lander tells his version of the story. Here is his take — which might or might not be reasonable — on lessons learned from the story:
The most important [lesson] is that medical breakthroughs often emerge from completely unpredictable origins. The early heroes of CRISPR were not on a quest to edit the human genome—or even to study human disease. Their motivations were a mix of personal curiosity (to understand bizarre repeat sequences in salt-tolerant microbes), military exigency (to defend against biological warfare), and industrial application (to improve yogurt production).
The history also illustrates the growing role in biology of “hypothesis-free” discovery based on big data. The discovery of the CRISPR loci, their biological function, and the tracrRNA all emerged not from wet-bench experiments but from open-ended bioinformatic exploration of large-scale, often public, genomic datasets. “Hypothesis-driven” science of course remains essential, but the 21st century will see an increasing partnership between these two approaches.
It is instructive that so many of the Heroes of CRISPR did their seminal work near the very start of their scientific careers (including Mojica, Horvath, Marraffini, Charpentier, Vogel, and Zhang)—in several cases, before the age of 30. With youth often comes a willingness to take risks—on uncharted directions and seemingly obscure questions—and a drive to succeed. It’s an important reminder at a time when the median age for first grants from the NIH has crept up to 42.
Notably, too, many did their landmark work in places that some might regard as off the beaten path of science (Alicante, Spain; France’s Ministry of Defense; Danisco’s corporate labs; and Vilnius, Lithuania). And, their seminal papers were often rejected by leading journals—appearing only after considerable delay and in less prominent venues. These observations may not be a coincidence: the settings may have afforded greater freedom to pursue less trendy topics but less support about how to overcome skepticism by journals and reviewers.
Finally, the narrative underscores that scientific breakthroughs are rarely eureka moments. They are typically ensemble acts, played out over a decade or more, in which the cast becomes part of something greater than what any one of them could do alone.
Warning: some people on Twitter are saying this article is basically PR for Lander’s Broad Institute, where Feng Zhang did his CRISPR work. Zhang is currently in a patent dispute over CRISPR with Jennifer Doudna.
[One problem] is that industry-sponsored trials are more likely to show a beneficial effect than non-industry funded trials [261,536–540]… This bias can have paradoxical consequences. For example, Heres et al. examined randomized trials that compared different antipsychotic medications. They found that olanzapine beat risperidone, risperidone beat quetiapine, and quetiapine beat olanzapine! The relative success of the drugs was directly related to who sponsored the trial. For example, if the manufacturers of risperidone sponsored the trial, then risperidone was more likely to appear more effective than the others.
The reference is Heres et al. (2006).
Archie Cochrane (who inspired the creation of the Cochrane Collaboration) explained what happened when he reported the preliminary results of a trial that compared home versus hospital treatment for varicose veins. The Medical Research Council gave its ethical approval, but cardiologists in the planned location of the trial (Cardiff) refused to take part because they were certain, based on their expertise, that hospital treatment was far superior…
Eventually Cochrane succeeded at beginning the trial in Bristol. Six months into the trial, the ethics committee called on Cochrane to compile and report on the preliminary results. At that stage, home care showed a slight but not statistically significant benefit. Cochrane, however, decided to play a trick on his colleagues: he prepared two reports, one with the actual number of deaths, and one with the number reversed. The rest of the story is best told from Cochrane’s perspective:
“As we were going into the committee, in the anteroom, I showed some cardiologists the results. They were vociferous in their abuse: ‘Archie,’ they said, ‘we always thought you were unethical. You must stop the trial at once.’ I let them have their way for some time and then apologised and gave them the true results, challenging them to say, as vehemently, that coronary care units should be stopped immediately. There was dead silence…”
In the following passage, Coyne et al. (2010) repeatedly cite Authentic Happiness, by positive psychology co-founder and past APA president Martin Seligman, as an example of what they’re saying is wrong with positive psychology:
Critical discussions of the potential contributions of a positive psychology have been hampered by the sloganeering of the leaders of the movement and their labeling of the alternative as a “negative psychology”…
The ridiculing of pessimists as losers in positive psychology self-help books, money back guarantees on websites offering personal coaches and self-help techniques claiming to promote happiness, and the presentation of pseudoscientific happiness regression equations [Happiness = Set range + Circumstances + Factors under voluntary control] all… suggest that, while the leaders of positive psychology claim it to be science based, they feel free to deliver platonic noble lies to the unwashed masses…
…support for such victim blaming can come not only from the fringe, but from mainstream positive psychology. Anyone who doubts this need only to Google “positive psychology” and “coaching” and experiment by adding some names of proponents of mainstream positive psychology. They will soon be brought to websites with claims that retaining a personal coach or engaging in web-based exercises for a substantial fee is guaranteed to instill happiness that lasts and that happiness is related to health. More efficiently, the skeptical reader can reach websites with similar claims by simply joining the American Psychological Association listserv Friends of Positive Psychology… and by double clicking on the web links provided in the signatures of posters there.
From Epic Measures:
Of the 2 billion deaths since 1970 the new Global Burden would ultimately cover, only about 25 percent had been recorded in a vital registration system accessible to researchers… [Christopher Murray’s] proposal to the Gates Foundation had said the entire project would take three years to complete, giving a deadline of July 2010. Three years to gather and analyze all available details about the health of every person on Earth…
Different countries brought a varied set of challenges. In China, regulations forbade almost all core health data from leaving the country, so Chinese partners had to do analyses and share the results with Seattle. U.S. states, by contrast, sold annual databases of their in-patient hospital users to anyone in the world, for prices ranging from $35 to $2,000. In Ghana, almost the exact equivalent records were available free.
In Nigeria, Africa’s largest country by population, the data indexers surveyed hospitals, police stations, health clinics, libraries, colonial archives, and even cemetery plot records. In Libya, the latest census and civil registries turned out to be available online, but only after clicking through seven Web pages written in Arabic. In Iraq, during the end of the American-led occupation, months of spadework revealed the existence of two recent government household surveys. These would help estimate how many Iraqis were being killed or injured by war, as opposed to other causes, a hugely disputed topic. Trying e-mail, Skype, and phone, Speyer finally managed to reach the Iraqi official in charge of statistics and information technology. “She said they’d be happy to share the survey microdata with us, and I said, ‘Can you e-mail it or upload it to a website?’” he recalls. “She said no. She burned it onto a CD and told me I had to pick that up in the Baghdad Green Zone.”
…Another completely separate stream of information, and a big one, came from others’ published scientific studies. About what? About “health.” There were ten thousand articles a month published with a reference to epidemiology. To the maximum degree possible, Murray wanted all of those results pulled, digitized, and entered into Global Burden, too. Put another way, a fraction of a fraction of the data supplied to the study’s scientists was to be everything everyone else had ever discovered.
…in the summer of 1974, I set about to do battle with Hans Eysenck and prove that psychotherapy – my psychotherapy – was an effective treatment. I joined the battle with Eysenck’s (1965) review of the psychotherapy outcome literature. Eysenck began his famous reviews by eliminating from consideration all theses, dissertations, project reports or other contemptible items not published in peer-reviewed journals. This arbitrary exclusion of literally hundreds of evaluations of therapy outcomes was indefensible to my mind. It’s one thing to believe that peer review guarantees truth; it is quite another to believe that all truth appears in peer-reviewed journals.
Next, Eysenck eliminated any experiment that did not include an untreated control group. This makes no sense whatever, because head-to-head comparisons of two different types of psychotherapy contribute a great deal to our knowledge of psychotherapy effects…
Having winnowed a huge literature down to 11 studies (!) by whim and prejudice, Eysenck proceeded to describe their findings solely in terms of whether or not statistical significance was reached at the .05 level…
Finally, Eysenck did something truly staggering in its illogic. If a study showed significant differences favoring therapy over control on what he regarded as a ‘subjective’ measure of outcome (e.g., the Rorschach or the Thematic Apperception Test), he discounted the findings entirely. So be it; he may be a tough judge, but that’s his right. But then, when encountering a study that showed differences on an ‘objective’ outcome measure (e.g., grade-point average) but no differences on a subjective measure (such as the Thematic Apperception Test), Eysenck discounted the entire study because the outcome differences were ‘inconsistent’.
Looking back on it, I can almost credit Eysenck with the invention of meta-analysis by anti-thesis. By doing everything in the opposite way that he did, one would have been led straight to meta-analysis. Adopt an a posteriori attitude toward including studies in a synthesis, replace statistical significance by measures of strength of relationship or effect, and view the entire task of integration as a problem in data analysis where ‘studies’ are quantified and the resulting database subjected to statistical analysis, and meta-analysis assumes its first formulation. Thank you, Professor Eysenck.
…[Our] first meta-analysis of the psychotherapy outcome research finished in 1974-1975 found that the typical therapy trial raised the treatment group to a level about two-thirds of a standard deviation on average above the average of untreated controls…
…[Researchers’] reactions [to the meta-analysis] foreshadowed the eventual reception of the work among psychologists. Some said that the work was revolutionary and proved what they had known all along; others said it was wrongheaded and meaningless. The widest publication of the work came in 1977, in an article by Mary Lee Smith and myself in the American Psychologist. Eysenck responded to the article by calling it ‘mega-silliness’…
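The recipe Glass describes — quantify each study as a measure of effect strength rather than a significance verdict, then treat the collection of effect sizes as a dataset to analyze — can be sketched in a few lines. This is a minimal illustration, not Smith and Glass’s actual procedure; the study numbers are invented, and the effect size used is the simple standardized mean difference (treatment mean minus control mean, divided by the control standard deviation, often called Glass’s delta):

```python
# Each hypothetical "study" is reduced to one number: a standardized
# mean difference between treated and untreated groups.
studies = [
    # (treatment_mean, control_mean, control_sd) -- invented values
    (54.0, 50.0, 10.0),
    (12.5, 11.0, 3.0),
    (78.0, 70.0, 15.0),
]

# Glass's delta for each study: (M_treatment - M_control) / SD_control
effects = [(t - c) / sd for t, c, sd in studies]

# The "meta-analysis" in its first formulation: the effect sizes
# themselves become the data, here just averaged.
mean_effect = sum(effects) / len(effects)
print(f"effect sizes: {[round(e, 2) for e in effects]}")
print(f"mean effect:  {mean_effect:.2f} standard deviations")
```

A finding like the 1974–1975 result — treated groups ending up about two-thirds of a standard deviation above untreated controls — is a statement about exactly this kind of pooled quantity; modern meta-analysis additionally weights studies by precision and models between-study variation.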
From Other Classical Musics: Fifteen Great Traditions, on what the book means by “classical musics” and why jazz is one of them:
The term ‘art music’ is too broad… ‘Court music’ would have worked for some traditions, but not for all; ‘classical’ is the adjective best capable of covering what every society regards as its own Great Tradition…
According to our rule-of-thumb, a classical music will have evolved in a political-economic environment with built-in continuity… where a wealthy class of connoisseurs has stimulated its creation by a quasi-priesthood of professionals; it will have enjoyed high social esteem. It will also have had the time and space to develop rules of composition and performance, and to allow the evolution of a canon of works, or forms… almost all classical music has vernacular roots, and periodically renews itself from them;
…As a newish nation whose dominant culture is essentially European, America has – like Australasia – imported Europe’s classical music, but in jazz it has its own indigenous classical form. Those in doubt as to whether jazz belongs in this book should bear in mind that its controlled-improvisatory nature aligns it with almost all other classical musics. Doubters might also consider how closely jazz’s historical trajectory mirrors that of European music, if telescoped into a much shorter time. It too has vernacular roots, and was raised by a series of master-musicians to the status of an art-music; it too has evolved via a ‘classical’ period through a succession of modernist phases, and has become every bit as esoteric as European classical modernism. Since the 1950s jazz has had its own early-music revivalists (from trad bands to Wynton Marsalis) and, again like Western classical music, it too seems unsure where to go next. And now that it’s gone native on every continent, jazz is as global as Beethoven.