Scaruffi on art music

From the preface to his in-progress history of avant-garde music:

Art Music (or Sound Art) differs from Commercial Music the way a Monet painting differs from IKEA furniture. Although the border is frequently fuzzy, there are obvious differences in the lifestyles and careers of the practitioners. Given that Art Music represents (at best) 3% of all music revenues, the question is why anyone would want to be an art musician at all. It is like asking why anyone would want to be a scientist instead of joining a technology startup. There are pros that are not obvious if one only looks at the macroscopic numbers. To start with, not many commercial musicians benefit from that potentially very lucrative market. In fact, the vast majority live a rather miserable existence. Secondly, commercial music frequently implies a lifestyle of time-consuming gigs in unattractive establishments. But fundamentally being an art musician is a different kind of job, more similar to the job of the scientific laboratory researcher (and of the old-fashioned inventor) than to the job of the popular entertainer. The art musician is pursuing a research program that will be appreciated mainly by his peers and by the “critics” (who function as historians of music), not by the public. The art musician is not a product to be sold in supermarkets but an auteur. The goal of an art musician is, first and foremost, to do what s/he feels is important and, secondly, to secure a place in the history of human civilization. Commercial musicians live to earn a good life. Art musicians live to earn immortality. (Ironically, now that we have entered the age of the mass market, a pop star may be more likely to earn immortality than the next Beethoven, but that’s another story). Art music knows no stylistic boundaries: the division into classical, jazz, rock, hip hop and so forth still makes sense for commercial music (it basically identifies the sales channel) but ever less sense for art music, whose production, distribution and appreciation methods are roughly the same regardless of whether the musician studied in a Conservatory, practiced in a loft or recorded at home using a laptop.

Medical ghostwriting

From Mushak & Elliott (2015):

Pharmaceutical companies hire “medical education and communication companies” (MECCs) to create sets of journal articles (and even new journals) designed to place their drugs in a favorable light and to assist in their marketing efforts (Sismondo 2007, 2009; Elliott 2010). These articles are frequently submitted to journals under the names of prominent academic researchers, but the articles are actually written by employees of the MECCs (Sismondo 2007, 2009). While it is obviously difficult to determine what proportion of the medical literature is produced in this fashion, one study used information uncovered in litigation to determine that more than half of the articles published on the antidepressant Zoloft between 1998 and 2000 were ghostwritten (Healy and Cattell 2003). These articles were published in more prestigious journals than the non-ghostwritten articles and were cited five times more often. Significantly, they also painted a rosier picture of Zoloft than the others.

CGP Grey on Superintelligence

CGP Grey recommends Nick Bostrom’s Superintelligence:

The reason this book [Superintelligence]… has stuck with me is because I have found my mind changed on this topic, somewhat against my will.

…For almost all of my life… I would’ve placed myself very strongly in the camp of techno-optimists. More technology, faster… it’s nothing but sunshine and rainbows ahead… When people would talk about the “rise of the machines”… I was always very dismissive of this, in no small part because those movies are ridiculous… [and] I was never convinced there was any kind of problem here.

But [Superintelligence] changed my mind so that I am now much more in the camp of [thinking that the development of general-purpose AI] can seriously present an existential threat to humanity, in the same way that an asteroid collision… is what you’d classify as a serious existential threat to humanity — like, it’s just over for people.

…I keep thinking about this because I’m uncomfortable with having this opinion. Like, sometimes your mind changes and you don’t want it to change, and I feel like “Boy, I liked it much better when I just thought that the future was always going to be great and there’s not any kind of problem”…

…The thing about this book that I found really convincing is that it used no metaphors at all. It was one of these books which laid out its basic assumptions, and then just follows them through to a conclusion… The book is just very thorough at trying to go down every path and every combination of [assumptions], and what I realized was… “Oh, I just never did sit down and think through this position [that it will eventually be possible to build general-purpose AI] to its logical conclusion.”

Another interesting section begins at 1:46:35 and runs through about 1:52:00.

The silly history of spinach

From Arbesman’s The Half-Life of Facts:

One of the strangest examples of the spread of error is related to an article in the British Medical Journal from 1981. In it, the immunohematologist Terry Hamblin discusses incorrect medical information, including a wonderful story about spinach. He details how, due to a typo, the amount of iron in spinach was thought to be ten times higher than it actually is. While there are only 3.5 milligrams of iron in a 100-gram serving of spinach, the accepted fact became that spinach contained 35 milligrams of iron. Hamblin argues that German scientists debunked this in the 1930s, but the misinformation continued to spread far and wide.

According to Hamblin, the spread of this mistake even led to spinach becoming Popeye the Sailor’s food choice. When Popeye was created, it was recommended he eat spinach for his strength, due to its vaunted iron-based health properties.

This wonderful case of a typo that led to so much incorrect thinking was taken up for decades as a delightful, and somewhat paradigmatic, example of how wrong information could spread widely. The trouble is, the story itself isn’t correct.

While the amount of iron in spinach did seem to be incorrectly reported in the nineteenth century, it was likely due to a confusion between iron oxide—a related chemical—and iron, or contamination in the experiments, rather than a typographical error. The error was corrected relatively rapidly, over the course of years, rather than over many decades.

Mike Sutton, a reader in criminology at Nottingham Trent University, debunked the entire original story several years ago through a careful examination of the literature. He even discovered that Popeye seems to have eaten spinach not for its supposed high quantities of iron, but rather due to vitamin A. While the truth behind the myth is still being excavated, this misinformation — the myth of the error — from over thirty years ago continues to spread.

Geoff Hinton on long-term AI outcomes

Geoff Hinton on a show called The Agenda (starting around 9:40):

Interviewer: How many years away do you think we are from a neural network being able to do anything that a brain can do?

Hinton: …I don’t think it will happen in the next five years but beyond that it’s all a kind of fog.

Interviewer: Is there anything about this that makes you nervous?

Hinton: In the very long run, yes. I mean obviously having… [AIs] more intelligent than us is something to be nervous about. It’s not gonna happen for a long time but it is something to be nervous about.

Interviewer: What aspect of it makes you nervous?

Hinton: Will they be nice to us?

Bill Gates on AI timelines

On the latest episode of The Ezra Klein Show, Bill Gates elaborated a bit on his views about AI timelines (starting around 24:40):

Klein: I know you take… the risk of creating artificial intelligence that… ends up turning against us pretty seriously. I’m curious where you think we are in terms of creating an artificial intelligence…

Gates: Well, with robotics you have to think of three different milestones.

One is… not-highly-trained labor substitution. Driving, security guard, warehouse work, waiter, maid — things that are largely visual and physical manipulation… [for] that threshold I don’t think you’d get much disagreement that over the next 15 years the robotic equivalents in terms of cost [and] reliability will become a substitute for those activities…

Then there’s the point at which what we think of as intelligent activities, like writing contracts or doing diagnosis or writing software code, when will the computer start to… have the capacity to work in those areas? There you’d get more disagreement… some would say 30 years, I’d be there. Some would say 60 years. Some might not even see that [happening].

Then there’s a third threshold where the intelligence involved is dramatically better than humanity as a whole, what Bostrom called a “superintelligence.” There you’re gonna get a huge range of views including people who say it won’t ever happen. Ray Kurzweil says it will happen at midnight on July 13, 2045 or something like that and that it’ll all be good. Then you have other people who say it can never happen. Then… there’s a group that I’m more among where you say… we’re not able to predict it, but it’s something we should start thinking about. We shouldn’t restrict activities or slow things down… [but] the potential that that exists even in a 50-year timeframe [means] it’s something to be taken seriously.

But those are different thresholds, and the responses are different.

See Gates’ previous comments on AI timelines and AI risk, here.

UPDATE 07/01/2016: In this video, Gates says that achieving “human-level” AI will take “at least 5 times as long as what Ray Kurzweil says.”

Sutskever on Talking Machines

The latest episode of Talking Machines features an interview with Ilya Sutskever, the research director at OpenAI. Of particular interest were his comments on long-term AI safety (starting around 28:10):

Interviewer: There’s a part of [OpenAI’s introductory blog post] that I found particularly interesting, which says “It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.” So what are the reasonable questions that we should be thinking about in terms of safety now? …

Sutskever: … I think, and many people think, that full human-level AI … might perhaps be invented in some number of decades … [and] will obviously have a huge, inconceivable impact on society. That’s obvious. And when a technology will predictably have as much impact, there is nothing to lose from starting to think about the nature of this impact … and also whether there is any research that can be done today that will make this impact be more like the kind of impact we want.

The question of safety really boils down to this: …If you look at our neural networks that for example recognize images, they’re doing a pretty good job but once in a while they make errors [and it’s] hard to understand where they come from.

For example I use Google photo search to index my own photos… and it’s really accurate almost all the time, but sometimes I’ll search for a photo of a dog, let’s say, and it will find a photo [that is] clearly not a dog. Why does it make this mistake? You could say “Who cares? It’s just object recognition,” and I agree. But if you look down the line, what you’ll see is that right now we are [just beginning to] create agents, for example the Atari work of DeepMind or the robotics work of Berkeley, where you’re building a neural network that learns to control something which interacts with the world. At present, their cost functions [i.e. goal functions] are manually specified. But it… seems likely that eventually we will be building robots whose cost functions will be learned from demonstration, or from watching a YouTube video, or from the interpretation of natural text…

So now you have these really complicated cost functions that are difficult to understand, and you have a physical robot or some kind of software system which tries to optimize this cost function, and I think these are the kinds of scenarios that could be relevant for AI safety questions. Once you have a system like this, what do you need to do to be reasonably certain that it will do what you want it to do?

…because we don’t work on such systems [today], these questions may seem a bit premature, but once we start building reinforcement learning systems [which] do learn the cost function, I think this question will come much more sharply into focus. Of course it would also be nice to do theoretical research, but it’s not clear to me how it could be done.

Interviewer: So right now we have the opportunity to understand the fundamentals… and then apply them later as the research continues and grows and is able to create more powerful systems?

Sutskever: That would be the ideal case, definitely. I think it’s worth trying to do that. I think it may also be hard to do because it seems like we have such a hard time imagining [what] these future systems will look like. We can speak in general terms: Yes, there will be a cost function most likely. But how, exactly, will it be optimized? It’s a little hard to predict because if you could predict it we could just go ahead and build the systems already.
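Sutskever’s distinction between hand-specified and learned cost functions is easy to make concrete. Below is a toy sketch, not anything from OpenAI or DeepMind; every function and number is invented for illustration. It contrasts a reward an engineer writes down directly with one inferred from demonstrations, both handed to the same optimizer:

```python
def manual_reward(state):
    # Hand-specified: the engineer decides directly what "good" means
    # (here: keep a scalar state near a target of 10.0).
    return -abs(state - 10.0)

# "Demonstrations": states a demonstrator chose to visit (made-up data).
demonstrations = [9.5, 10.2, 10.1, 9.8]

def learned_reward(state):
    # Learned-from-demonstration stand-in: score states by proximity
    # to what the demonstrator did, rather than by an explicit rule.
    return -min(abs(state - d) for d in demonstrations)

def greedy_optimize(reward, state=0.0, step=0.5, iters=50):
    # The optimizer is indifferent to where the reward came from;
    # it just climbs whatever signal it is given.
    for _ in range(iters):
        state = max([state - step, state, state + step], key=reward)
    return state

print(greedy_optimize(manual_reward))   # converges near the hand-picked target, 10.0
print(greedy_optimize(learned_reward))  # converges near the demonstrated states
```

The safety question Sutskever raises lives in the gap between the two cases: with `learned_reward`, nobody ever wrote down what the system is actually optimizing.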

Applying economics to the law for the first time

Teles (2010) quotes Douglas Baird, a Stanford Law student in the 70s and later dean of the University of Chicago Law School:

In the early seventies, people like Posner would come in and spend six weeks studying family law, and they’d write a couple of articles explaining why everything everyone was saying in family law was 100 percent wrong [because they’d ignored economics]. And then the replies would be, “No, we were only 80 percent wrong.” And Posner never got things exactly right, but he always turned everything upside down, and people talked about law differently… By the time I came along, and I wasn’t trained as an economist, it was clear that… doing great work was easy… I used to say that this was just like knocking over Coke bottles with a baseball bat… You could just go in and write something revolutionary and go in tomorrow and write another article. I remember writing articles where the time between getting the idea and getting it accepted by a major law review was four days. I’m not Richard Posner, and few of us are. I got out of law school, and I was interested in bankruptcy law, which was inhabited by intellectual midgets… It was a complete intellectual wasteland. I got tenure by saying, “Jeez, a dollar today is worth more than a dollar tomorrow.” You got tenure for that! The reality is that there was just an open field begging for people to do great work.
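For anyone who wants the tenure-winning insight spelled out, it is just discounting; a minimal sketch (the rate and horizon are invented for illustration):

```python
def present_value(amount, rate, years):
    # A dollar received `years` from now, discounted at `rate` per year.
    return amount / (1 + rate) ** years

print(present_value(1.00, 0.05, 1))  # ~0.952: at 5%, next year's dollar is ~95 cents today
```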


Discovering CRISPR

Eric Lander tells his version of the story. Here is his take — which might or might not be reasonable — on lessons learned from the story:

The most important [lesson] is that medical breakthroughs often emerge from completely unpredictable origins. The early heroes of CRISPR were not on a quest to edit the human genome—or even to study human disease. Their motivations were a mix of personal curiosity (to understand bizarre repeat sequences in salt-tolerant microbes), military exigency (to defend against biological warfare), and industrial application (to improve yogurt production).

The history also illustrates the growing role in biology of “hypothesis-free” discovery based on big data. The discovery of the CRISPR loci, their biological function, and the tracrRNA all emerged not from wet-bench experiments but from open-ended bioinformatic exploration of large-scale, often public, genomic datasets. “Hypothesis-driven” science of course remains essential, but the 21st century will see an increasing partnership between these two approaches.

It is instructive that so many of the Heroes of CRISPR did their seminal work near the very start of their scientific careers (including Mojica, Horvath, Marraffini, Charpentier, Vogel, and Zhang)—in several cases, before the age of 30. With youth often comes a willingness to take risks—on uncharted directions and seemingly obscure questions—and a drive to succeed. It’s an important reminder at a time when the median age for first grants from the NIH has crept up to 42.

Notably, too, many did their landmark work in places that some might regard as off the beaten path of science (Alicante, Spain; France’s Ministry of Defense; Danisco’s corporate labs; and Vilnius, Lithuania). And, their seminal papers were often rejected by leading journals—appearing only after considerable delay and in less prominent venues. These observations may not be a coincidence: the settings may have afforded greater freedom to pursue less trendy topics but less support about how to overcome skepticism by journals and reviewers.

Finally, the narrative underscores that scientific breakthroughs are rarely eureka moments. They are typically ensemble acts, played out over a decade or more, in which the cast becomes part of something greater than what any one of them could do alone.

Warning: some people on Twitter are saying this article is basically PR for Lander’s Broad Institute, where Feng Zhang did his CRISPR work. Zhang is currently in a patent dispute over CRISPR with Jennifer Doudna.

ETA: Doudna comments on the article. And here is a “Landergate” link list.

Industry funding defeats transitivity

From The Philosophy of Evidence-Based Medicine:

[One problem] is that industry-sponsored trials are more likely to show a beneficial effect than non-industry funded trials [261,536–540]… This bias can have paradoxical consequences. For example, Heres et al. [541] examined randomized trials that compared different antipsychotic medications. They found that olanzapine beat risperidone, risperidone beat quetiapine, and quetiapine beat olanzapine! The relative success of the drugs was directly related to who sponsored the trial. For example, if the manufacturers of risperidone sponsored the trial, then risperidone was more likely to appear more effective than the others.

The reference is Heres et al. (2006).

Cochrane’s trick

From The Philosophy of Evidence-Based Medicine:

Archie Cochrane (who inspired the creation of the Cochrane Collaboration) explained what happened when he reported the preliminary results of a trial that compared home versus hospital treatment for heart attacks. The Medical Research Council gave its ethical approval, but cardiologists in the planned location of the trial (Cardiff) refused to take part because they were certain, based on their expertise, that hospital treatment was far superior…

Eventually Cochrane succeeded in beginning the trial in Bristol. Six months into the trial, the ethics committee called on Cochrane to compile and report on the preliminary results. At that stage, home care showed a slight but not statistically significant benefit. Cochrane, however, decided to play a trick on his colleagues: he prepared two reports, one with the actual numbers of deaths, and one with the numbers for the two groups reversed. The rest of the story is best told from Cochrane’s perspective:

“As we were going into the committee, in the anteroom, I showed some cardiologists the results. They were vociferous in their abuse: ‘Archie,’ they said, ‘we always thought you were unethical. You must stop the trial at once.’ I let them have their way for some time and then apologised and gave them the true results, challenging them to say, as vehemently, that coronary care units should be stopped immediately. There was dead silence…”

Coyne et al. are kinda pissed about Authentic Happiness

In the following passage, Coyne et al. (2010) repeatedly cite Authentic Happiness, by positive psychology co-founder and past APA president Martin Seligman, as an example of what they’re saying is wrong with positive psychology:

Critical discussions of the potential contributions of a positive psychology have been hampered by the sloganeering of the leaders of the movement and their labeling of the alternative as a “negative psychology”…

The ridiculing of pessimists as losers in positive psychology self-help books, money back guarantees on websites offering personal coaches and self-help techniques claiming to promote happiness, and the presentation of pseudoscientific happiness regression equations [Happiness = Set range + Circumstances + Factors under voluntary control] all… suggest that, while the leaders of positive psychology claim it to be science based, they feel free to deliver Platonic noble lies to the unwashed masses…

…support for such victim blaming can come not only from the fringe, but from mainstream positive psychology. Anyone who doubts this need only Google “positive psychology” and “coaching” and experiment by adding some names of proponents of mainstream positive psychology. They will soon be brought to websites with claims that retaining a personal coach or engaging in web-based exercises for a substantial fee is guaranteed to instill happiness that lasts and that happiness is related to health. More efficiently, the skeptical reader can reach websites with similar claims by simply joining the American Psychological Association listserv Friends of Positive Psychology… and by double clicking on the web links provided in the signatures of posters there.

Data collection for the Global Burden of Disease project

From Epic Measures:

Of the 2 billion deaths since 1970 the new Global Burden would ultimately cover, only about 25 percent had been recorded in a vital registration system accessible to researchers… [Christopher Murray’s] proposal to the Gates Foundation had said the entire project would take three years to complete, giving a deadline of July 2010. Three years to gather and analyze all available details about the health of every person on Earth…

Different countries brought a varied set of challenges. In China, regulations forbade almost all core health data from leaving the country, so Chinese partners had to do analyses and share the results with Seattle. U.S. states, by contrast, sold annual databases of their in-patient hospital users to anyone in the world, for prices ranging from $35 to $2,000. In Ghana, almost the exact equivalent records were available free.

In Nigeria, Africa’s largest country by population, the data indexers surveyed hospitals, police stations, health clinics, libraries, colonial archives, and even cemetery plot records. In Libya, the latest census and civil registries turned out to be available online, but only after clicking through seven Web pages written in Arabic. In Iraq, during the end of the American-led occupation, months of spadework revealed the existence of two recent government household surveys. These would help estimate how many Iraqis were being killed or injured by war, as opposed to other causes, a hugely disputed topic. Trying e-mail, Skype, and phone, Speyer finally managed to reach the Iraqi official in charge of statistics and information technology. “She said they’d be happy to share the survey microdata with us, and I said, ‘Can you e-mail it or upload it to a website?’” he recalls. “She said no. She burned it onto a CD and told me I had to pick that up in the Baghdad Green Zone.”

…Another completely separate stream of information, and a big one, came from others’ published scientific studies. About what? About “health.” There were ten thousand articles a month published with a reference to epidemiology. To the maximum degree possible, Murray wanted all of those results pulled, digitized, and entered into Global Burden, too. Put another way, a fraction of a fraction of the data supplied to the study’s scientists was to be everything everyone else had ever discovered.

Glass invented meta-analysis to prove someone wrong

Apparently Gene Glass invented meta-analysis because he wanted to prove someone wrong:

…in the summer of 1974, I set about to do battle with Hans Eysenck and prove that psychotherapy – my psychotherapy – was an effective treatment. I joined the battle with Eysenck’s (1965) review of the psychotherapy outcome literature. Eysenck began his famous reviews by eliminating from consideration all theses, dissertations, project reports or other contemptible items not published in peer-reviewed journals. This arbitrary exclusion of literally hundreds of evaluations of therapy outcomes was indefensible to my mind. It’s one thing to believe that peer review guarantees truth; it is quite another to believe that all truth appears in peer-reviewed journals.

Next, Eysenck eliminated any experiment that did not include an untreated control group. This makes no sense whatever, because head-to-head comparisons of two different types of psychotherapy contribute a great deal to our knowledge of psychotherapy effects…

Having winnowed a huge literature down to 11 studies (!) by whim and prejudice, Eysenck proceeded to describe their findings solely in terms of whether or not statistical significance was reached at the .05 level…

Finally, Eysenck did something truly staggering in its illogic. If a study showed significant differences favoring therapy over control on what he regarded as a ‘subjective’ measure of outcome (e.g., the Rorschach or the Thematic Apperception Test), he discounted the findings entirely. So be it; he may be a tough judge, but that’s his right. But then, when encountering a study that showed differences on an ‘objective’ outcome measure (e.g., grade-point average) but no differences on a subjective measure (such as the Thematic Apperception Test), Eysenck discounted the entire study because the outcome differences were ‘inconsistent’.

Looking back on it, I can almost credit Eysenck with the invention of meta-analysis by anti-thesis. By doing everything in the opposite way that he did, one would have been led straight to meta-analysis. Adopt an a posteriori attitude toward including studies in a synthesis, replace statistical significance by measures of strength of relationship or effect, and view the entire task of integration as a problem in data analysis where ‘studies’ are quantified and the resulting database subjected to statistical analysis, and meta-analysis assumes its first formulation. Thank you, Professor Eysenck.

…[Our] first meta-analysis of the psychotherapy outcome research finished in 1974-1975 found that the typical therapy trial raised the treatment group to a level about two-thirds of a standard deviation on average above the average of untreated controls…

…[Researchers’] reactions [to the meta-analysis] foreshadowed the eventual reception of the work among psychologists. Some said that the work was revolutionary and proved what they had known all along; others said it was wrongheaded and meaningless. The widest publication of the work came in 1977, in an article by Mary Lee Smith and myself in the American Psychologist. Eysenck responded to the article by calling it ‘mega-silliness’…
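Glass’s formulation in the passage above is concrete enough to sketch. Here is a minimal illustration (the summary statistics are made up, not Smith and Glass’s data): quantify each study as a standardized effect size, then analyze the collection of effect sizes as ordinary data.

```python
# Each study is reduced to an effect size: treated mean minus control mean,
# in units of the control group's standard deviation (Glass's delta).
# The tuples are invented summary statistics, not real study results.
studies = [
    (104.0, 100.0, 15.0),  # (mean_treated, mean_control, sd_control)
    (55.0, 50.0, 10.0),
    (7.2, 6.0, 2.0),
]

effect_sizes = [(mt - mc) / sd for mt, mc, sd in studies]
mean_effect = sum(effect_sizes) / len(effect_sizes)
print(effect_sizes)  # [0.267, 0.5, 0.6] -- every study on a common scale
print(mean_effect)   # ~0.46 standard deviations for these invented numbers
```

Smith and Glass’s actual 1977 synthesis aggregated hundreds of such effect sizes, with more careful weighting, to arrive at the roughly two-thirds-of-a-standard-deviation figure quoted above.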

Jazz and other classical musics

From Other Classical Musics: Fifteen Great Traditions, on what the book means by “classical musics” and why jazz is one of them:

The term ‘art music’ is too broad… ‘Court music’ would have worked for some traditions, but not for all; ‘classical’ is the adjective best capable of covering what every society regards as its own Great Tradition…

According to our rule-of-thumb, a classical music will have evolved in a political-economic environment with built-in continuity… where a wealthy class of connoisseurs has stimulated its creation by a quasi-priesthood of professionals; it will have enjoyed high social esteem. It will also have had the time and space to develop rules of composition and performance, and to allow the evolution of a canon of works, or forms… almost all classical music has vernacular roots, and periodically renews itself from them;

…As a newish nation whose dominant culture is essentially European, America has – like Australasia – imported Europe’s classical music, but in jazz it has its own indigenous classical form. Those in doubt as to whether jazz belongs in this book should bear in mind that its controlled-improvisatory nature aligns it with almost all other classical musics. Doubters might also consider how closely jazz’s historical trajectory mirrors that of European music, if telescoped into a much shorter time. It too has vernacular roots, and was raised by a series of master-musicians to the status of an art-music; it too has evolved via a ‘classical’ period through a succession of modernist phases, and has become every bit as esoteric as European classical modernism. Since the 1950s jazz has had its own early-music revivalists (from trad bands to Wynton Marsalis) and, again like Western classical music, it too seems unsure where to go next. And now that it’s gone native on every continent, jazz is as global as Beethoven.

Chomsky on the War on Drugs

More Chomsky, from Understanding Power:

So take a significant question you never hear asked despite this supposed “Drug War” which has been going on for years and years: how many bankers and chemical corporation executives are in prison in the United States for drug-related offenses? Well, there was recently an O.E.C.D. [Organization for Economic Cooperation and Development] study of the international drug racket, and they estimated that about a half-trillion dollars of drug money gets laundered internationally every year—more than half of it through American banks. I mean, everybody talks about Colombia as the center of drug-money laundering, but they’re a small player: they have about $10 billion going through, U.S. banks have about $260 billion.

Okay, that’s serious crime—it’s not like robbing a grocery store. So American bankers are laundering huge amounts of drug money, everybody knows it: how many bankers are in jail? None. But if a black kid gets caught with a joint, he goes to jail.

…Or why not ask another question — how many U.S. chemical corporation executives are in jail? Well, in the 1980s, the C.I.A. was asked to do a study on chemical exports to Latin America, and what they estimated was that more than 90 percent of them are not being used for industrial production at all — and if you look at the kinds of chemicals they are, it’s obvious that what they’re really being used for is drug production. Okay, how many chemical corporation executives are in jail in the United States? Again, none — because social policy is not directed against the rich, it’s directed against the poor.

Alex Ross on Mozart, and on the avant-garde

From The Storm of Style:

What Mozart might have done next [if he hadn’t died young] is anyone’s guess. The pieces that emerged from the suddenly productive year 1791 — The Magic Flute, the ultimate Leopoldian synthesis of high and low; La Clemenza di Tito, a robust revival of the aging art of opera seria; the silken lyricism of the Clarinet Concerto; the Requiem, at once cerebral and raw — form a garden of forking paths. Mozart was still a young man, discovering what he could do. In the unimaginable alternate universe in which he lived to the age of seventy, an anniversary-year essay might have contained a sentence such as this: “Opera houses focus on the great works of Mozart’s maturity — The Tempest, Hamlet, the two-part Faust — but it would be a good thing if we occasionally heard that flawed yet lively work of his youth, Don Giovanni.”

And, on a totally different topic, from Listen to This:

Picture music as a map, and musical genres as continents—classical music as Europe, jazz as America, rock as Asia. Each genre has its distinct culture of playing and listening. Between the genres are the cold oceans of taste, which can be cruel to musicians who try to cross over. There are always brave souls willing to make the attempt: Aretha Franklin sings “Nessun dorma” at the Grammys; Paul McCartney writes a symphony; violinists perform on British TV in punk regalia or lingerie. Such exploits get the kind of giddy attention that used to greet early aeronautical feats like Charles Lindbergh’s solo flight and the maiden voyage of the Hindenburg. There is another route between genres.

It’s the avant-garde path—a kind of icy Northern Passage that you can traverse on foot. Practitioners of free jazz, underground rock, and avant-garde classical music are, in fact, closer to one another than they are to their less radical colleagues. Listeners, too, can make unexpected connections in this territory. As I discovered in my college years, it is easy to go from the orchestral hurly-burly of Xenakis and Penderecki to the free-jazz piano of Cecil Taylor and the dissonant rock of Sonic Youth. For lack of a better term, call it the art of noise.

“Noise” is a tricky word that quickly slides into the pejorative. Often, it’s the word we use to describe a new kind of music that we don’t understand. Variations on the put-down “That’s just noise” were heard at the premiere of Stravinsky’s Rite of Spring, during Dylan’s first tours with a band, and on street corners when kids started blasting rap. But “noise” can also accurately describe an acoustical phenomenon, and it needn’t be negative. Human ears are attracted to certain euphonious chords based on the overtone series; when musicians pile on too many extraneous tones, the ear “maxes out.” This is the reason that free jazz, experimental rock, and experimental classical music seem to be speaking the same language: from the perspective of the panicking ear, they are. It’s a question not of volume but of density. There is, however, pleasure to be had in the kind of harmonic density that shatters into noise. The pleasure comes in the control of chaos, in the movement back and forth across the border of what is comprehensible.

Academic music criticism of popular music

Academic writing on music is often pretty amusing and interesting, especially when it discusses popular artists.

For example, here is Brad Osborn on Radiohead, from a recent issue of Perspectives of New Music:

The British rock group Radiohead has carved out a unique place in the post-millennial rock milieu by tempering their highly experimental idiolect with structures more commonly heard in Top Forty rock styles. In what I describe as a Goldilocks principle, much of their music after OK Computer (1997) inhabits a space between banal convention and sheer experimentation — a dichotomy which I have elsewhere dubbed the ‘Spears–Stockhausen Continuum.’ In the timbral domain, the band often introduces sounds rather foreign to rock music such as the ondes Martenot and highly processed lead vocals within textures otherwise dominated by guitar, bass, and drums (e.g., ‘The National Anthem,’ 2000), and song forms that begin with paradigmatic verse–chorus structures often end with new material instead of a recapitulated chorus (e.g., ‘All I Need,’ 2007). In this article I will demonstrate a particular rhythmic manifestation of this Goldilocks principle known as Euclidean rhythms. Euclidean rhythms inhabit a space between two rhythmic extremes, namely binary metrical structures with regular beat divisions and irregular, unpredictable groupings at multiple levels of structure.

[Read more…]
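For readers who haven’t met them, the Euclidean rhythms Osborn invokes (Toussaint’s term) distribute k onsets as evenly as possible across n pulses, via the same successive-division structure as Euclid’s gcd algorithm. A minimal sketch using the floor/modulo formulation, which reproduces the standard patterns up to rotation:

```python
def euclidean_rhythm(k, n):
    # Place an onset wherever the running multiple of k/n crosses an integer;
    # this spreads k onsets as evenly as possible over n pulses.
    return [(i * k) % n < k for i in range(n)]

def show(k, n):
    return "".join("x" if hit else "." for hit in euclidean_rhythm(k, n))

print(show(3, 8))  # x..x..x. -- the tresillo
print(show(5, 8))  # x.x.xx.x -- a rotation of the cinquillo
```

A pattern like E(3,8) sits exactly in the space Osborn describes: too uneven to be a plain binary grid, too regular to be unpredictable.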

A few bites from Superforecasting

Wish I could get my hands on this:

Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources—from the New York Times to obscure blogs—that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. Thanks to Doug’s simple invention, he is sure to constantly encounter different perspectives.

[Read more…]
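Tetlock doesn’t describe Doug’s actual selection criteria, but the idea is easy to sketch. Here is one guess at how such a diversity-emphasizing picker could work; the schema, tags, and scoring rule are all my invention, not Doug’s:

```python
import random
from collections import Counter

# Hypothetical source records; the real database's schema isn't described in the book.
SOURCES = [
    {"name": "New York Times", "ideology": "center-left", "subject": "politics", "region": "US"},
    {"name": "The Economist", "ideology": "center-right", "subject": "economics", "region": "UK"},
    {"name": "Obscure ag-policy blog", "ideology": "libertarian", "subject": "agriculture", "region": "US"},
]

TAGS = ("ideology", "subject", "region")

def pick_next(sources, recently_read):
    # Count how often each tag value appeared in recent reading, score each
    # candidate by its overlap with those counts, and pick among the least
    # familiar candidates -- i.e., emphasize diversity.
    seen = Counter((t, s[t]) for s in recently_read for t in TAGS)
    def overlap(s):
        return sum(seen[(t, s[t])] for t in TAGS)
    least = min(map(overlap, sources))
    return random.choice([s for s in sources if overlap(s) == least])

history = [SOURCES[0], SOURCES[0]]          # been reading the NYT lately...
print(pick_next(SOURCES, history)["name"])  # ...so the picker steers elsewhere
```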

Effective altruism definitions

Everyone has their own, it seems:

…the basic tenet of Effective Altruism: leading an ethical life involves using a portion of personal assets and resources to effectively alleviate the consequences of extreme poverty.

From the latest Life You Can Save newsletter.