- Which pieces of “philosophically interesting” science fiction do professional philosophers recommend?
- An index for John Danaher’s philosophical analyses of intelligence explosions and advanced robotics.
- Dewey, “Long-term strategies for ending existential risk from fast takeoff.”
- Hibbard, “Ethical artificial intelligence” (book draft, 165 pages, 34 figures).
- Johnson, How We Got to Now
- Bryson, At Home
- Kean, The Violinist’s Thumb
- Isaacson, The Innovators
- Munroe, What If?
- Schmidt & Rosenberg, How Google Works
- Harris, The Nurture Assumption (abridged version)
- Wu, The Master Switch
- Gawande, Being Mortal
- Buss & Meston, Why Women Have Sex
The Goal is the worst novel I’ve ever read, and the first long piece of fiction I’ve finished in the past 6 years. How is that possible?
The novel is actually an introduction to Goldratt’s theory of constraints, from operations research. The writing isn’t supposed to be good; it’s just supposed to drive home the principles of Goldratt’s theory clearly and efficiently. I think I was able to finish this one because the book wasn’t trying to do all the normal literary things that good novels try to do, and instead was clearly trying to teach me things, like a nonfiction book would. And because it was explained with a story, I’ll probably remember the core principles of The Goal better than I remember the key points of most nonfiction books I read.
The audiobook is especially amusing. Every character is played by a different voice actor, there are ambient background noises that fit the scene, and the musical cues are often hilarious, such as the vaguely romantic hold music that plays while the main character and his wife make up after a fight and speak corny romance dialogue.
Dweck’s Mindset — see this summary — was alternately decent and annoying. I do suspect there’s something to the growth/fixed mindset distinction, but Dweck downplays individual differences too much, glosses over conflicting studies, and waves suggestively in the direction of debunked blank slate hypotheses.
Pollan’s The Omnivore’s Dilemma is well-written but the arguments aren’t consistently compelling.
Harris’ 10% Happier was moderately enjoyable but doesn’t have much content.
- Google continues gobbling up AI talent.
- New-ish psychology study method: “preregistered adversarial collaboration.” From the abstract: “Prior to data collection, the [disagreeing researchers] reached consensus on an optimal research design, formulated their expectations, and agreed to submit the findings to an academic journal regardless of the outcome… [they also] set up a publicly available… agreement that detailed the proposed design and all foreseeable aspects of the data analysis.”
- I just noticed the WTF, Evolution? book is out.
I like high-quality written debates in which (1) one ‘top thinker’ on the subject makes an argument, (2) four or more other top thinkers on the subject reply, and (3) the first author then writes a final reply to his or her critics. I think of these as “symposium-style debates,” as contrasted with e.g. the Oxford-style debates seen in Economist Debates and elsewhere. (Is there another name for them?)
- Boston Review’s Forum
- Cato Unbound
- Behavioral and Brain Sciences
- Psychological Inquiry
- American Journal of Bioethics (almost: there’s no final reply in this case)
- Various journal special issues and academic edited volumes that serve as symposia for published books or invited papers/chapters
Know of other examples?
- Hsu, “Super-intelligent humans are coming.”
- Winikoff, “Assurance of agent systems: what role should formal verification play?”
- New Friendly AI open problem description: “Corrigibility.”
- Wrangham’s Catching Fire
- Brooks’ Business Adventures
- Wilson’s The Social Conquest of Earth
- Harris’ Waking Up
- Hvistendahl, Unnatural Selection
- Ridley, Genome
- Rosen, The Most Powerful Idea in the World
- Morell, Animal Wise
Pinker’s The Blank Slate was excellent, and is now perhaps the #1 book I want someone to have read before I will debate social justice issues with them. (Note that as with most cognitive science books that cover this much detail, a few of its findings are now out of date. E.g. at Blank Slate’s time of release, it looked like IGF2R was a major gene for IQ, but this result failed to replicate.)
Murray’s Coming Apart was interesting as usual. His arguments for the increasing cultural split between classes in America were fairly persuasive. I ignored the parts about his policy recommendations. In the middle of the book there’s an amusing quiz you should take to learn whether you are a bubble-living elitist. Sample questions: “Who is Jimmie Johnson?” and “What is Branson?” If you’re a bubble-living elitist, you probably have no idea.
I scored 35, nearest to Murray’s “typical” score for “a first-generation upper-middle-class person with middle-class parents,” which does in fact describe me best of Murray’s available score interpretations. I scored 29 points on the first 7 questions about my early life in rural Minnesota, and only 6 points on all the remaining questions, 4 of which came from watching lots of movies (including popular ones).
The Sense of Style, which I downloaded from Audible at 12:30am on September 30th (its release date) and finished before the end of the day, is clearly the best style manual now available, though unfortunately it is not also a complete guide to How to Write as Well as Steven Pinker.
- Statistics prof Olle Häggström collects his blog posts about Superintelligence.
- “Governing cognitive biases: case studies of the use of… behaviorally informed policy tools.”
- Haidt & Tetlock & company, “Political diversity will improve social psychological science” (forthcoming in BBS).
Peter McCluskey pointed me to a nice explanation by Brian Greene of an experiment that could theoretically distinguish the Copenhagen and Many Worlds interpretations of quantum mechanics. This is from The Hidden Reality, ch. 8, endnote 12:
Here is a concrete in-principle experiment for distinguishing the Copenhagen and Many Worlds approaches. An electron, like all other elementary particles, has a property known as spin. Somewhat as a top can spin about an axis, an electron can too, with one significant difference being that the rate of this spin—regardless of the direction of the axis—is always the same. It is an intrinsic property of the electron, like its mass or its electrical charge. The only variable is whether the spin is clockwise or counterclockwise about a given axis. If it is counterclockwise, we say the electron’s spin about that axis is up; if it is clockwise, we say the electron’s spin is down. Because of quantum mechanical uncertainty, if the electron’s spin about a given axis is definite—say, with 100 percent certainty its spin is up about the z-axis—then its spin about the x- or y-axis is uncertain: about the x-axis the spin would be 50 percent up and 50 percent down; and similarly for the y-axis.
Imagine, then, starting with an electron whose spin about the z-axis is 100 percent up and then measuring its spin about the x-axis. According to the Copenhagen approach, if you find spin-down, that means the probability wave for the electron’s spin has collapsed: the spin-up possibility has been erased from reality, leaving the sole spike at spin-down. In the Many Worlds approach, by contrast, both the spin-up and spin-down outcomes occur, so, in particular, the spin-up possibility survives fully intact.
To adjudicate between these two pictures, imagine the following. After you measure the electron’s spin about the x-axis, have someone fully reverse the physical evolution. (The fundamental equations of physics, including that of Schrödinger, are time-reversal invariant, which means, in particular, that, at least in principle, any evolution can be undone. See The Fabric of the Cosmos for an in-depth discussion of this point.) Such reversal would be applied to everything: the electron, the equipment, and anything else that’s part of the experiment. Now, if the Many Worlds approach is correct, a subsequent measurement of the electron’s spin about the z-axis should yield, with 100 percent certainty, the value with which we began: spin-up. However, if the Copenhagen approach is correct (by which I mean a mathematically coherent version of it, such as the Ghirardi-Rimini-Weber formulation), we would find a different answer. Copenhagen says that upon measurement of the electron’s spin about the x-axis, in which we found spin-down, the spin-up possibility was annihilated. It was wiped off reality’s ledger. And so, upon reversing the measurement we don’t get back to our starting point because we’ve permanently lost part of the probability wave. Upon subsequent measurement of the electron’s spin about the z-axis, then, there is not 100 percent certainty that we will get the same answer we started with. Instead, it turns out that there’s a 50 percent chance that we will and a 50 percent chance that we won’t. If you were to undertake this experiment repeatedly, and if the Copenhagen approach is correct, on average, half the time you would not recover the same answer you initially did for the electron’s spin about the z-axis. The challenge, of course, is in carrying out the full reversal of a physical evolution. But, in principle, this is an experiment that would provide insight into which of the two theories is correct.
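The 50/50 probabilities Greene cites follow from elementary linear algebra. As a sanity check, here is a minimal numpy sketch (my own illustration, not from the book) computing the overlaps between the z-basis and x-basis spin states that the two interpretations predict for the final z measurement:

```python
import numpy as np

# Spin states in the z basis (only the directions matter here).
z_up = np.array([1, 0], dtype=complex)                  # |z+>
x_down = np.array([1, -1], dtype=complex) / np.sqrt(2)  # |x->, eigenvector of sigma_x

# Starting from |z+>, the probability of finding spin-down about x:
p_x_down = abs(np.vdot(x_down, z_up)) ** 2  # 0.5

# Copenhagen (collapse): after finding x-down, the state is |x-> and the
# x-up branch is gone for good, so reversal can't restore it. The final
# z measurement then yields spin-up with probability:
p_z_up_copenhagen = abs(np.vdot(z_up, x_down)) ** 2  # 0.5

# Many Worlds: no branch is erased, so reversing the unitary evolution
# returns the full state to |z+>, and the final z measurement yields
# spin-up with certainty:
p_z_up_many_worlds = abs(np.vdot(z_up, z_up)) ** 2  # 1.0

print(p_x_down, p_z_up_copenhagen, p_z_up_many_worlds)
```

This only checks the quoted probabilities, of course; the hard part of the experiment is the full physical reversal, which the arithmetic takes for granted.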
I’m not a physicist, and I don’t know whether this account is correct. Does anyone dispute it?
Further references on the subject are at Wikipedia.
In any case, such an experiment seems far beyond our reach. But since I’m Bayesian rather than Popperian, I put substantially more probability mass on MWI than Copenhagen even in the absence of a definitive experiment. 😉
My apologies in advance to the computer science journalists I haven’t found yet, but…
Why is there so little good long-form computer science journalism? (Tech journalism doesn’t count.)
Several other sciences attract plenty of writing talent as well. Physics has Sean Carroll, Stephen Hawking, Brian Greene, Kip Thorne, Lawrence Krauss, Neil deGrasse Tyson, etc. Psychology has Steven Pinker, Richard Wiseman, Oliver Sacks, V.S. Ramachandran, etc. Medical science has Atul Gawande, Ben Goldacre, Siddhartha Mukherjee, etc.
Outside Scott Aaronson and Brian Hayes, I mostly see tech journalism, very brief CS news articles, mediocre CS writing, and occasional CS articles and books from good writers who cover a range of scientific disciplines, such as
- The Proof in the Quantum Pudding by Erica Klarreich
- Approximately Hard by Erica Klarreich
- The Future Fabric of Data Science by Jennifer Ouellette
- The Mathematical Shape of Things to Come by Jennifer Ouellette
- The last few chapters of The Code Book by Simon Singh
- Turing’s Cathedral by George Dyson
- Nine Algorithms That Changed the Future by John MacCormick
- The Golden Ticket by Lance Fortnow
Maybe CS is too mathematical to attract general readers? Too abstract? Too dry? Or simply not taught in high school like the other sciences? Or maybe there are problems on the supply side?
- The Center for Effective Altruism reports on outcomes from their 10+ meetings with UK policymakers so far.
- Pinker on Ivy League education (very good).
- A profile of Martine Rothblatt: “Futurist, pharma tycoon, satellite entrepreneur, philosopher. Martine Rothblatt, the highest-paid female executive in America, was born male. But that is far from the thing that defines her. Just ask her wife. Then ask the robot version of her wife.”
- Okay, good, so I won’t read the new Fukuyama books.
- AI evaluation: past, present, and future.
- Google just got serious about building a quantum computer. (Martinis, not D-Wave.)
- Chalmers’ guidelines for constructive debate and discussion. I’ve been wanting to write a post about the epistemic virtue of kindness, but this will do for now.
Lobbying and Policy Change by Baumgartner et al. is the best book on policy change I’ve read. Hat tip to Holden Karnofsky for recommending this and also Poor Economics, the best book on global poverty reduction I’ve read.
LaPC is perhaps the most data-intensive study of “Who wins in Washington and why?” ever conducted, and the data (and many follow-up studies) are available from the UNC project website here. One review summarized the study design like this:
To start, [the researchers] sample from a comprehensive list of House and Senate lobbying disclosure reports to identify a random universe of participants. After initial interviews with their sample population, the authors assemble a list of 98 issues on which each organizational representative had worked most recently [from 1999-2002, i.e. during two Presidents of opposite parties and two Congresses]. These range from patent extension to chiropractic coverage under Medicare, some very broad and some very specific. Interviewers endeavored to determine the relevant sides of each issue and identify its key players. Separate subsequent interviews were then arranged where possible with representatives from each side of the issue…
With this starting point, the researchers followed their sample of issues for several more years to track who got what they wanted and who didn’t.
Note that their issue sampling method favors issues in which Congress was involved, so “issues relating to the judiciary and that are solely agency-related may be undercounted.”
LaPC is a difficult book to summarize, but below is one attempt. Some findings were surprising, others were not.
- One of the best predictors of lobbying success is simply whether one is trying to preserve the status quo, and in fact the single most common lobbying goal is to preserve the status quo.
- Some issues had as many as 7 sides, but most had just two.
- Most lobbying is targeted at a small percentage of issues.
- Very few neutral decision-makers are involved. Where government officials are involved, they are almost always actively lobbying for one side or another. 40% of advocates in this study were government officials; only 60% were lobbyists.
- Which kinds of groups were represented by the lobbyists? 26% were citizen groups, 21% were trade/business associations, 14% were corporations, 11% were professional associations, 7% were coalitions specific to an issue, 6% were unions, and 6% were think tanks.
- The most common lobbying issues were, in descending order: health (21%), environment (13%), transportation (8%), science and technology (7%), finance and commerce (7%), defense (7%), foreign trade (6%), energy (5%), law, crime, and family policy (5%), and education (5%).
- When lobbying, it’s better to be wealthy than poor, but there’s only a weak link between resources and policy-change success.
- Policy change tends not to be incremental except in a few areas such as the budget. For most issues, a “building tension then sudden substantial change” model predicts best.
- There is substantial correlation between electoral change and policy change, and advocates have increasingly focused on electoral efforts.
If you’re interested in this area, the next book to read is probably Godwin et al.’s Lobbying and Policymaking, another decade-long study of policymaking that is largely framed as a reply to LaPC, and was recommended by Baumgartner.
- Max Tegmark and Eliezer Yudkowsky on AI goal retention / ontological crises.
- Superintelligence is out in the USA, and in audiobook and Kindle formats. MIRI is hosting an online reading group for the book.
- Nature on brilliant scientists who leave academia.
- FP provides an update on Honduras’ charter cities.