- Relative to last time I listened through Scaruffi’s rock history (>8 years ago IIRC), my tastes have evolved quite a lot. I notice I’m more quickly bored by most forms of pop, punk, and heavy metal than I used to be. The genre I now seem to most reliably enjoy is the experimental end of prog-rock (e.g. avant-prog, zeuhl). I also enjoy jazz-influenced rock a lot more this time, presumably in part because I listened through Scaruffi’s jazz history (and made this guide) a couple years ago.
- I am more convinced than ever that tons of great musical ideas, even just within the “rock” paradigm, have never been explored. I’m constantly noticing things like “Oh, you know what’d be awesome? If somebody mixed the rhythm section of A with the suite structure of B and the production approach of C.” And because my listen through rock history has been so thorough this time (including thousands of artists not included in Scaruffi’s history), I’m more confident than ever that those ideas simply have never been attempted. It’s been a similar experience to studying a wide variety of scientific fields: the more topics and subtopics you study, the more you realize that the “surface area” between current scientific knowledge and what is currently unknown is even larger than you could have seen before.
- I still usually dislike “death growl” singing, traditional opera singing, and most rapping. I wish there were more “instrumental only” releases for these genres so I could have a shot at enjoying them.
- Spotify’s catalogue is very choppy. E.g. Spotify seems to have most of the albums from chapter 4.12 of Scaruffi’s history, and very few albums from chapter 4.13. (I assume this is also true for iTunes and other streaming providers.)
If you wanted to communicate as much as possible to someone about your worldview by asking them to read just five books, which five books would you choose?
My choices are below. If you post your answer to this question to Twitter, please use the hashtag #WorldviewIn5Books (like I did), so everyone posting their list can find each other.
1. Eliezer Yudkowsky, Rationality: From AI to Zombies
A singular introduction to critical thinking, rationality, and naturalistic philosophy. Both more advanced and more practically useful than any comparable guide I’ve encountered.
2. Sean Carroll, The Big Picture
If Yudkowsky’s book is “how to think 101,” then Carroll’s book is “what to think 101,” i.e. an introduction to what exists and how it works, according to standard scientific naturalism.
3. William MacAskill, Doing Good Better
My current favorite “how to do good 101” book, covering important practical considerations such as scale of impact, tractability, neglectedness, efficiency, cause neutrality, counterfactuals, and some strategies for thinking about expected value across diverse cause areas.
Importantly, it’s missing (a) a quick survey of the strongest arguments for and against utilitarianism, and (b) much discussion of near-term vs. animal-inclusive vs. long-term views and their implications (when paired with lots of empirical facts). But those topics are understandably beyond the book’s scope, and in any case there aren’t yet any books with good coverage of (a) and (b), in my opinion.1
4. Steven Pinker, Enlightenment Now
Almost everything has gotten dramatically better for humans over the past few centuries, likely substantially due to the spread and application of reason, science, and humanism.2
5. Toby Ord, forthcoming book about the importance of the long-term future
Yes, listing a future book is cheating, but I’m doing it anyway. The importance of the long-term future plays a big role in my current worldview, but there isn’t yet a book that captures my views on the topic well, and from my correspondence with Toby so far, I suspect his forthcoming book will finally do the topic justice. While you’re waiting for it to be released, you can get a preview via this podcast interview with Toby.
A few notes about my choices
- These aren’t my favorite books, nor the books that most influenced me historically. Rather, these are the books that best express key aspects of my worldview. In other words, they are the books I’d most want someone else to read first if we were about to have a long and detailed debate about something complicated, so they’d have some sense of “where I’m coming from.”
- Obviously, there is plenty in these books that I disagree with.
- I didn’t include any giant college textbooks or encyclopedias; that’d be cheating.
- I wish there were a book that summarized many of my key political views, but I doubt any such book exists.
- Economic thinking also plays a big role in my worldview, but I’ve not yet found a book that I think does a good job of integrating economic theory with careful, skeptical discussions of the most relevant empirical data (which often come from fields outside economics, and often differ from the predictions of economic models) across a decent range of the most important questions in economics.3
- These books are all quite recent. Older books suffer from their lack of access to recent scientific and philosophical progress, for example (a) the last several decades of the cognitive science of human reasoning, (b) the latest estimates of the effectiveness of various interventions to save and improve people’s lives, (c) the latest historical and regional estimates of various aspects of human well-being and their correlates, and (d) recent arguments about moral uncertainty and what to do about it.
As always, these are my views and not my employer’s.
- On utilitarianism, there are of course books such as Utilitarianism: A Very Short Introduction, Utilitarianism: For and Against, The Cambridge Companion to Utilitarianism, Practical Ethics, and Moral Tribes, but these books don’t much discuss what I consider to be the strongest arguments for utilitarianism, in particular some points related to what we’d value if we knew more and thought longer and some other arguments discussed briefly in this interview (starting at “What are the arguments for classical utilitarianism?” in the transcript). [↩]
- Pinker’s chapter on existential risk is the one I most disagree with. [↩]
- Example books that do this fairly well on particular narrow topics include Roodman’s Due Diligence and Caplan’s The Case Against Education. [↩]
Many people these days talk about an impending “fourth industrial revolution” led by AI, the internet of things, 3D printing, quantum computing, and more. The first three revolutions are supposed to be:
- 1st industrial revolution (~1800-1870): the world industrializes for the first time via steam, textiles, etc.
- 2nd industrial revolution (1870-1914): continued huge growth via steel, oil, other things, and especially electricity.
- 3rd industrial revolution (1980-today): personal computers, internet, etc.
I think this is a misleading framing for the last few centuries, though, because one of these things is not remotely like the others. As far as I can tell, the major curves of human well-being and empowerment bent exactly once in recorded history, during the “1st” industrial revolution:
(And yes, there’s still a sharp jump around 1800-1870 if you chart this on a log scale.)
The “2nd” and “3rd” industrial revolutions, if they are coherent notions at all, merely continued the new civilizational trajectory created by the “1st” industrial revolution.
I think this is important for thinking about how big certain future developments might be. For example, authors of papers at some top machine learning conference seem to think there’s a decent chance that “unaided machines [will be able to] accomplish every task better and more cheaply than human workers” sometime in the next few decades. There’s plenty of reason to doubt this aggregate forecast,1 but if that happens, I think the impact would likely be on the scale of the (original) industrial revolution, rather than that of e.g. the (so small it’s hard to measure?) impact of the “3rd” industrial revolution. But for some other technologies (e.g. the “internet of things”), it’s hard to tell a story for how they could possibly be as big a deal as the original industrial revolution.
- E.g. answers differ depending on how you ask the question, we should worry about response bias, and it’s not clear whether AI scientists or anyone else can make reliable long-term forecasts of this sort. [↩]
Note: As usual, these are my personal guesses and opinions, not those of my employer.
In How big a deal was the Industrial Revolution?, I looked for measures (or proxy measures) of human well-being / empowerment for which we have “decent” scholarly estimates of the global average going back thousands of years. For reasons elaborated at some length in the full report, I ended up going with:
- Physical health, as measured by life expectancy at birth.
- Economic well-being, as measured by GDP per capita (PPP) and percent of people living in extreme poverty.
- Energy capture, in kilocalories per person per day.
- Technological empowerment, as measured by war-making capacity.
- Political freedom to live the kind of life one wants to live, as measured by percent of people living in a democracy.
(I also especially wanted measures of subjective well-being and social well-being, and also of political freedom as measured by global rates of slavery, but these data aren’t available; see the report.)
The HYDE project provides the most comprehensive and up-to-date synthesis of historical, global population estimates I know of. To make these estimates slightly easier to use, I created a spreadsheet of the baseline scenario population data from the most recent version, HYDE 3.2.
For explanations, see the spreadsheet and Klein Goldewijk et al. (forthcoming).
In various places on my old atheism blog, I share advice for religious believers who are struggling with their faith, or who have recently deconverted, and who are feeling a bit lost, worried about nihilism without religion, and so on.
Here is my “FAQ for the sort of person who usually contacts me about how they’re struggling with their faith, or recently deconverted.”
Now that I’m losing my faith, I’m worried that nothing really matters, and that’s depressing.
I remember that feeling. I was pretty anxious and depressed when I started to realize I didn’t have good reasons for believing the doctrines of the religion I’d been raised in. But as time passed, things got better, and I emotionally adjusted to my “new normal,” in a way that, beforehand, I hadn’t thought possible.
I’ve collected some recommended reading on these topics here; see also the more recent The Big Picture. It’s up to you to decide what your goals and purposes are, but I think there are plenty of purposes worth getting excited about and invested in. In my case that’s effective altruism, but that’s a personal choice.
But really, my primary piece of advice is to just let more time pass, and spend time socially with non-religious people. Your conscious, deliberative brain (“system 2”) might be able to rationally recognize that of course millions of non-religious people before you have managed to live lives of immense joy and purpose and so on, and therefore you clearly don’t need religion for that. But if you were raised religiously like I was, then it might take some time for your unconscious, intuitive, emotional brain (“system 1”) to also “believe” this. The more time you spend talking with non-religious people who are living fulfilling, purposeful lives, the more you’ll train your system 1 to see that it’s obvious that meaning and purpose are possible without any gods — and getting your system 1 to “change its mind” is probably what matters more.
Where I live in the San Francisco Bay Area, it seems that most people I meet are excitedly trying to “make the world a better place” in some way (as parodied on the show Silicon Valley), and virtually none of them are religious. Depending on where you live, it might not be quite so easy to find non-religious people to hang out with. You could google for atheist or agnostic meetups in your area, or at least in the nearest large city. You could also try attending a UU church, where most people seem to be “spiritual” but not “religious” in the traditional sense.
My spouse and/or kids are religious, and my loss of faith is going to be super hard on them.
Yeah, that’s a tougher situation. I don’t know anything about that. Fortunately there’s a recent book entirely about that subject; I hope it helps!
Thanks, I’ll try those things. But I think I need more help.
I would try normal psychotherapy if you can afford it. Or maybe better, try Tom Clark, who specializes in “worldview counseling.”
Other Classical Musics argues that there are at least 15 musical traditions around the world worthy of the title “classical music”:
According to our rule-of-thumb, a classical music will have evolved… where a wealthy class of connoisseurs has stimulated its creation by a quasi-priesthood of professionals; it will have enjoyed high social esteem. It will also have had the time and space to develop rules of composition and performance, and to allow the evolution of a canon of works, or forms… our definition does imply acceptance of a ‘classical/folk-popular’ divide. That distinction is made on the assumption that these categories simply occupy opposite ends of a spectrum, because almost all classical music has vernacular roots, and periodically renews itself from them…
In one of the earliest known [Western] definitions, classique is translated as ‘classical, formall, orderlie, in due or fit ranke; also, approved, authenticall, chiefe, principall’. The implication there was: authority, formal discipline, models of excellence. A century later ‘classical’ came to stand also for a canon of works in performance. Yet almost every non-Western culture has its own concept of ‘classical’ and many employ criteria similar to the European ones, though usually with the additional function of symbolizing national culture…
By definition, the conditions required for the evolution of a classical music don’t exist in newly-formed societies: hence the absence of a representative tradition from South America.
I don’t understand the book’s criteria. E.g. jazz is included despite not having been created by “a quasi-priesthood of professionals” funded by “a wealthy class of connoisseurs,” and despite having been invented relatively recently, in the early 20th century.
In The Age of Em, Robin Hanson is pretty optimistic about our ability to forecast the long-term future:
Some say that there is little point in trying to foresee the non-immediate future. But in fact there have been many successful forecasts of this sort.
In the rest of this section, Hanson cites eight examples of forecasting success.1 Two of his examples of “success” are forecasts of technologies that haven’t arrived yet: atomically precise manufacturing and advanced starships. Another of his examples is The Year 2000:
A particularly accurate book in predicting the future was The Year 2000, a 1967 book by Herman Kahn and Anthony Wiener (Kahn and Wiener 1967). It accurately predicted population, was 80% correct for computer and communication technology, and 50% correct for other technology (Albright 2002).
As it happens, when I first read this paragraph I had already begun to evaluate the technology forecasts from The Year 2000 for the Open Philanthropy Project, relying on the same source Hanson did for determining which forecasts came true and which did not (Albright 2002).
However, my assessment of Kahn & Wiener’s forecasting performance is much less rosy than Hanson’s. For details, see here.
- The Nagy et al. paper, the Charbonneau et al. paper, Albright’s evaluation of forecasts from The Year 2000, the forecasts of John Watkins, the space travel forecasts of Konstantin Tsiolkovsky, the nanotechnology forecasts of K. Eric Drexler, the book on starship designs edited by Benford and Benford, and the 1999 business book by Shapiro and Varian. See Hanson’s book for sources. [↩]
In an interesting short paper from 1993, Bernard Baars and Katharine McGovern list several philosophical “habits of mind” and contrast them with typical scientific habits of mind. The philosophical habits of mind they list, somewhat paraphrased, are:
- A great preference for problems that have survived centuries of debate, largely intact.
- A tendency to set the most demanding criteria for success, rather than more achievable ones.
- Frequent appeal to thought experiments (rather than non-intuitional evidence) to carry the major burden of argument.
- More focus on rhetorical brilliance than testability.
- A delight in paradoxes and “impossibility proofs.”
- Shifting, slippery definitions.
- A tendency to legislate the empirical sciences.
I partially agree with this list, and would add several items of my own.
Obviously this list does not describe all of philosophy. Also, I think (English-language) philosophy as a whole has become more scientific since 1993.
Philosopher Daniel Dennett advocates following “Rapoport’s Rules” when writing critical commentary. He summarizes the first of Rapoport’s Rules this way:
You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way.”
If you’ve read many scientific and philosophical debates, you’re aware that this rule is almost never followed. And in many cases it may be inappropriate, or not worth the cost, to follow it. But for someone like me, who spends a lot of time trying to quickly form initial impressions about the state of various scientific or philosophical debates, it can be incredibly valuable and time-saving to find a writer who follows Rapoport’s First Rule, even if I end up disagreeing with that writer’s conclusions.
One writer who, in my opinion, seems to follow Rapoport’s First Rule unusually well is Dennett’s “arch-nemesis” on the topic of consciousness, the philosopher David Chalmers. Amazingly, even Dennett seems to think that Chalmers embodies Rapoport’s 1st Rule. Dennett writes:
Chalmers manifestly understands the arguments [for and against type-A materialism, which is Dennett’s view]; he has put them as well and as carefully as anybody ever has… he has presented excellent versions of [the arguments for type-A materialism] himself, and failed to convince himself. I do not mind conceding that I could not have done as good a job, let alone a better job, of marshaling the grounds for type-A materialism. So why does he cling like a limpet to his property dualism?
As far as I can tell, Dennett is saying “Thanks, Chalmers, I wish I’d thought of putting the arguments for my view that way.”1
And because of Chalmers’ clarity and fairness, I have found Chalmers’ writings on consciousness to be more efficiently informative than Dennett’s, even though my own current best-guesses about the nature of consciousness are much closer to Dennett’s than to Chalmers’.
Contrast this with what I find to be more typical in the consciousness literature (and in many other literatures), which is for an article’s author(s) to present as many arguments as they can think of for their own view, and downplay or mischaracterize or not-even-mention the arguments against their view.
I’ll describe one example, without naming names. Recently I read two recent papers, each of which had a section discussing the evidence for or against the “cortex-required view,” which is the view that a cortex is required for phenomenal consciousness. (I’ll abbreviate it as “CRV.”)
The pro-CRV paper is written as though it’s a closed case that a cortex is required for consciousness, and it doesn’t cite any of the literature suggesting the opposite. Meanwhile, the anti-CRV paper is written as though it’s a closed case that a cortex isn’t required for consciousness, and it doesn’t cite any literature suggesting that it is required. Their differing passages on CRV cite literally zero of the same sources. Each paper pretends as though the entire body of literature cited by the other paper just doesn’t exist.
If you happened to read only one of these papers, you’d come away with a very skewed view of the likelihood of the cortex-required view. You might realize how skewed that view is later, but if you’re reading only a few papers on the topic, so that you can form an impression quickly, you might not.
So here’s one tip for digging through some literature quickly: try to find out which expert(s) on that topic, if any, seem to follow Rapoport’s First Rule — even if you don’t find their conclusions compelling.
Tim Minchin once said “Every mystery ever solved has turned out to be not magic.” One thing I want to understand better is “How, exactly, has that happened in history? In particular, how have our naive pre-scientific concepts evolved in response to, or been eliminated by, scientific progress?”
Examples: What is the detailed story of how “water” came to be identified with H2O? How did our concept of “heat” evolve over time, including e.g. when we split it off from our concept of “temperature”? What is the detailed story of how “life” came to be identified with a large set of interacting processes, with unclear edge cases (such as viruses) decided only by convention? What is the detailed story of how “soul” was eliminated from our scientific ontology, rather than being remapped onto something “conceptually close” to our earlier conception of it but which actually exists?
I wish there were a handbook of detailed case studies in scientific reductionism from a variety of scientific disciplines, but I haven’t found any such book yet. The documents I’ve found that are closest to what I want are perhaps:
- Thagard’s “Conceptual Change in the History of Science: Life, Mind, and Disease” and his earlier Conceptual Revolutions
- Grisdale’s Conceptual Change: Gods, Elements, and Water
- Laureys’ “Death, unconsciousness and the brain”
- Chang’s Inventing Temperature, Is Water H2O?, and some of his papers
- Chalmers’ The Scientist’s Atom and the Philosopher’s Stone
Some semi-detailed case studies also show up in Kuhn, Feyerabend, etc. but they are typically buried in a mass of more theoretical discussion. I’d prefer to read histories that focus on the historical developments.
Got any such case studies, or collections of case studies, to recommend?
Sean Carroll’s The Big Picture is a pretty decent “worldview naturalism 101” book.
In case there’s a 2nd edition in the future, and in case Carroll cares about the opinions of a professional dilettante (aka a generalist research analyst without even a bachelor’s degree), here are my requests for the 2nd edition:
- I think Carroll is too quick to say which physicalist approach to phenomenal consciousness is correct, and doesn’t present alternate approaches as compellingly as he could (before explaining why he rejects them). (See especially chs. 41-42.)
- In the chapter on death, I wish Carroll had acknowledged that neither physics nor naturalism requires that we live lives as short as we now do, and that there are speculative future technological capabilities that might allow future humans (or perhaps some now living) to live very long lives (albeit not infinitely long lives).
- I wish Carroll had mentioned Tegmark levels, maybe in chs. 25 or 36.
In my 2013 article on strong AI forecasting, I made several suggestions for how to do better at forecasting strong AI, including this suggestion quoted from Phil Tetlock, arguably the leading forecasting researcher in the world:
Signposting the future: Thinking through specific scenarios can be useful if those scenarios “come with clear diagnostic signposts that policymakers can use to gauge whether they are moving toward or away from one scenario or another… Falsifiable hypotheses bring high-flying scenario abstractions back to Earth.”
Tetlock hadn’t mentioned strong AI at the time, but now it turns out he wants suggestions for strong AI signposts that could be forecast on GJOpen, the forecasting tournament platform.
@PTetlock Any thoughts on the AlphaGo victory?
— Brandon Wilson (@brandonwilson) March 20, 2016
important: it was one of our signpost indicators. It nudges probability we are on a strong AI scenario trajectory https://t.co/XuAcpjlWtm
— Philip E. Tetlock (@PTetlock) March 20, 2016
@PTetlock What other signpost indicators are you watching?
— Alexander Berger (@albrgr) March 21, 2016
— Philip E. Tetlock (@PTetlock) March 21, 2016
Specifying crisply formulated signpost questions is not easy. If you come up with some candidates, consider posting them in the comments below. After a while, I will collect them all together and send them to Tetlock. (I figure that’s probably better than a bunch of different people sending Tetlock individual emails with overlapping suggestions.)
Tetlock’s framework for thinking about such signposts, which he calls “Bayesian question clustering,” is described in Superforecasting:
In the spring of 2013 I met with Paul Saffo, a Silicon Valley futurist and scenario consultant. Another unnerving crisis was brewing on the Korean peninsula, so when I sketched the forecasting tournament for Saffo, I mentioned a question IARPA had asked: Will North Korea “attempt to launch a multistage rocket between 7 January 2013 and 1 September 2013?” Saffo thought it was trivial. A few colonels in the Pentagon might be interested, he said, but it’s not the question most people would ask. “The more fundamental question is ‘How does this all turn out?’ ” he said. “That’s a much more challenging question.”
So we confront a dilemma. What matters is the big question, but the big question can’t be scored. The little question doesn’t matter but it can be scored, so the IARPA tournament went with it. You could say we were so hell-bent on looking scientific that we counted what doesn’t count.
That is unfair. The questions in the tournament had been screened by experts to be both difficult and relevant to active problems on the desks of intelligence analysts. But it is fair to say these questions are more narrowly focused than the big questions we would all love to answer, like “How does this all turn out?” Do we really have to choose between posing big and important questions that can’t be scored or small and less important questions that can be? That’s unsatisfying. But there is a way out of the box.
Implicit within Paul Saffo’s “How does this all turn out?” question were the recent events that had worsened the conflict on the Korean peninsula. North Korea launched a rocket, in violation of a UN Security Council resolution. It conducted a new nuclear test. It renounced the 1953 armistice with South Korea. It launched a cyber attack on South Korea, severed the hotline between the two governments, and threatened a nuclear attack on the United States. Seen that way, it’s obvious that the big question is composed of many small questions. One is “Will North Korea test a rocket?” If it does, it will escalate the conflict a little. If it doesn’t, it could cool things down a little. That one tiny question doesn’t nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question. Will North Korea conduct another nuclear test? Will it rebuff diplomatic talks on its nuclear program? Will it fire artillery at South Korea? Will a North Korean ship fire on a South Korean ship? The answers are cumulative. The more yeses, the likelier the answer to the big question is “This is going to end badly.”
I call this Bayesian question clustering because of its family resemblance to the Bayesian updating discussed in chapter 7. Another way to think of it is to imagine a painter using the technique called pointillism. It consists of dabbing tiny dots on the canvas, nothing more. Each dot alone adds little. But as the dots collect, patterns emerge. With enough dots, an artist can produce anything from a vivid portrait to a sweeping landscape.
There were question clusters in the IARPA tournament, but they arose more as a consequence of events than a diagnostic strategy. In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.
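The pointillism analogy can be made concrete with a minimal sketch: treat each small, scorable signpost question as evidence that updates the probability of the big, unscorable question via a likelihood-ratio (odds-form) Bayes update. All probabilities and likelihood ratios below are invented for illustration, not taken from Tetlock:

```python
def update(prob, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prob / (1 - prob)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Big question: "Will the Korean conflict end badly?"
p = 0.20  # prior probability (invented)

# Each tuple: (signpost question, observed answer, likelihood ratio of a
# "yes" -- how much more likely a "yes" is if things end badly than if not).
signposts = [
    ("Will North Korea test a rocket?",        True,  3.0),
    ("Will it rebuff diplomatic talks?",       True,  2.0),
    ("Will it fire artillery at South Korea?", False, 4.0),
]

for question, answer, lr in signposts:
    # A "yes" multiplies the odds by lr; a "no" divides them by lr
    # (a crude symmetry assumption, just to keep the sketch short).
    p = update(p, lr if answer else 1.0 / lr)
    print(f"{question} -> {answer}: p = {p:.2f}")
```

Each signpost is a single “dot”: no one answer settles the big question, but the updates accumulate, which is the sense in which the more yeses, the likelier the answer is “this is going to end badly.”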
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)
How much time usually elapses between when a technical problem is posed and when it is solved? How much effort is usually required? Which variables most predict how much time and effort will be required to solve a technical problem?
The main paper I’ve seen on this is Hisano & Sornette (2013).1 Their method was to start with Wikipedia’s List of conjectures and then track down the year each conjecture was first stated and the year it was solved (or, whether it remains unsolved). They were unable to determine exact-year values for 16 conjectures, leaving them with a dataset of 144 conjectures, of which 60 were solved as of January 2012, with 84 still unsolved. The time between first conjecture statement and first solution is called “time to proof.”
For the purposes of finding possible data-generating models that fit the data described above, they assume the average productivity per mathematician is constant throughout their career (they didn’t try to collect more specific data), and they assume the number of active mathematicians tracks with total human population — i.e., roughly exponential growth over the time period covered by these conjectures and proofs (because again, they didn’t try to collect more specific data).
I didn’t try to understand in detail how their model works or how reasonable it is, but as far as I understand it, here’s what they found:
- Since 1850, the number of new conjectures (that ended up being listed on Wikipedia) has tripled every 55 years. This is close to the average growth rate of total human population over the same time period.
- Given the incompleteness of the data and the (assumed) approximate exponential growth of the mathematician population, they can’t say anything confident about the data-generating model, and therefore basically fall back on Occam: “we could not reject the simplest model of an exponential rate of conjecture proof with a rate of 0.01/year for the dataset (translating into an average waiting time to proof of 100 years).”
- They expect the Wikipedia dataset severely undersamples “the many conjectures whose time-to-proof is in the range of years to a few decades.”
- They use their model to answer the question that prompted the paper, which was about the probability that “P vs. NP” will be solved by 2024. Their model says there’s a 41.3% chance of that, which intuitively seems high to me.
- They make some obvious caveats to all this: (1) the content of the conjecture matters for how many mathematician-hours are devoted to solving it, and how quickly they are devoted; (2) to at least a small degree, the notion of “proof” has shifted over time, e.g. the first proof of the four-color theorem still has not been checked from start to finish by humans, and is mostly just assumed to be correct; (3) some famous conjectures might be undecidable, leaving some probability mass for time-to-proof at infinity.
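For concreteness, the fitted exponential model is easy to sketch: with a constant proof rate of 0.01/year, the probability that a conjecture is proved within t years of being posed is 1 − e^(−0.01t), and the mean waiting time is 100 years. The snippet below is a hedged illustration of that model; I haven’t verified that this is exactly how the paper conditions its 41.3% figure for “P vs. NP”:

```python
import math

RATE = 0.01  # fitted proof rate per year (mean waiting time = 1/RATE = 100 years)

def p_solved_within(years, rate=RATE):
    """Probability of proof within `years` of the conjecture being posed,
    under a constant-rate exponential model."""
    return 1 - math.exp(-rate * years)

# Probability of proof within the mean waiting time of 100 years:
print(round(p_solved_within(100), 3))  # ~0.632

# "P vs. NP" was posed around 1971, so "solved by 2024" is roughly
# "proved within 53 years of being posed":
print(round(p_solved_within(53), 3))   # ~0.411, near the paper's 41.3%

# The paper's other regularity: new conjectures tripling every 55 years
# implies an annual growth rate of about 2%:
print(round(3 ** (1 / 55) - 1, 3))
```

Note that a pure exponential is memoryless, so how one conditions on the conjecture being unsolved so far matters; the ~0.41 above is just one simple reading that lands near the paper’s number.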
What can we conclude from this?
Not much. Sometimes crisply-posed technical problems are solved quickly, sometimes they take many years or decades to solve, sometimes they take more than a century to solve, and sometimes they are never solved, even with substantial effort being targeted at the problem.2
And unfortunately, it looks like we can’t say much more than that from this study alone. As they say, their observed distribution of time to proof must be considered with major caveats. Personally, I would emphasize the likely severe undersampling of conjectures with short times-to-proof, the fact that they didn’t try to weight data points by how important the conjectures were perceived to be or how many resources went into solving them (because doing so would be very hard!), and the fact that they didn’t have enough data points (especially given the non-stationary number of mathematicians) to confirm or reject ~any of the intuitively / a priori plausible data-generating models.
Are there other good articles3 on “time to proof” or “time to solution” for relatively well-specified research problems, in mathematics or other fields? If you know of any, please let me know!
- Slightly different arxiv version here. [↩]
- This “substantial effort” claim isn’t in the paper, but I’m pretty sure it’s true for many of the conjectures, including many of those with time to proof of >10 years. [↩]
- Besides the few that Hisano & Sornette cite, which I think are basically superseded by Hisano & Sornette. [↩]
On Facebook, AI scientist Yann LeCun recently posted the following:
I have said publicly on several occasions that the purported AI Apocalypse that some people seem to be worried about is extremely unlikely to happen, and if there were any risk of it happening, it wouldn’t be for another few decades in the future. Making robots that “take over the world”, Terminator style, even if we had the technology, would require a conjunction of many stupid engineering mistakes and ridiculously bad design, combined with zero regard for safety. Sort of like building a car, not just without safety belts, but also a 1000 HP engine that you can’t turn off and no brakes.
But since some people seem to be worried about it, here is an idea to reassure them: We are, even today, pretty good at building machines that have super-human intelligence for very narrow domains. You can buy a $30 toy that will beat you at chess. We have systems that can recognize obscure species of plants or breeds of dogs, systems that can answer Jeopardy questions and play Go better than most humans, we can build systems that can recognize a face among millions, and your car will soon drive itself better than you can drive it. What we don’t know how to build is an artificial general intelligence (AGI). To take over the world, you would need an AGI that was specifically designed to be malevolent and unstoppable. In the unlikely event that someone builds such a malevolent AGI, what we merely need to do is build a “Narrow” AI (a specialized AI) whose only expertise and purpose is to destroy the nasty AGI. It will be much better at this than the AGI will be at defending itself against it, assuming they both have access to the same computational resources. The narrow AI will devote all its power to this one goal, while the evil AGI will have to spend some of its resources on taking over the world, or whatever it is that evil AGIs are supposed to do. Checkmate.
Since LeCun has stated his skepticism about potential risks from advanced artificial intelligence in the past, I assume his “not being really serious” is meant to refer to his proposed narrow AI vs. AGI “solution,” not to his comments about risks from AGI. So, I’ll reply to his comments on risks from AGI and ignore his “not being really serious” comments about narrow AI vs. AGI.
First, LeCun says:
if there were any risk of [an “AI apocalypse”], it wouldn’t be for another few decades in the future
Yes, that’s probably right, and that’s what people like myself (former Executive Director of MIRI) and Nick Bostrom (author of Superintelligence, director of FHI) have been saying all along, as I explained here. But LeCun phrases this as though he’s disagreeing with someone.
Second, LeCun writes as though the thing people are concerned about is a malevolent AGI, even though I don’t know anyone who is concerned about malevolent AI. The concern expressed in Superintelligence and elsewhere isn’t about AI malevolence, it’s about convergent instrumental goals that are incidentally harmful to human society. Or as AI scientist Stuart Russell put it:
A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer’s apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker – especially one connected through the Internet to all the world’s information and billions of screens and most of our infrastructure – can have an irreversible impact on humanity.
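Russell’s point can be made concrete with a toy sketch of my own (this is a hypothetical illustration, not an example from Russell): the objective depends only on one variable, but the search runs over pairs, so the unconstrained variable ends up at whatever value the search procedure happens to land on, including an extreme of its range.

```python
import itertools

# Toy illustration (mine, not Russell's): the objective depends only on x,
# but we optimize over (x, y) pairs. The optimizer is free to set the
# unconstrained variable y however it likes -- here, iteration order
# happens to pin it to an extreme of its range.
def argmax(objective, candidates):
    return max(candidates, key=objective)

grid = list(itertools.product(range(-10, 11), repeat=2))  # all (x, y) pairs
best = argmax(lambda point: point[0], grid)  # objective ignores y entirely

print(best)  # (10, -10): x is maximized; y lands at an extreme by accident
```

If y were “actually something we care about,” nothing in the optimization protects it: the solution is “highly undesirable” not out of malevolence but because the objective never mentioned y at all.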
(Note that although I work as a GiveWell research analyst, my focus at GiveWell is not AI risks, and my views on this topic are not necessarily GiveWell’s views.)
I listen to music >10 hrs per day, and I love the convenience of wireless earbuds. They are tiny and portable, and I can do all kinds of stuff — work on something with my hands, take on/off my jacket or my messenger bag, etc. — without getting tangled up in a cord.
So which wireless earbuds are the best? For this kind of thing I always turn first to The Wirecutter, which publishes detailed investigations of consumer products, like Consumer Reports but free and often more up-to-date.
I bought their recommended wireless earbuds a while back, when their recommendation was the Jaybird Bluebuds X. After several months I lost that pair and bought the new Wirecutter recommendation, the JLab Epic Bluetooth. Those were terrible so I returned them and bought the now-available Jaybird X2, which has been awesome so far.
So long as a pair of wireless earbuds has decent sound quality and >6 hrs of battery life, the most important thing to me is a low frequency of audio cutting.
See, Bluetooth is a very weak kind of signal. It can’t really pass through your body, for example. That’s why it uses so little battery power, which is important for tiny things like wireless earbuds. As a result, I got fairly frequent audio cutting when trying to play music from my phone in my pants pocket to my Jaybird Bluebuds X. After some experimentation, I learned that audio cutting was less frequent if my phone was in my rear pocket, on the same side of my body as the earbuds’ Bluetooth receiver. But it still cut out maybe an average of 200 times an hour (mostly concentrated in particularly frustrating 10-minute periods with lots of cutting).
When I lost that pair and got the JLab Epic Bluetooth, I hoped that with the newer pair they’d have figured out some extra tricks to reduce audio cutting. Instead, the audio cutting was terrible. Even with my phone in the optimal pants pocket, there was usually near-constant audio cutting, maybe about 2000 times an hour on average. Moreover, when I used them while reclining in bed, I would get lots of audio cutting whenever my neck was pressed up against my pillow! So, pretty useless. I returned them to Amazon for a refund.
I replaced this pair with The Wirecutter’s 2nd choice, the Jaybird X2. So far these have been fantastic. In my first ~15 hours of using them I’ve gotten exactly two split-second audio cuts.
So if you want to make the leap to wireless earbuds, I recommend the Jaybird X2. Though if you don’t mind waiting, the Jaybird X3 and Jaybird Freedom are both coming out this spring, and they might be even better.
One final note: I got my last two pairs of wireless earbuds in white so that others can see I’m wearing them. With my original black Bluebuds X, people would sometimes talk at me for >30 seconds without realizing I couldn’t hear them because I had music in my ears.
As far as I can tell, MarginNote is the only iPhone app that lets you annotate & highlight both PDFs and epub files, and sync those annotations to your computer. And by “PDFs and epub files” I basically mean “all text files,” since Calibre and other apps can convert almost any text file into an epub (the exception being PDFs with tables and images). (The Kindle iPhone app can annotate text files, but can’t sync those annotations anywhere unless you bought the text directly from Amazon.)
This is important for people who like to read nonfiction “on the go,” like me — and plausibly some of my readers, so I figured I’d share my discovery.
The best plays and films have had great writing for a long time. The best TV shows have had great writing for about a decade now. But the writing in the best videogames is still cringe-inducingly awful. This is despite the fact that videogame blockbusters regularly have production budgets of $50M or more. When will videogames hit their “golden age” (at least, for writing)?
I think I’ve finally realized that I have a favorite kind of music, though unfortunately it doesn’t have a genre name, and it cuts across many major musical traditions — Western classical, jazz, rock, electronica, and possibly others.1
I tend to love music that:2
- Is primarily tonal but uses dissonance for effective contrast. (The Beatles are too tonal; Arnold Schoenberg and Cecil Taylor are too atonal; Igor Stravinsky and Charles Mingus are just right.)
- Is obsessively composed, though potentially with substantial improvisation within the obsessively composed structure. (Coleman’s Free Jazz is too free. Amogh Symphony’s Vectorscan is innovative and complex but doesn’t sound like they tried very hard to get the compositional details right. The Rite of Spring and Chiastic Slide and even Karma are great.)
- Tries to be as emotionally affecting as possible, though this may include passages of contrastingly less-emotional music. (Anthony Braxton and Brian Ferneyhough are too cold and anti-emotional. Rich Woodson shifts around too quickly to ever build up much emotional “momentum.” Master of Puppets and Escalator Over the Hill and Tabula Rasa are great.)
- Is boredom-resistant by being fairly complex or by being long and subtly-evolving enough that I don’t get bored of it quickly. (The Beatles are too short and simple — yes, including their later work. The Soft Machine is satisfyingly complex and varied. The minimalists and Godspeed You! Black Emperor are often simple and repetitive, but their pieces are long enough and subtly-evolving enough that I don’t get bored of them.)
Property #2, I should mention, is pretty similar to Holden Karnofsky’s notion of “awe-inspiring” music. Via email, he explained:
One of the emotions I would like to experience is awe … A piece of music might be great because the artists got lucky and captured a moment, or because it’s just so insane that I can’t find anything else like it, or because I have an understanding that it was the first thing ever to do X, or because it just has that one weird sound that is so cool, but none of those make me go “Wow, this artist is awesome. I am in awe of them. I feel like the best parts of this are things they did on purpose, by thinking of them, by a combination of intelligence and sweat that makes me want to give them a high five. I really respect them for their achievement. I feel like if I had done this I would feel true pride that I had used the full extent of my abilities to do something that really required them.”
It’s no accident that most of the things that do this for me are “epic” in some way and usually took at least a solid year of someone’s life, if not 20 years, to create.
To illustrate further what I mean by each property, here’s how I would rate several musical works on each property:
| Work | Tonal w/ dissonance? | Obsessively composed? | Highly emotional? | Boredom-resistant? |
| --- | --- | --- | --- | --- |
| Mingus, The Black Saint and the Sinner Lady | Yes | Yes | Yes | Yes, complex |
| Stravinsky, The Rite of Spring | Yes | Yes | Yes | Yes, complex |
| The Soft Machine, Third | Yes | Yes | Yes | Yes, complex |
| Schulze, Irrlicht | Yes | I think so? | Yes | Yes, slowly-evolving |
| Adams, Harmonielehre | Yes | Yes | Yes | Yes, complex |
| The Beatles, Sgt. Pepper | Not enough dissonance | Yes | Yes | No |
| Coleman, Free Jazz | Yes | Not really | Sometimes | Yes, complex |
| Amogh Symphony, Vectorscan | Yes | Not really | Yes | Yes, complex |
| Stockhausen, Licht cycle | Too dissonant | Yes | Not often | Yes, complex |
| Autechre, Chiastic Slide | Yes | Yes | Yes | Yes, complex |
| Anthony Braxton, For Four Orchestras | Too dissonant | Yes | No | Yes, complex |
- I haven’t listened to enough non-Western classical or folk musics to know whether this theory of my favorite kind of music holds up across those styles. [↩]
- Note that I like and sometimes love lots of music that doesn’t fit one or more of these criteria (including e.g. Sgt. Pepper), but I think my absolute favorite pieces of music tend to have all these properties. [↩]
I realized recently that when I want to learn about a subject, I mentally group the available books into three categories.
I’ll call the first category “convincing.” This is the most useful kind of book for me to read on a topic, but for most topics, no such book exists. Many basic textbooks on the “hard” sciences (e.g. “settled” physics and chemistry) and the “formal” sciences (e.g. “settled” math, statistics, and computer science) count. In the softer sciences (including e.g. history), I know of very few books with the intellectual honesty and epistemic rigor to be convincing (to me) on their own. David Roodman’s book on microfinance, Due Diligence, is the only example that comes to mind as I write this.
Don’t get me wrong: I think we can learn a lot from studying softer sciences, but rarely is a single book on the softer sciences written in such a way as to be convincing to me, unless I know the topic well already.1
I think of my 2nd category as “raw data.” These books make a good case that the data they present were collected and presented in a fairly reasonable way, and I find it useful to know what the raw data are, but if and when the book attempts to persuade me of non-obvious causal hypotheses, I find the book illuminating but unconvincing (on its own). Some examples:
- The CIA World Factbook 2016
- The Better Angels of Our Nature (excluding the parts on hunter-gatherers)
- The Measure of Civilization
- Human Accomplishment
Finally, my 3rd category for nonfiction is “food for thought.” Besides being unconvincing about non-obvious causal inferences, these books also fail to make a good case that the data supporting their arguments were collected and presented in a reasonable way. So what I get from them is just some basic terminology, and some hypotheses and arguments and stories I didn’t know about before. This category includes the vast majority of all non-fiction, e.g.:
- Psychology Applied to Modern Life
- How the World Works
- Bowling Alone
- Understanding Power
- Guns, Germs, and Steel
- The Shadow World
My guess is that I’m more skeptical than most heavy readers of non-fiction, including most scientists. I’m sure I’ll blog more in the future about why.