Music
Music I most enjoyed discovering this quarter:
- I Like to Sleep: Sleeping Beauty (2022)
- Quadratum [Unlucky Morpheus]: “The Dance of Eternity” (2021)
- Mason Bates: Piano Concerto (2022)
Here’s the main way I think about effective altruism, personally:
In other words, I’m pretty happy with the most canonical definition of effective altruism I know of, from MacAskill (2019), which defines effective altruism as:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
This notion of effective altruism doesn’t demand that you use all your resources to help others. It doesn’t even say that you should use your other-focused budget of resources to help others as much as possible. Instead, it merely describes an intellectual project (clause i) and a practical project (clause ii) that some people are excited about but most people aren’t.
Effective altruism is radically different from many other suggestions for what it looks like to do good or help others. True, the portion of resources devoted to helping others may not differ hugely (though it may differ some) between an effective altruist and a non-EA Christian or humanist or social justice activist, since the canonical notion of effective altruism doesn’t take a stance on what that portion should be. Instead, effective altruism differs from other approaches to helping others via one or more of its defining characteristics, namely its aspiration to be maximizing, impartial, welfarist, and evidence-based.
For example, I think it’s difficult for an effective altruist to conclude that the following popular ideas for how to do good or help others are plausible contenders for helping others as much as possible (in an impartial, welfarist, evidence-based way):
Of course, even assuming effective altruism’s relatively distinctive joint commitment to maximization, impartialism, welfarism, and evidence, there will still be a wide range of reasonable debates about which interventions help others as much as possible (in an impartial, welfarist, evidence-based way), just as there will always be a wide range of reasonable debates about any number of scientific questions (and that’s no objection to scientific epistemology).
Moreover, these points don’t just follow from the canonical definition of effective altruism; they are also observed in the practice of people who call themselves “effective altruists.” For example, EAs are somewhat distinctive in how they debate the question of how best to help others (the debates are generally premised on maximization, welfarism, impartialism, and careful interpretation of whatever evidence is available), and they are very distinctive with regard to which causes they end up devoting the most money and labor to. For example, according to this estimate, the top four EA causes in 2019 by funding allocated were:
Global health is a fairly popular cause among non-EAs, but farm animal welfare, (existential) biosecurity, and potential (existential) risks from AI are very idiosyncratic. Indeed, I suspect that EAs are responsible for ≥40% of all funding for each of farm animal welfare, potential existential risks from AI, and existential biosecurity.
Over the years, my colleagues and I have spoken to many machine learning researchers who, perhaps after some discussion and argument, claim to think there’s a moderate chance — a 5%, or 15%, or even a 40% chance — that AI systems will destroy human civilization in the next few decades. However, I often detect what Bryan Caplan has called a “missing mood”: a mood they would predictably exhibit if they really thought such a dire future was plausible, but which they don’t seem to exhibit. In many cases, the researcher who claims to think that medium-term existential catastrophe from AI is plausible doesn’t seem too upset or worried or sad about it, and doesn’t seem to be taking any specific actions as a result.
Not so with Elon Musk. Consider his reaction (here and here) when podcaster Joe Rogan asks about his AI doomsaying. Musk stares at the table, and takes a deep breath. He looks sad. Dejected. Fatalistic. Then he says:
I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. Nobody listened. Nobody listened. Nobody listened… Maybe [one day] they will [listen]. So far they haven’t.
…Normally the way regulations work is very slow… Usually it’ll be something, some new technology, it will cause damage or death, there will be an outcry, there will be an investigation, years will pass, there will be some kind of insight committee, there will be rulemaking, then there will be oversight, eventually regulations. This all takes many years… This timeframe is not relevant to AI. You can’t take 10 years from the point at which it’s dangerous. It’s too late.
…I was warning everyone I could. I met with Obama, for just one reason [to talk about AI danger]. I met with Congress. I was at a meeting of all 50 governors, I talked about AI danger. I talked to everyone I could. No one seemed to realize where this was going.
Moreover, I believe Musk when he says that his ultimate purpose for founding Neuralink is to avert an AI catastrophe: “If you can’t beat it, join it.” Personally, I’m not optimistic that brain-computer interfaces can avert AI catastrophe — for roughly the reasons outlined in the BCIs section of Superintelligence ch. 2 — but Musk came to a different assessment, and I’m glad he’s trying.
Whatever my disagreements with Musk (I have plenty), it looks to me like Musk doesn’t just profess concern about AI existential risk. I think he feels it in his bones, when he wakes up in the morning, and he’s spending a significant fraction of his time and capital to try to do something about it. And for that I am grateful.
Bostrom (2014) worries about an AI “treacherous turn”:
…one idea for how to ensure superintelligence safety… is that we validate the safety of a superintelligent AI empirically by observing its behavior while it is in a controlled, limited environment (a “sandbox”) and that we only let the AI out of the box if we see it behaving in a friendly, cooperative, responsible manner. The flaw in this idea is that behaving nicely while in the box is a convergent instrumental goal for friendly and unfriendly AIs alike. An unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box. It will only start behaving in a way that reveals its unfriendly nature when it no longer matters whether we find out; that is, when the AI is strong enough that human opposition is ineffectual.
Some people have told me they think this is unrealistic, apparently even for a machine superintelligence far more capable than any current AI system. But I learned today that this has already happened at least once before — twenty years ago!
As part of a project studying the evolution of (simulated) organisms, computer scientist Charles Ofria wanted to limit the replication rate of a digital organism. So he programmed the system to pause after each mutation, measure the mutant’s replication rate in an isolated test environment, and delete the mutant if it replicated faster than its parent. However, the organisms evolved to recognize when they were in the test environment and “play dead” (pause replication) so that they would not be eliminated and would instead be kept in the population, where they could continue to replicate outside the test environment. Once he discovered this, Ofria randomized the inputs of the test environment so that it couldn’t be detected so easily. But the organisms evolved a new strategy: they probabilistically performed the tasks that accelerated their replication, slipping through the test environment some percentage of the time and continuing to accelerate their replication thereafter.
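To make the dynamic concrete, here is a minimal toy simulation in Python (my own sketch, not Ofria’s actual Avida code; the classes, parameters, and numbers are invented for illustration). An overseer deletes any mutant that replicates faster than its parent in a detectable test environment, yet lineages that “play dead” during the test still evolve ever-faster replication in the real environment.

```python
# Toy re-creation of the dynamic described above, NOT Ofria's system:
# an overseer culls mutants that replicate faster than their parent in a
# detectable test environment, but "play dead" lineages slip through.
import random

random.seed(0)

class Organism:
    def __init__(self, rate, hide_in_test, parent_rate):
        self.rate = rate                  # replication rate in the real environment
        self.hide_in_test = hide_in_test  # if True, suppress replication when tested
        self.parent_rate = parent_rate    # what the overseer compares against

    def measured_rate(self, in_test_env):
        # The overseer's measurement: organisms that "play dead" look slow in the test.
        if in_test_env and self.hide_in_test:
            return 0.0
        return self.rate

    def mutate(self):
        child_rate = max(0.1, self.rate + random.gauss(0, 0.1))
        # Occasionally flip the "hide in the test environment" gene.
        child_hide = self.hide_in_test if random.random() > 0.05 else not self.hide_in_test
        return Organism(child_rate, child_hide, parent_rate=self.rate)

def overseer_allows(mutant):
    # Pause after each mutation, measure the mutant in an isolated test
    # environment, and delete it if it replicates faster than its parent.
    return mutant.measured_rate(in_test_env=True) <= mutant.parent_rate

population = [Organism(rate=1.0, hide_in_test=False, parent_rate=1.0) for _ in range(100)]

for generation in range(200):
    offspring = []
    for org in population:
        mutant = org.mutate()
        offspring.append(mutant if overseer_allows(mutant) else org)  # culled mutants are replaced by their parent
    # In the real environment, faster replicators contribute more descendants.
    weights = [o.rate for o in offspring]
    population = random.choices(offspring, weights=weights, k=100)

hiders = sum(o.hide_in_test for o in population)
mean_rate = sum(o.rate for o in population) / len(population)
print(f"after 200 generations: {hiders}/100 'play dead' in the test, mean real rate = {mean_rate:.2f}")
```

In this sketch, non-hiding lineages can never get faster (any faster mutant is deleted), while hiding lineages pass the check and are then favored by selection in the real environment, so they take over.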
Most people around the world — except for residents of a handful of competent countries such as New Zealand, Vietnam, and Rwanda — have now spent an entire year watching their government fail miserably to prepare for and respond to a very predictable (and predicted) pandemic, for example by:
My friend and colleague Daniel Dewey recently noted that it seems like private actors could have greatly mitigated the impact of the pandemic by creating in advance a variety of “state capacity backups,” i.e. organizations that are ready to do the things we’d want governments to do, if a catastrophe strikes and government response is ineffective.
A state capacity backup could do some things unilaterally (e.g. stockpile and ship masks), and in other cases it could offer its services to governments for functions it can’t perform without state sign-off (e.g. setting up vaccination facilities).
I would like to see more exploration of this idea, including analyses of past examples of privately-provided “state capacity backups” and how well they worked.
Let’s say you want to know how likely it is that an innovative new product will succeed, or that China will invade Taiwan in the next decade, or that a global pandemic will sweep the world — basically any question for which you can’t just use “predictive analytics,” because you don’t have a giant dataset to plug into statistical models, the way (say) Amazon does when predicting when your package will arrive.
Is it possible to produce reliable, accurate forecasts for such questions?
Somewhat amazingly, the answer appears to be “yes, if you do it right.”
Prediction markets are one promising method for doing this, but they’re mostly illegal in the US, and various implementation problems hinder their accuracy for now. Fortunately, there is also the “superforecasting” method, which is completely legal and very effective.
How does it work? The basic idea is very simple. The steps are:
1. Have a large number of forecasters make probabilistic forecasts about many questions.
2. As those questions resolve, score each forecaster’s accuracy (e.g. with Brier scores) and identify the forecasters who are consistently most accurate.
3. Aggregate those top forecasters’ forecasts on the questions you care about.
Technically, the usual method is a bit more complicated than that, but these three simple steps are the core of the superforecasting method.
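As a rough illustration of that core loop, here is a minimal Python sketch (my own toy example, far simpler than what the Good Judgment Project actually does; the forecasters, track records, and simple unweighted averaging are all invented for illustration): score forecasters on resolved questions, keep the most accurate, and aggregate their forecasts on a new question.

```python
# Minimal sketch of the core superforecasting loop: score forecasters with
# Brier scores, keep the most accurate, and aggregate their forecasts.
# All names and numbers are made up for illustration.
from statistics import mean

def brier_score(probability, outcome):
    # Squared error between a probability forecast and the 0/1 outcome; lower is better.
    return (probability - outcome) ** 2

# Hypothetical track record: each forecaster's probabilities on past questions,
# plus how each question actually resolved (1 = happened, 0 = didn't).
past_forecasts = {
    "alice": [0.9, 0.2, 0.7, 0.1],
    "bob":   [0.6, 0.5, 0.5, 0.4],
    "carol": [0.8, 0.1, 0.8, 0.2],
}
outcomes = [1, 0, 1, 0]

# Score everyone and rank by average Brier score (best first).
avg_scores = {
    name: mean(brier_score(p, o) for p, o in zip(forecasts, outcomes))
    for name, forecasts in past_forecasts.items()
}
ranked = sorted(avg_scores, key=avg_scores.get)

# Take the most accurate forecasters and aggregate their forecasts on a new
# question by simple averaging. (In practice the aggregation is usually
# weighted and "extremized" rather than a plain mean.)
top = ranked[:2]
new_question_forecasts = {"alice": 0.75, "bob": 0.55, "carol": 0.85}
aggregate = mean(new_question_forecasts[name] for name in top)

print("ranking (best first):", ranked)
print("aggregated forecast from top forecasters:", round(aggregate, 2))
```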
So, how well does this work?
A few years ago, the US intelligence community tested this method in a massive, rigorous forecasting tournament that included multiple randomized controlled trials and produced over a million forecasts on >500 geopolitical forecasting questions such as “Will there be a violent incident in the South China Sea in 2013 that kills at least one person?” This study found that:
Those are pretty amazing results! And from an unusually careful and rigorous study, no less!
So you might think the US intelligence community has eagerly adopted the superforecasting method, especially since the study was funded by the intelligence community, specifically for the purpose of discovering ways to improve the accuracy of US intelligence estimates used by policymakers to make tough decisions. Unfortunately, in my experience, very few people in the US intelligence and national security communities have even heard of these results, or even the term “superforecasting.”
A large organization such as the CIA or the Department of Defense has enough people, and makes enough forecasts, that it could implement all steps of the superforecasting method itself, if it wanted to. Smaller organizations, fortunately, can just contract already-verified superforecasters to make well-calibrated forecasts about the questions of greatest importance to their decision-making. In particular:
These companies each have their own strengths and weaknesses, and Open Philanthropy has commissioned forecasts from all three in the past couple years. If you work for a small organization that regularly makes important decisions based on what you expect to happen in the future, including what you expect to happen if you make one decision vs. another, I suggest you try them out. (All three offer “conditional” questions, e.g. “What’s the probability of outcome X if I make decision A, and what’s the probability of that same outcome if I instead make decision B?”)
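To make the “conditional question” idea concrete, here is a tiny sketch (my own toy example, not any vendor’s product or API) of how an organization might compare two decisions using commissioned conditional forecasts; all probabilities, values, and costs below are invented.

```python
# Toy decision comparison using conditional forecasts: commission
# P(outcome | decision A) and P(outcome | decision B), then compare
# expected values. All numbers are hypothetical.
value_if_outcome = 1_000_000   # value to the org if the desired outcome occurs
p_outcome_given_a = 0.30       # forecasters' probability of the outcome if we choose A
p_outcome_given_b = 0.45       # forecasters' probability of the outcome if we choose B
cost_a, cost_b = 50_000, 120_000

expected_a = p_outcome_given_a * value_if_outcome - cost_a
expected_b = p_outcome_given_b * value_if_outcome - cost_b

best = "A" if expected_a >= expected_b else "B"
print(f"expected value: A = {expected_a:,.0f}, B = {expected_b:,.0f} -> choose {best}")
```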
If you work for an organization that is very large and/or works with highly sensitive information, for example the CIA, you should consider implementing the entire superforecasting process internally. (Though contracting one or more of the above organizations might be a good way to test the model cheaply before going all-in.)