Here’s the main way I think about effective altruism, personally:
- I was born into incredible privilege. I can satisfy all of my needs, and many of my wants, and still have plenty of money, time, and energy left over. So what will I do with those extra resources?
- I might as well use them to help others, because I wish everyone were as well off as I am. Plus, figuring out how to help others effectively sounds intellectually interesting.
- With whatever portion of my resources I’m devoting to helping others, I want my help to be truly other-focused. In other words, I want to benefit others by their own lights, as much as possible. This is very different from other approaches to helping others, such as: helping in a way that makes me feel good (e.g. a cause I have a personal connection to, or “giving back” to a community that has benefited me); helping specific kinds of people I feel special empathy for (e.g. identifiable victims, people with whom I share particular characteristics, or people who face particular deprivations that are salient to me); helping in a way that allows me to achieve particular virtues; or helping in ways that aren’t scope-sensitive (e.g. spending $1 million to save one life via bone marrow transplant rather than spending the same amount to save ~220 lives via malaria prevention). 1 I might do those other things too, but I wouldn’t count them as coming from my budget for other-focused altruism. (See also: Harsanyi’s veil of ignorance and aggregation theorem.)
- Okay, so what can I do that will benefit others by their own lights, as much as possible (with the other-focused portion of my resources)? Here is where things get complicated, drawing from domains as diverse as ethics, welfare economics, consciousness studies, global health, macrohistory, AI, innovation economics, exploratory engineering, and so much more. There will be many legitimate debates, and I’ll never be certain that I’ve come to the right conclusions about how to help others as much as possible, but the goal of all this research will remain the same: to figure out how to benefit others as much as possible and then devote my other-focused resources toward doing that.
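The scope-sensitivity example above (one life vs. ~220 for the same $1 million) can be checked with back-of-the-envelope arithmetic. A sketch, using the cost figures cited in footnote 1 (~$1 million per bone marrow transplant, ~$4,500 per life saved via malaria prevention):

```python
# Back-of-the-envelope check of the scope-sensitivity example.
# Cost figures as cited in footnote 1: ~$1,000,000 per life saved via
# bone marrow transplant (Millman 2020), and ~$4,500 per life saved via
# malaria prevention (GiveWell's 2020 estimate for Malaria Consortium).
transplant_cost_per_life = 1_000_000  # USD
malaria_cost_per_life = 4_500         # USD

budget = 1_000_000  # USD

lives_saved_transplant = budget // transplant_cost_per_life
lives_saved_malaria = budget / malaria_cost_per_life

print(lives_saved_transplant)      # 1
print(round(lives_saved_malaria))  # 222, i.e. the "~220 lives" in the text
```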
In other words, I’m pretty happy with the most canonical definition of effective altruism I know of, from MacAskill (2019), which defines effective altruism as:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
This notion of effective altruism doesn’t demand that you use all your resources to help others. It doesn’t even say that you should use your other-focused budget of resources to help others as much as possible. 2 Instead, it merely describes an intellectual project (clause i) and a practical project (clause ii) that some people are excited about but most people aren’t. 3
Effective altruism is radically different from many other suggestions for what it looks like to do good or help others. 4 True, the portion of resources devoted to helping others may not differ hugely (though it may differ some 5 ) between an effective altruist and a non-EA Christian or humanist or social justice activist, since the canonical notion of effective altruism doesn’t take a stance on what that portion should be. 6 Instead, effective altruism differs from other approaches to helping others via one or more of its defining characteristics, namely its aspiration to be maximizing, impartial, welfarist, and evidence-based. 7
For example, I think it’s difficult for an effective altruist to conclude that the following popular ideas for how to do good or help others are plausible contenders for helping others as much as possible (in an impartial, welfarist, evidence-based way):
- Providing basic necessities (food, shelter, health care, education) to people who are poor by wealthy-country standards, at a cost that’s ≥100x the cost per person of providing those necessities to people who are poor by global standards. (Not maximizing, not impartial.)
- Funding for the arts. (Not maximizing: there is already more great art than anyone can enjoy in a lifetime, and the provision of marginal artistic experience benefits others much less than e.g. providing the poorest people in the world with basic necessities.)
- Religious evangelism, e.g. to spare souls from hell. (Not evidence-based.)
- Funding advocacy against GMOs or nuclear power. (Not evidence-based.)
- Funding animal shelters rather than efforts against factory farming, which tortures and slaughters billions of animals annually. 8 (Not maximizing.)
- (Many, many other examples.)
Of course, even assuming effective altruism’s relatively distinctive joint commitment to maximization, impartialism, welfarism, and evidence, there will still be a wide range of reasonable debates about which interventions help others as much as possible (in an impartial, welfarist, evidence-based way), just as there will always be a wide range of reasonable debates about any number of scientific questions (and that’s no objection to scientific epistemology). 9
Moreover, these points don’t just follow from the canonical definition of effective altruism; they are also observed in the practice of people who call themselves “effective altruists.” EAs are somewhat distinctive in how they debate the question of how best to help others (the debates are generally premised on maximization, welfarism, impartialism, and careful interpretation of whatever evidence is available), and they are very distinctive with regard to which causes they end up devoting the most money and labor to. For example, according to this estimate, the top four EA causes in 2019 by funding allocated were: 10
- Global health ($185 million)
- Farm animal welfare ($55 million)
- (Existential) biosecurity ($41 million) 11 — note this was before COVID-19, when biosecurity was a much less popular cause
- Potential (existential) risks from AI ($40 million)
Global health is a fairly popular cause among non-EAs, but farm animal welfare, (existential) biosecurity, and potential (existential) risks from AI are very idiosyncratic. Indeed, I suspect that EAs are responsible for ≥40% of all funding for each of farm animal welfare, potential existential risks from AI, and existential biosecurity. 12

Footnotes:
- My estimate of the cost of a bone marrow transplant is taken from Millman (2020). My estimate for the cost of saving lives via malaria prevention is $4,500, which is GiveWell’s estimate for the average cost-effectiveness of GiveWell-directed funding to Malaria Consortium in 2020, taken from this page on June 4th, 2022. I might be misunderstanding the cost estimate in Millman (2020), but in any case I suspect the basic point will stand: paying for bone marrow transplants saves far fewer lives per dollar than funding malaria prevention via donations to Malaria Consortium.
- See MacAskill (2019), section 2.
- You could call this a “watered down” or “weak” version of moral realist utilitarianism, but it is not a watered down or weak version of effective altruism (as suggested by Nielsen). It is the primary, canonical notion of effective altruism, crafted with input from a survey of effective altruism “thought leaders” from a few years ago.
- Contra Srinivasan and Nielsen.
- It’s hard to get comparable numbers on this (or any numbers at all), but my anecdotal sense is that highly engaged EAs are substantially more likely to choose a “direct work” EA career than similarly engaged members of most other morally-motivated communities are, though of course such careers are not at all rare in other morally-motivated communities. EAs might also on average donate a bit more than members of most other morally-motivated communities, though it’s worth noting that e.g. the median EA (in this survey) donates much less than 10% of their income. If anyone has numbers for any of these claims, let me know!
Some EAs embrace a more morally demanding version of EA, even though strong moral demandingness is not part of the canonical definition of EA. I applaud and respect these EAs and think they are morally superior to me, but my sense is that they are in the minority of EAs, and of course many other morally-motivated communities also have a minority of practitioners who embrace an especially morally demanding lifestyle.
- See again MacAskill (2019), section 2.
- MacAskill (2019) uses the phrase “science-aligned” rather than “evidence-based,” and notes that effective altruism’s impartialism and welfarism is “tentative.”
- For context, Lewis Bollard estimates that “US animal rescue shelters had a combined budget of $3.2B in 2021. Shelters house 6.3M animals/year, so that’s ~$500 spent per shelter animal (often for a short period of time). US farm animal advocacy orgs had a combined budget of ~$100M, while the US has 2.7B farm animals alive at any time, so that’s $0.04 spent per farm animal.”
- Indeed, after making a small number of moral assumptions, questions about which interventions will help others the most just are, in a broad sense, scientific questions.
- See the same post for estimates of how much EA labor goes to different causes; that picture is harder to describe succinctly so I’ve skipped it here.
- By “existential biosecurity” I just mean “Biosecurity and pandemic preparedness interventions focused on avoiding existential catastrophe from pathogens,” which is generally the sort of biosecurity work that EAs fund.
- This is harder to determine for the case of existential biosecurity, since existential biosecurity interventions overlap a lot with lower-stakes biosecurity interventions, and biosecurity in general has become a more popular cause since COVID-19. On farm animal welfare, Lewis Bollard estimates that EA funders were responsible for perhaps ~45% of funding for farm animal advocacy work in 2021, though Leah Edgerton’s estimate of 25% for 2018 may also have been correct (EA funding in this area has increased in recent years).
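For what it’s worth, Bollard’s per-animal figures in footnote 8 check out arithmetically. A quick sketch using the quoted budget and population numbers (this is a sanity check of the quoted figures, not independent data):

```python
# Per-animal spending comparison from footnote 8 (figures as quoted
# from Lewis Bollard).
shelter_budget = 3.2e9   # USD, US animal rescue shelters, 2021
shelter_animals = 6.3e6  # animals housed per year
advocacy_budget = 100e6  # USD, US farm animal advocacy orgs
farm_animals = 2.7e9     # farm animals alive at any time in the US

per_shelter_animal = shelter_budget / shelter_animals
per_farm_animal = advocacy_budget / farm_animals

print(round(per_shelter_animal))  # ~508, i.e. "~$500 spent per shelter animal"
print(round(per_farm_animal, 2))  # 0.04, i.e. "$0.04 spent per farm animal"
```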