Here’s the main way I think about effective altruism, personally:
- I was born into incredible privilege. I can satisfy all of my needs, and many of my wants, and still have plenty of money, time, and energy left over. So what will I do with those extra resources?
- I might as well use them to help others, because I wish everyone were as well-off as I am. Plus, figuring out how to help others effectively sounds intellectually interesting.
- With whatever portion of my resources I’m devoting to helping others, I want my help to be truly other-focused. In other words, I want to benefit others by their own lights, as much as possible. This is very different from other approaches to helping others, such as:
  - helping in a way that makes me feel good (e.g. a cause I have a personal connection to, or “giving back” to a community that has benefited me);
  - helping specific kinds of people I feel special empathy for (e.g. identifiable victims, people with whom I share particular characteristics, or people who face particular deprivations that are salient to me);
  - helping in a way that allows me to achieve particular virtues;
  - helping in ways that aren’t scope-sensitive (e.g. spending $1 million to save one life via bone marrow transplant rather than spending the same amount to save ~220 lives via malaria prevention).

  I might do those other things too, but I wouldn’t count them as coming from my budget for other-focused altruism. (See also: Harsanyi’s veil of ignorance and aggregation theorem.)
- Okay, so what can I do that will benefit others by their own lights, as much as possible (with the other-focused portion of my resources)? Here is where things get complicated, drawing from domains as diverse as ethics, welfare economics, consciousness studies, global health, macrohistory, AI, innovation economics, exploratory engineering, and so much more. There will be many legitimate debates, and I’ll never be certain that I’ve come to the right conclusions about how to help others as much as possible, but the goal of all this research will remain the same: to figure out how to benefit others as much as possible and then devote my other-focused resources toward doing that.
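The scope-sensitivity comparison above is just arithmetic. Here’s a minimal sketch; the ~$4,500 cost-per-life figure for malaria prevention is an illustrative assumption I’ve filled in so that the numbers match the ~220-lives claim, not an authoritative estimate:

```python
# Scope-sensitivity arithmetic: same budget, very different outcomes.
# Cost figures below are illustrative assumptions, not authoritative estimates.
budget = 1_000_000  # dollars in the other-focused budget

cost_per_life_bone_marrow = 1_000_000  # assumed: one life saved per $1M transplant
cost_per_life_malaria = 4_500          # assumed: ~$4,500 per life via malaria prevention

lives_bone_marrow = budget // cost_per_life_bone_marrow
lives_malaria = budget // cost_per_life_malaria

print(lives_bone_marrow)  # 1
print(lives_malaria)      # 222, i.e. roughly the ~220 lives cited above
```

The exact cost-effectiveness numbers are debatable; the point is only that when the ratio between options is ~200x, a scope-sensitive altruist can’t treat them as interchangeable.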
In other words, I’m pretty happy with the most canonical definition of effective altruism I know of, from MacAskill (2019), which defines effective altruism as:
(i) the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(ii) the use of the findings from (i) to try to improve the world.
This notion of effective altruism doesn’t demand that you use all your resources to help others. It doesn’t even say that you should use your other-focused budget of resources to help others as much as possible. Instead, it merely describes an intellectual project (clause i) and a practical project (clause ii) that some people are excited about but most people aren’t.
Effective altruism is radically different from many other suggestions for what it looks like to do good or help others. True, the portion of resources devoted to helping others may not differ hugely (though it may differ somewhat) between an effective altruist and a non-EA Christian or humanist or social justice activist, since the canonical notion of effective altruism doesn’t take a stance on what that portion should be. Instead, effective altruism differs from other approaches to helping others via one or more of its defining characteristics, namely its aspiration to be maximizing, impartial, welfarist, and evidence-based.
For example, I think it’s difficult for an effective altruist to conclude that the following popular ideas for how to do good or help others are plausible contenders for helping others as much as possible (in an impartial, welfarist, evidence-based way):
- Providing basic necessities (food, shelter, health care, education) to people who are poor by wealthy-country standards, at a cost that’s ≥100x the cost per person of providing those necessities to people who are poor by global standards. (Not maximizing, not impartial.)
- Funding for the arts. (Not maximizing: there is already more great art than anyone can enjoy in a lifetime, and the provision of marginal artistic experience benefits others much less than e.g. providing the poorest people in the world with basic necessities.)
- Religious evangelism, e.g. to spare souls from hell. (Not evidence-based.)
- Funding advocacy against GMOs or nuclear power. (Not evidence-based.)
- Funding animal shelters rather than efforts against factory farming, which tortures and slaughters billions of animals annually. (Not maximizing.)
- (Many, many other examples.)
Of course, even granting effective altruism’s relatively distinctive joint commitment to maximization, impartiality, welfarism, and evidence, there will still be a wide range of reasonable debates about which interventions help others as much as possible (in an impartial, welfarist, evidence-based way), just as there will always be a wide range of reasonable debates about any number of scientific questions (and that’s no objection to scientific epistemology).
Moreover, these points don’t just follow from the canonical definition of effective altruism; they are also observed in the practice of people who call themselves “effective altruists.” For example, EAs are somewhat distinctive in how they debate the question of how best to help others (the debates are generally premised on maximization, welfarism, impartialism, and careful interpretation of whatever evidence is available), and they are very distinctive with regard to which causes they end up devoting the most money and labor to. According to one estimate, the top four EA causes in 2019 by funding allocated were:
- Global health ($185 million)
- Farm animal welfare ($55 million)
- (Existential) biosecurity ($41 million) — note this was before COVID-19, when biosecurity was a much less popular cause
- Potential (existential) risks from AI ($40 million)
Global health is a fairly popular cause among non-EAs, but farm animal welfare, (existential) biosecurity, and potential (existential) risks from AI are very idiosyncratic. Indeed, I suspect that EAs are responsible for ≥40% of all funding for each of farm animal welfare, potential existential risks from AI, and existential biosecurity.