Is effective altruism (EA) an opportunity or an obligation? My sense is that Peter Singer, Oxford EAs, and Swiss EAs tend to think of EA as a moral obligation, while GiveWell and other Bay Area EAs are more likely to see EA as a (non-obligatory) exciting opportunity.
In the Harvard Political Review, Ross Rheingans-Yoo recently presented the “exciting opportunity” flavor of EA:
Effective altruism [for many] is an opportunity and a question (I can help! How and where am I needed?), not an obligation and an ideology (You are a monster unless you help this way!), and it certainly does not demand that you sacrifice your own happiness to utilitarian ends. It doesn’t ask anyone to “give until it hurts”; an important piece of living to help others is setting aside enough money to live comfortably (and happily) first, and not feeling bad about living on that.
I tend to think about EA from the “exciting opportunity” perspective, but I think it’s only fair to remember that there is another major school of thought on this, which does argue for EA as a moral obligation, à la Singer’s famous article “Famine, Affluence, and Morality.”
Niel Bowerman says
I think that if there is a trend for the European effective altruism community to believe in moral realism and thus feel morally obliged to help others, it is probably because the community initially grew largely out of philosophy academia. However, as the movement grows rapidly, perhaps a majority in the wider community have not come from an academic philosophy background, and so see effective altruism instead as an exciting opportunity to have a large impact. Indeed, my impression is that this is the angle that William MacAskill’s upcoming book is going to be emphasising. It is also my personal stance.
Ben Pace says
I think that for some people the moral obligation position is most persuasive (those of a highly philosophical bent), and for others the exciting opportunity position is the most persuasive. Either way, knowing which of the two is most likely to convince your target audience determines which framing should be emphasised in a given situation.
I imagine that the position likely to persuade more people is the exciting opportunity one, because more people are motivated by exciting things than philosophically sound propositions.
Will Lugar says
Given my current metaethics, I tend to think we can properly say a good act is “obligatory” if we get the best results by saying that, and we can properly say it is “supererogatory” (or, “an exciting opportunity”) if we get the best results by saying that.
What constitutes “best results” is some balance of maximizing charitable giving while also avoiding the induction of excess guilt/scrupulosity.
The appropriateness of moral judgments and principles seem to be contingent on good results in some sense, rather than on independent criteria. My view is very much a work-in-progress and is some combination of rule utilitarianism, constructivism, goal theory, and desirism.
Ben Todd says
I think a significant fraction of the Oxford EAs, and perhaps a majority, see it as exciting opportunity rather than moral obligation.
Max Ra says
William MacAskill and Sam Harris discuss this for 10 minutes in a podcast: https://youtu.be/PxStuUxaZxQ?t=10m9s.
MacAskill argues that it can be seen as *both* an obligation and an opportunity. As I understood it, the obligation framing should, for practical purposes, leave a lot of room for self-care, personal wellbeing, etc.
Furthermore, Brian Tomasik argues in a piece on the demandingness objection that obligation is not even philosophically necessary for utilitarian-motivated altruism:
“Utilitarianism should not be seen as a binary morality in which you’re right if you do the best possible thing and wrong otherwise. Rather, utilitarianism should be regarded more like a point counter in a video game, where you aim to accumulate as many points as you can within the bounds of reason. There’s no binary “right” and “wrong”. You just do the best you can.
Relatedly, the idea of a “moral obligation” is not intrinsic to utilitarianism. Talk about “duties” and “requirements” is a way humans communicate when they want to motivate others strongly to perform some action.
Thus, to call someone “morally blameworthy” unless she gives up her family and friends to devote her life to reducing suffering is a self-defeating strategy.
If imagined excessive duties prevent you from accepting utilitarianism, those excessive duties were not a utilitarian recommendation to begin with. Rather, you’re making an error.”