In July 2015, an organizer of the Effective Altruism (EA) Global conference told attendees, “There’s one thing that I have in common with every person in this room. We’re all trying really hard to figure out how to save the world.” This quote captures the core principle of EA: that individuals should do as much good as possible, as effectively as possible. Yet the term Effective Altruism is difficult to define because it is an umbrella term. EA encompasses a meta-charity, a new philosophical approach, a social movement, a community, and more, with this core principle as the common thread. Many people deem the concept of EA laudable, yet many of EA’s critics have found it dubious at best and detrimental at worst. Critiques of EA are primarily directed at the movement’s key players, such as William MacAskill and Peter Singer, and at its primary institutions, such as Giving What We Can (GWWC), GiveWell, and 80,000 Hours. Much of this criticism stems from the perceived deviation in execution from the movement’s core tenets, as EA has essentially ignored issues of injustice and oppression. Moreover, EA’s leaders refuse to acknowledge the qualitative evidence that structural violence underlies much of the world’s misfortune. While Effective Altruists are quick to defend their ideology by countering its various critiques, EA still falls short of proving its critics wrong. For this reason, Srinivasan’s critiques of EA are relevant and fair. In this paper, I argue that the movement of Effective Altruism relies on flawed frameworks for measuring good and effectiveness, has a constituency that further narrows the movement’s focus, and runs the risk of creating more harm than good.
To determine how to make the greatest impact, EA meta-charities such as GiveWell and GWWC gather evidence and apply reason through a heuristic framework. The framework filters different causes and programs to find issues that are large in scale, highly neglected, and highly solvable, in order to maximize effectiveness. The model emphasizes cost-effectiveness and uses Quality-Adjusted Life Years (QALYs) as an impact measurement. Having such frameworks for evaluating charities is helpful, yet many important and deserving causes and programs fall by the wayside for not meeting certain criteria. Additionally, EA’s evaluation method raises the questions of how good can be measured quantitatively and compared across charities with different causes, and of what evidence is cherry-picked to make these comparisons. If the inputs and outputs of a charity aren’t quantifiable, the charity can’t be endorsed by GiveWell. Charities with larger overhead budgets are less cost-effective and thus less likely to be recommended by EA, regardless of how the overhead is spent. Causes that pertain to systemic issues aren’t easily quantifiable in scale or neglect, and aren’t highly solvable in EA’s view.
Generally, a quantitative, cost-effectiveness-driven framework automatically limits the scope of the charities EA supports. On GiveWell’s website, all of the charities listed as priorities address malaria, deworming, and vitamin A supplementation; the exception is GiveDirectly, which gives donations directly to aid recipients. EA is achieving its goal of alleviating suffering and reducing the premature loss of life by supporting these programs, yet it does not acknowledge the systemic sources of these issues. In this sense, criticisms that EA is methodologically blind aren’t wrong.
In addition to the framework, many of EA’s stakeholders and participants have influenced which causes the movement adopts. GWWC was developed by Oxford philosophy graduate students, and GiveWell sprang out of Silicon Valley. As such, many of EA’s followers are young, white, male, educated analytic philosophers and tech-industry workers. As EA grew, the idea of doing good expanded from individual donations to fight global poverty to guiding people toward high-impact careers through 80,000 Hours, and to the Open Philanthropy Project (OPP), which branched out of GiveWell to research how to make effective large-scale changes. As this growth occurred, EA began paying considerable attention to Global Catastrophic Risks (GCRs), or Existential Risks (X-risks), and to working on their prevention. GCRs are events that could potentially cause human extinction, such as nuclear war, global pandemics, and Artificial Intelligence (AI), the last of which receives the most attention from EA. Many EA stakeholders fear that AI will become so advanced that it causes human extinction. The justification for focusing on GCRs, unlikely but potentially catastrophic problems, is simple math: according to Effective Altruists, there is more value in saving billions of future lives from a threat with a minuscule chance of occurring than in saving present-day global citizens who are suffering, regardless of whether anything can actually be done about the GCRs.
EA does offer additional arguments for supporting X-risk causes on its causes page, explaining that “risks may become substantial in the decades to come” and citing a Global Catastrophic Risks survey to back up this claim. Yet the survey shows the opposite: most of these risks would not have substantial impacts should they occur. Moreover, EA and other proponents of X-risk mitigation compare their support of the cause to insurance: just as we pay for protection from unlikely, deleterious events at the individual level, EAs think we should do so at the global, collective level. The frequent justification for supporting work on AI and other X-risks is that while the threats are unlikely to occur, they are highly plausible and highly damaging should they occur, and according to EAs, these two facts override their unlikeliness.
Today, every facet of EA has at least some focus on GCRs. The OPP supports GCR work through its focus areas of “Biosecurity and Pandemic Preparedness” and “Potential Risks from Advanced Artificial Intelligence.” While biosecurity risks are more probable in the near future, more funding goes to AI research. This is heavily due to the type of people who are part of the movement. EA’s constituents, mostly young, white, well-educated men in the tech industry, are convinced that AI is a huge, intractable threat to humanity and that researching it will save the world. The majority of the EA Global conference focused on X-risks, more specifically those concerning AI, with the main event being a talk between tech giants such as Elon Musk and EA leaders. EA and OPP are partnered with the Future of Life Institute (FLI), which focuses on safeguarding the future of humanity, specifically with respect to AI; Musk donated $10 million to FLI in 2015. When deciding which of the world’s problems matter most and how to solve them, there should be more than one type of person at the table, and the lack of such diversity has been a large criticism of EA. It is apparent that the work EA and its sub-entities focus on is fueled by the views and ideas of their members rather than solely by the goal of effectively helping people.
Additionally, EA seems to operate as if capitalism is a given, which also makes sense considering the background of its supporters. The charities GiveWell and EA support, such as the Against Malaria Foundation and GiveDirectly, are criticized as band-aid solutions because they only address the symptoms of the systemic problems that feed into them. Critics of EA go further, arguing that the movement perpetuates at the systemic level the very problems it is trying to fix.
EA’s blindness to these systemic issues allows it to contribute to the problems caused by capitalism, since these problems aren’t measured when analyzing the effectiveness of its programs. EA encourages individuals to donate money so charities can buy supplies that will be used to save lives. However, the capitalist suppliers of those goods often profit by extracting more resources from a developing country than the amount of aid that ends up being invested in it.
What all of this would indicate to an outsider is that the heuristic process used by GiveWell can be adjusted to justify the will of donors and stakeholders in pursuit of their end goals, and moreover, that these meta-charities still answer to what donors want. X-risks, especially those concerning AI, do not meet the three criteria GiveWell initially established for its framework: an AI apocalypse is not a large-scale problem, as it has yet to happen; it is neglected only because it is an unlikely phenomenon; and it is not easily solvable. More than just narrowing the scope of what GiveWell and EA consider, the constituents have steered EA toward hypocrisy.
Many of EA’s critics have disavowed it to the point that the eradication of EA seems preferable to addressing its issues. And while EA has a myriad of issues, this type of aid cannot simply cease and desist. EA as a movement has alleviated suffering, mitigated premature deaths, and saved countless lives. Furthermore, in response to the band-aid commentary, Peter Singer reminds critics that we do not always know the systemic roots of the issues in question, and that even when we do, we do not always know how to respond; in these cases, treating the symptoms of the world’s problems with a band-aid is the best we can do. Singer goes on to say that EA cannot be invalidated by evidence of a more effective approach to increasing good, because EA is dynamic and by definition would quickly adopt that strategy. Singer’s points are consistent with what EA has set out to do. Additionally, concerning the issue of band-aid solutions, OPP, which branched out of GiveWell, has focus areas in U.S. policy, scientific research, and global health and development aside from GCRs. Moreover, OPP’s approach is oriented toward high-risk, high-reward philanthropy that attacks systemic issues, while also supporting short-term, evidence-backed, low-risk giving like that of GiveWell. Considering this, EA does not seem as bad as its critics make it out to be.
Yet there is an apparent disconnect between what the different key players of EA envision the movement to be and what is happening in reality. Singer has a point that a band-aid may be better than nothing at all, yet that does not negate the fact that each band-aid applied enables more wounds to appear under capitalism, and that by continuing in this way EA does the opposite of what it preaches. As for EA’s move toward incorporating systemic change via OPP, in 2017 GiveWell and OPP split into two separate organizations because they were structurally different. EA’s dynamism theoretically allows it to adapt and change; in practice, however, it chooses not to.
25.2.2019