Longtermism emerged from the Effective Altruism (EA) movement, which was inspired by utilitarian principles and founded by the Oxford philosophers Toby Ord and William MacAskill. EA initially focused on alleviating poverty in the global South and improving the treatment of animals in factory farms, aiming to direct charitable giving wherever it would do the most good per dollar or hour spent. Over time, the movement gained significant financial support and influence, channeling hundreds of millions of dollars annually.
As EA grew, Ord and MacAskill began promoting longtermism, a view shared by some effective altruists and by members of Oxford University’s Future of Humanity Institute. Longtermism holds that humanity stands at a critical juncture: it may either destroy itself or go on to realize a brilliant future. To secure humanity’s continued existence, longtermists prioritize mitigating existential risks, that is, catastrophic threats to the survival of human civilization, such as AI misaligned with liberal values or deadly engineered pathogens. By addressing these risks, longtermists hope to enable humans (or their digital descendants) to thrive for billions of years, perhaps even colonizing exoplanets.
Ord and MacAskill each published books advocating longtermism, and both received significant media attention and endorsements, notably from Elon Musk. The movement came under scrutiny, however, after the collapse of the crypto exchange FTX, whose CEO, Sam Bankman-Fried, was a major funder of longtermist initiatives. The bankruptcy exposed financial ties between longtermist leaders and questionable sources of wealth, fueling criticism of the movement’s moral integrity.
Critics argue that longtermism inherently supports existing political and economic structures, overlooking their role in generating the very suffering effective altruists set out to alleviate. Whereas traditional effective altruism addresses the current suffering of humans and animals, longtermism focuses on the potential wellbeing of trillions of future humans, treating existential threats to humanity’s continuation as the overriding moral priority. On this view, short-term suffering can be dismissed whenever the projected long-term benefits seem substantial enough.
Longtermists argue that natural existential threats such as asteroids or supervolcanoes are less concerning than anthropogenic ones such as AI or pandemics. Critics counter that this focus is theoretically unjustified and morally harmful: it diverts attention from urgent present-day problems and perpetuates the very socioeconomic structures that cause them. Despite its financial success, longtermism’s moral vision is therefore seen as questionable, building on EA’s utilitarian roots while aligning itself with those same problematic structures. Critics also charge that longtermism ignores the revolutionary movements that have long fought for a just and livable future.
The movement faces backlash over its ties to FTX, but it remains influential and well funded. The critique therefore urges a deeper examination of longtermism’s theoretical weaknesses and material harms, attending to both its methodological assumptions and their consequences.

Longtermism’s ethical framework is grounded in consequentialism, which ranks outcomes by how much wellbeing they produce. This approach typically invokes a supposedly impartial "point of view of the universe" from which to assess wellbeing across time and space. Longtermism applies these ideas to population ethics, the debate over how to value the potential wellbeing of future humans.
Longtermism’s distinctiveness lies in two claims: first, that humanity stands at a historically significant moment of both immense potential and immense risk; second, that the common intuition that adding people to the world is morally neutral should be rejected, because each additional person who enjoys adequate wellbeing makes the world better. On this view, preventing human extinction would be a monumental moral achievement, warranting extreme measures if necessary. Critics note that this perspective can dangerously justify short-term harm for long-term gain, echoing the reasoning of despotic regimes.
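The arithmetic behind the second claim can be made concrete (a minimal sketch with hypothetical numbers, not figures any particular longtermist commits to). On the total view of population ethics, the value of an outcome is the sum of individual wellbeing levels,

$$V = \sum_{i=1}^{N} w_i,$$

so adding any person with $w_i > 0$ raises $V$. Combined with expected-value reasoning, this lets vast hypothetical futures swamp present concerns: if a flourishing long-term future could contain, say, $N = 10^{16}$ lives at average wellbeing $\bar{w}$, then cutting extinction risk by even $\Delta p = 10^{-6}$ is "worth" $\Delta p \cdot N \cdot \bar{w} = 10^{10}\,\bar{w}$, exceeding the combined wellbeing of everyone alive today. It is this multiplication that critics say licenses discounting present suffering.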
Peter Singer, a prominent EA figure, has criticized longtermism, doubting humanity’s historical uniqueness and emphasizing current suffering. He suggests dropping longtermism’s focus on existential risk and advocating instead a more general population ethics that still takes future human welfare seriously. Critics respond that longtermism, even without its emphasis on existential threats, fails to address structural injustice or to recognize the contributions of social movements to creating a better future.
Effective altruism’s reliance on welfarism and on a "god's eye" moral method makes it politically conservative: it neglects systemic issues and weakens the political bodies capable of challenging the structures that perpetuate suffering. This critique extends to longtermism, whose abstract moral calculations obscure the injustices that are crucial to understanding right action. Critics argue that longtermism’s fascination with existential risk likewise distorts its environmental and social priorities.
MacAskill and Ord's discussions of climate change focus on technological solutions compatible with existing economic systems, ignoring the need for substantial social change and new values. This approach downplays the urgency of environmental problems that disproportionately affect marginalized groups and reinforces longtermism’s disregard for current suffering. Longtermism’s prioritization of extinction risks over pressing environmental concerns reveals a flawed methodology, one that fails to acknowledge ongoing struggles for social change.
Longtermism’s rise in philanthropy highlights concerns about wealthy private foundations wielding undue influence over social issues. Its emphasis on existential risk diverts attention from present suffering while protecting harmful socioeconomic mechanisms from criticism. Despite its flaws, longtermism has successfully attracted financial support, presenting its wealthy backers as saviors rather than contributors to systemic injustice.