In a surprising turn of events, OpenAI’s board abruptly fired co-founder and CEO Sam Altman on Friday. Following a backlash on social media, the board appeared to be reconsidering its decision over the weekend, only to confirm early Monday morning that Altman was out. Instead, Altman will lead a new AI research lab at Microsoft.
Altman has become the public face of the AI movement, thanks to ChatGPT’s massive success. His removal means chaos in the short term for OpenAI and others in the industry. The real story, however, may be the OpenAI board’s concerns about “AI safety,” which in turn stem from the outsized influence of Effective Altruism (EA) in Silicon Valley. AI safety likely created a key rift between the board and CEO Altman.
EA is a philosophical framework rooted in utilitarianism, which aims to maximize the net good in the world. In theory, there’s little to dislike about EA, with its rationalist approach to philanthropy that emphasizes evidence over emotion. The problem is that the movement’s leaders are all too prone to ethical lapses, confirming the very worst stereotypes of its critics.
For example, Sam Bankman-Fried, the disgraced FTX founder and devoted effective altruist, showed how EA’s “earning to give” philosophy—which promotes making a lot of money so that one can later give it away to charity—can easily turn into “earning at any cost,” even if it means defrauding millions of investors in the process. Similarly, EA leader and philosopher Peter Singer recently defended human-animal sexual relations on the social media app X, highlighting the movement’s creepy connections to the most perverse corners of intellectual libertinism.
While the current crop of EA leaders may include fraudsters and crackpots, utilitarianism has a long and storied history that has at times included great philosophers like Jeremy Bentham and John Stuart Mill. Utilitarianism sees the collective interest as superseding that of the individual: in the classic thought experiment, it would permit sacrificing a single healthy person to harvest their organs and save five others.
The OpenAI board is composed of current and former effective altruists, and debates over AI safety likely contributed to Altman’s removal, highlighting the tension between the EA-friendly board and the business-savvy, mostly profit-seeking Altman. OpenAI awkwardly straddles sectors: technically a nonprofit, it nonetheless has responsibilities to earn profits for some investors, like Microsoft. After the launch of ChatGPT and the partnership with Microsoft, the board may have concluded that OpenAI had strayed too far from the nonprofit’s original mission of open and safe AI.
But even early AI safety proponents like philosopher Nick Bostrom now avoid the extreme predictions that set off the doomers in the first place. Bostrom, who promotes “longtermism,” another key EA concept, apparently doesn’t want to associate himself with bloggers like Eliezer Yudkowsky, who predict the end of the world on a near-hourly basis.
Ultimately, the nonprofit, EA-influenced arm of OpenAI won out, but the company may well be destroyed in the process. Along with Altman, OpenAI president Greg Brockman and a number of top researchers have already fled the organization. The trickle may soon become a flood.
The whole episode demonstrates how nonprofits, which depend on fickle directors, are often the ones that most stray from the public good, making rash decisions based on short-sighted impulses and bruised egos. Meanwhile, for-profits at least have a solid grounding in seeking to protect the investments of their shareholders. This focus on returns is like a compass that keeps for-profits guided toward their missions.
Effective altruism is a poor replacement as a lodestar guiding the nonprofit sector. The good aspects of EA, like its emphasis on evidence-based solutions, are not novel, and plenty of alternative frameworks offer them without EA’s baggage. The bad aspects, meanwhile, appear irredeemable.
EA’s leaders have demonstrated that they are willing to defraud investors, push the boundaries of civilized behavior, and wreck some of society’s most innovative companies, all if it conforms with whatever myopic vision of the good is in their heads at a particular moment.
Far from being a longtermist worldview, EA is a short-sighted one. OpenAI’s board is far from the worst of EA’s practitioners. Nevertheless, this weekend’s events capture how the movement tends to elevate people with serious blind spots to positions of prominence and influence.
Too many effective altruists are willing to resort to depravity if they believe it will do good over the long term. But what kind of precedent does this set? Why should we expect future effective altruists won’t sink to similar depths, if all one needs is to concoct some self-serving justification to do evil?
The problem with utilitarianism more broadly as a philosophy is that it is incomplete. Doing the most overall good provides significant guidance, but it can’t be the whole story. Sacrificing oneself for the long-run interests of society cannot be the only principle upon which a society is built. Not only is this a recipe for misery, it is contrary to basic human nature. Self-interest, for better or worse, must also at some point enter the expected-value calculation.
While OpenAI currently leads the race in AI, expect new leaders to emerge given the company’s internal turmoil. But the biggest bet should be against EA. However reasonable some aspects of it may be, the charlatans the movement attracts should give us serious reservations about its moral authenticity. Too many of the tech industry’s worst promote EA, revealing a rot that eats away at the heart of one of America’s most innovative sectors.
With Altman’s ouster, it’s clear that EA’s corrupting influence has infected even admired companies like OpenAI. If OpenAI represents Silicon Valley’s moral compass, it appears we are all in for some rough waters ahead.