OpenAI CEO Sam Altman speaks during the BlackRock Infrastructure Summit on March 11, 2026, in Washington, DC. | Anna Moneymaker/Getty Images

When Sam Altman first told her that he’d never let OpenAI go corporate, that what he and his colleagues were building was too powerful to be driven by investors, Catherine Bracy more or less believed him. The conversation took place in 2022, when Bracy, CEO and founder of the social mobility-focused nonprofit TechEquity, was interviewing Altman for a book she was writing about the dangers of venture capital. It was before Altman’s mysterious firing and unfiring a year later, after which he mostly stopped responding to Bracy’s texts.

And ever since then, OpenAI — which was initially founded as a nonprofit in 2015 to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return” — has been publicly trying to escape the confines of its charitable roots. Today, OpenAI contains both a corporate arm focused on building and selling AI and a nonprofit arm with a stated mission of ensuring that AI benefits people. During the controversial process of trying to fully sever the two in 2024, OpenAI lost about half of its AI safety staffers and much of its senior leadership. That was followed by intensified scrutiny from state attorneys general, nonprofit legal experts, competitor companies, effective altruists, Nobel Prize winners, vast swaths of California’s philanthropic community, and one of OpenAI’s original funders, Elon Musk. Different sides had different interests, but the overall argument was that shifting to a for-profit model would create a fiduciary duty to investors that would inherently clash with the company’s original mission of safety and public benefit.

Is OpenAI’s new foundation a $180 billion distraction?

Last October, OpenAI agreed to make its nonprofit arm very rich.
The OpenAI Foundation is now worth about $180 billion, and it has two main objectives:

- Helping the world adapt to and benefit from AI by giving money to charity.
- Acting as a moral compass for OpenAI the company, especially when it comes to safety and security decisions.

The foundation has given away about $40.5 million so far, a small fraction of the billions it plans to eventually donate. But critics see the donations as a distraction. While OpenAI says its foundation has the final say on security- and safety-related decisions, the company has come under scrutiny in recent months for striking a deal with the Pentagon, fighting against statewide AI legislation, and testing ads for free users. Even if the foundation does eventually give away billions of dollars, it may never be enough to make up for what the public lost in allowing OpenAI to go corporate.

Nonetheless, OpenAI did finally strike a convoluted restructuring deal last October. Essentially, the for-profit arm became what is known as a public benefit corporation (PBC), called the OpenAI Group. The original nonprofit became the OpenAI Foundation, which holds a 26 percent stake in the PBC, currently worth $180 billion, plus a sliver of exclusive legal control over certain major decisions.

One effect of the transition was that it essentially required OpenAI to put a number on what it owed the public for converting what had been a project for all humanity into something that most directly benefits the company’s investors. The resulting stake is big enough to instantly make the OpenAI Foundation one of the wealthiest charities in the country, or in OpenAI’s words, the “best-equipped nonprofit the world has ever seen.” On paper, at least, the foundation is now significantly richer than the entire country of Luxembourg.
Even the Gates Foundation has only $77.6 billion in assets, less than half of what the OpenAI Foundation can draw from, though it’s important to note that most of the OpenAI Foundation’s wealth is locked in fairly illiquid shares of the still-private company, which limits how quickly any money can be given away. Still, its sheer size means that the OpenAI Foundation stands to eventually be a transformative presence on the philanthropic stage, one way or another.

But while OpenAI says the foundation will eventually give out many billions of dollars in philanthropy to ensure that “artificial general intelligence benefits all of humanity,” it’s uncertain whether a socially beneficial philanthropy can exist side by side with a company that is fighting an existential battle over who will dominate the AI industry.

“The unspoken truth here is that they’re never going to make a decision that is bad for the company,” Bracy said. “These two entities cannot live under the same roof” where “the mission is in control.”

(Disclosure: Vox Media is one of several publishers that have signed partnership agreements with OpenAI. Our reporting remains editorially independent.)

The foundation’s first gifts came in the form of $40.5 million in no-strings-attached grants to over 200 community nonprofits, like churches, food banks, and afterschool programs. Notably, most grantees had little to no connection to AI or technology — and just as notably, several of these early grantees just so happen to be members of EyesOnOpenAI, a coalition of California nonprofits critical of OpenAI’s privatization that formed in 2025.

But there are signs the foundation will soon pivot to grantmaking that’s more obviously relevant to the company’s original charter, which aimed to ensure that the benefits of AI are broadly distributed while also prioritizing long-term safety in the technology’s development. On Feb.
19, OpenAI — the company, not the foundation — announced a $7.5 million grant in conjunction with Microsoft, Anthropic, Amazon, and other major tech companies for a new international project aimed at researching how to make AI systems safer.

The real questions around the OpenAI Foundation have less to do with how much it is giving, and to whom, than with whether it is actually able to carry out its contractual oversight role. In theory, the foundation should be ensuring that OpenAI is the standard-bearer for ethical decision-making at the frontier of AI development. That would be a unique contribution to the field — and an embodiment of OpenAI’s original mission — that no amount of grantmaking could replace.

Yet a series of troubling recent decisions by the company hardly bears out that vision. OpenAI has begun its new corporate journey by debuting ads on its free tier, firing an executive who raised safety concerns about a soon-to-come NSFW mode for ChatGPT on charges of sexual discrimination against a male colleague, and burning cash while its president funnels millions of dollars into Donald Trump’s super PAC. OpenAI President Greg Brockman has also teamed up with the venture capital firm Andreessen Horowitz and Palantir’s co-founders to fund a $125 million super PAC aimed at promoting AI-friendly policies. Along with Google, xAI, and Anthropic, OpenAI has also come under scrutiny in recent weeks for its defense contracts with the Pentagon.

When OpenAI succeeded in its campaign to free its foundational new technology from nonprofit control, it opened the door for many of these decisions. Even $180 billion in charity might not be enough to make up for the difference.

How OpenAI shed its nonprofit skin

Corporate charity is ubiquitous in the tech world, especially among the biggest players.
Microsoft plans to donate $4 billion in cash and AI cloud technology to schools and nonprofits by 2030. Google gives away some $100 million annually, often to organizations focused on artificial intelligence and technology.

But from the beginning, OpenAI was different. Rather than making money and giving some of it to charity, OpenAI was the charity. It was founded as a nonprofit research lab with about $1 billion in start-up donations, mostly from tech titans like Altman, Brockman, and Elon Musk.

There are some structural advantages to being a charity. You can’t accept investments, but you can accept donations, and you don’t have to pay most taxes. What’s more, in those early days, OpenAI’s stated mission — to build safe AI without the pressures of financial incentives — gave it a major boost when it came to recruiting rarefied talent. Machine learning prodigy Ilya Sutskever told Wired in 2016 that he chose to leave Google to become OpenAI’s chief scientist “to a very large extent, because of its mission.”

But there were limits to being a fully nonprofit entity. In pursuit of financing amid the rising computing costs of cutting-edge AI, OpenAI created its capped-profit subsidiary in 2019 to manage a new $1 billion investment from Microsoft. Three years later, ChatGPT took the world by storm. In 2023, Sutskever and other members of OpenAI’s board tried and ultimately failed to oust Altman amid accusations of dishonesty. (Altman denied those accusations.) A year later, the organization announced its intention to go fully corporate and splinter off the nonprofit into its own fully independent entity.

The transition to for-profit “just didn’t smell right,” said Orson Aguilar, head of LatinoProsperity, an economic justice nonprofit and Bracy’s co-leader at EyesOnOpenAI.
He wasn’t alone: By early 2025, a dozen former OpenAI employees had filed an amicus brief aimed at stopping the conversion because it would “fundamentally violate its mission.” And more than 60 nonprofit, philanthropy, and labor leaders, many of them based in OpenAI’s home state of California, agreed that the attempt to privatize felt unfair given the extent to which the company had benefited from its tax-free status during its early development.

To grasp what this all means, try thinking of OpenAI’s for-profit arm as an angsty tween and the nonprofit as her well-meaning but often powerless parent. For years, the tween had been allowed to do her own thing, but only within certain limits — she still had to do her homework and get home by a certain time. Now imagine she’s sick of having a curfew. “Nobody else has one!” She still lives in her mother’s house, but she wants to follow her own rules.

That’s kind of what happened here. Until now, OpenAI’s for-profit subsidiary had a capped-profit model, meaning there were limits on how much money investors could make. But this new deal paved the way for the for-profit to become a full-time corporate girlie, charitable bylaws be damned. And while OpenAI’s new public benefit corporation still technically exists under the original nonprofit’s control, it mostly follows its own rules. It can raise as much money as it wants, and eventually, it will likely go public.

But California history did provide some hope that the public might at least get some meaningful benefit from the transition. Back in the 1990s, California’s branch of the health insurer Blue Cross Blue Shield — then a nonprofit called Blue Cross of California — decided to privatize. After some haggling with state regulators, the company agreed to forfeit all of its assets, worth $3.2 billion, to a pair of independent nonprofits in exchange for going private. The result was the California Endowment, which is now the state’s largest health foundation.
Many nonprofit leaders in California hoped that OpenAI, which is headquartered in the state, would strike a similar deal, ceding a majority of its assets to a fully independent nonprofit. And those assets were and are enormous. Gary Mendoza, a former state official who oversaw the Blue Cross deal, estimated the OpenAI nonprofit’s rightful assets at over $250 billion, or half the company’s $500 billion valuation. “Anything short of 50 percent,” he told the San Francisco Examiner last year, “is a missed opportunity.” And beyond money for the public, assuming the nonprofit kept its shares, that stake would add up to enough influence to really shape OpenAI’s corporate decision-making at a key moment for the future of artificial intelligence.

Given that the OpenAI Foundation ended up with little more than a quarter of the final company, this is obviously not what happened. But EyesOnOpenAI’s years-long lobbying effort was not a total bust. The criticism proved powerful enough that last May, OpenAI was forced to give up on an initial plan to spin its nonprofit assets off into a new organization wholly disconnected from OpenAI, which would have left the nonprofit with no legal control over the for-profit arm.

On paper, the new deal includes some meaningful concessions. It contractually requires the nonprofit mission to come first on safety and security issues, with no regard to shareholder interests. The memorandum also calls on OpenAI to “mitigate risks to teens” specifically. And it made the foundation the controlling shareholder of the corporation, affording it the right to appoint corporate directors and oversee critical decisions like a sale.

If OpenAI abided by all of those terms and eventually started giving away billions of dollars in philanthropy each year, then the world — or at least California, where many of OpenAI’s grants have been concentrated — could stand to benefit greatly.
Random acts of corporate kindness

And this brings us to the $40.5 million that OpenAI gave to over 200 nonprofits toward the end of last year. Many of these charities applied for the grants with sophisticated ideas about how to help their communities integrate or adapt to AI, though they can ultimately use the money however they see fit. Among them were public libraries, Boys and Girls Clubs, churches, food banks, and legal aid nonprofits. Coming at a moment when the majority of the country’s nonprofits face existential funding cuts, “it was just the perfect timing,” said Thomas Howard Jr., head of Kidznotes, a North Carolina nonprofit focused on music education that received $45,000 in OpenAI’s first round of grants.

So civil society’s fight over the OpenAI transition won at least enough concessions to help these worthy organizations and to retain some semblance of nonprofit control over some of the for-profit’s activities. Why, then, do so many people in the philanthropic community remain so negative about the foundation?

“I’m all for nonprofits getting money,” said Bracy, the head of TechEquity. “I don’t begrudge any organizations that took the money, but I don’t think it’s some indication that OpenAI is living up to the mission of the nonprofit.”

$40.5 million, of course, is only 0.02 percent of the OpenAI Foundation’s on-paper $180 billion windfall.
How the foundation will eventually spend the other 99.98 percent remains to be seen, though the foundation has said that at least $25 billion will ultimately go to scientific research and what it’s calling “technical solutions for AI resilience.” In the coming months, the company plans to announce a second wave of grants directed at organizations using AI to work on issues like health.

“We are doing the important work of engaging with experts, learning from communities, and shaping a point of view of where Foundation investments can make the greatest difference,” the OpenAI Foundation’s board of directors said in response to a request for clarity on where future funding will go. “We look forward to sharing more soon.”

But so far, critics remain skeptical. OpenAI has done little to prove that its newfound philanthropy is more than just “a smoke and mirrors show,” argued one member of the Coalition for AI Nonprofit Integrity (CANI), a coalition composed largely of AI insiders, including former OpenAI employees, furiously opposed to the restructuring. He spoke on the condition of anonymity because he feared retaliation from OpenAI, which has accused CANI of being a front funded by Musk. (CANI has denied receiving any such funds — though not for lack of trying. If you scroll to the bottom of OpenTheft, a website created by CANI, you’ll find a direct plea to Musk for donations.)

While a spokesperson for OpenAI said that the foundation is in the process of building a dedicated team, and has sought the input of both nonprofit leaders and experts in how society can adapt to AI, the company has yet to make any major staffing announcements for its grantmaking arm. For now, with the exception of Zico Kolter, the head of the nonprofit’s safety committee, the foundation board still shares the same members as the corporate board, including CEO Sam Altman.
The idea is that these board members can put on different hats when meeting about nonprofit versus corporate priorities, asserting the foundation’s oversight when needed. But the arrangement has created the appearance of a conflict of interest.

When asked for mechanisms and examples of how the foundation has responded to situations where its mission conflicts with shareholder interests, given the overlapping board membership, the spokesperson said that OpenAI has conflict-of-interest policies and governance procedures in place to ensure its directors consider only the mission when they meet, as they regularly do, about nonprofit issues. The company also said the foundation board constantly exercises its oversight role, including for all major new product releases, like the release of GPT‑5.3‑Codex, an advanced agentic coding model, last month. The Midas Project, an AI watchdog group and frequent thorn in OpenAI’s side, accused the company of violating safety standards, an allegation that OpenAI fervently denied.

In any case, since the OpenAI Foundation is not a separate entity with its own independent board, some critics have compared it to other feel-good corporate social responsibility ventures, like McDonald’s Ronald McDonald House, Walmart’s healthy foods program, and Home Depot’s work with veterans. Corporate social responsibility has its place, and it can do real good. But based on the OpenAI Foundation’s structure and how it has conducted its grantmaking so far, Bracy believes it will probably never fund anything “they see as a threat to the growth of the company,” despite the fact that the need for guardrails on unrestricted AI development featured prominently in the company’s original mission.
“They’re going to do what’s best for the bottom line of the for-profit.”

Critics like Bracy also doubt the OpenAI Foundation’s other main mandate, which is to govern all safety- and ethics-related issues for the broader organization, including the responsibility to review new products. While the nonprofit and its mission do legally retain control over the OpenAI corporation — particularly when it comes to safety issues — that may add up to little, given that the OpenAI Foundation doesn’t appear to be independently governed. It is not, in fact, even technically a foundation, but a public charity, which means that under IRS rules, it is not required to pay out a certain percentage of its assets each year. And while the nonprofit retains significant oversight powers on paper — including the authority to halt AI releases it deems unsafe — in practice, critics say, it’s unclear whether it would ever use them.

Increasingly, OpenAI has also been wading into political lobbying efforts that seem at odds with its mission to promote long-term safety in AI development. When California lawmakers were debating SB 53, a law requiring transparency reports from leading AI companies, OpenAI lobbied against it. And the company has come under intense scrutiny in recent weeks for its contract with the Pentagon, which has blacklisted OpenAI’s rival Anthropic after Anthropic raised ethical concerns about the use of its technology.

Why the fight is not over

OpenAI’s new corporate arrangement is very, very new. It’s still possible that OpenAI’s grantmaking arm really does staff up, and that the nonprofit builds an independent board with the power to enforce hard ethical decisions for the company, even when they hurt investors’ returns.
“They have a lot of freedom to continue to do good,” said Tyler Johnston, executive director of the Midas Project, but that would require them to “actually shake things up” and “show that they’ve created the scaffolding that will enable them to actualize their mission.” But so far, “there’s nothing I’ve seen that gives me reassurance that they’ll catch the important safety issues when they come up,” he said. “Or that they’ll be doing a thorough investigation of the grantmaking opportunities.”

If OpenAI does not abide by the terms of its new contract — if the company, for example, tries to thwart an attempt to roll back a dangerous new tool — then California’s attorney general does have the power to demand answers from the company and, in theory, to revisit the agreement’s terms.

Beyond the agreement, there are a few quite public means by which OpenAI’s former lovers, skeptics, and nemeses are still trying to press rewind on the restructuring. Chief among them is Elon Musk, OpenAI’s most prominent original donor and co-founder. In between trading embarrassing jabs with Altman on X, Musk took OpenAI to court last year over claims that he was “assiduously manipulated” into donating tens of millions of dollars to a nonprofit research lab that turned into an “opaque web of for-profit OpenAI affiliates.” A judge has found enough cause for the case to proceed to trial this April. Musk is suing for up to $134 billion in damages, though OpenAI has told its investors that it believes it would only be on the hook for Musk’s $38 billion in original donations.
OpenAI, for its part, has accused Musk of an “unlawful campaign of harassment.”

Meanwhile, CANI is still holding out hope that it can convince the people of California to vote for a hyperspecific ballot measure, the California Charitable Assets Protection Act, which could reverse the decision to allow OpenAI — or any other “organizations developing transformative technologies” — to go corporate.

“They’re cutting corners on safety because of the race to artificial general intelligence that they just want to win,” said the member of CANI. “Instead of a vehicle to serve humanity, it’s become a vehicle to serve one individual and a few of his friends and investors.”

So maybe the fight over OpenAI’s restructuring isn’t completely over — but it’s probably on its last legs. And if OpenAI continues on the same path, it’s unlikely that the public will ever really benefit in the way it ought to, given the charitable advantages the company enjoyed in its early days. At the very least, $40.5 million is just not going to cut it. Even $180 billion might fall far short.

“I think it’s them saying, ‘Listen, I dare you to enforce this,’” said Bracy, who believes OpenAI is “banking on the fact that they’re worth almost a trillion dollars, and they have endless resources — and the state of California does not.”