OpenAI's Evolution Examined Amid Musk Lawsuit

OpenAI's original purpose has been 'hollowed out', researchers argue.

Elon Musk is suing OpenAI and its chief executive, Sam Altman, alleging the company abandoned the public-interest mission on which it was built. New research suggests Musk has a point, though the full picture is complex.

The article, 'OpenAI: Governance for Public Good or Private Gain?', by Alexandra Andhov (University of Auckland) and Ian Murray (University of Western Australia), traces OpenAI's development.

It follows the company from its origins as an incorporated nonprofit charity, through a capped-profit hybrid model, to its most recent shift to a for-profit, for-purpose public benefit corporation.

When OpenAI was set up, Andhov says its nonprofit status was designed to remove the profit motive from developing powerful AI, signalling to researchers, regulators and the public that safety and social benefit would take precedence over returns.

"Our paper analyses OpenAI's trajectory from non-profit to possibly a really substantial IPO later this year and shows how their non-profit mission has eroded over time," she says.

The paper finds safeguards were put in place at each stage to protect OpenAI's purpose. Yet those safeguards came under pressure from a small group of key funders, commercial relationships, and the need for capital. As a result, the researchers identify a gradual 'mission drift': a shift away from OpenAI's original purpose toward commercial priorities.


OpenAI's purpose, the researchers argue, is directly concerned with how AI is developed and distributed, not merely with what 'good works' might be funded from the proceeds. This matters because its most recent restructuring involved a partial shift toward becoming a grant-making foundation, with large sums earmarked for areas including 'curing diseases', raising further mission drift concerns.

Andhov points to a 2023 boardroom crisis, when OpenAI's nonprofit board fired Altman, only to reinstate him days later under investor pressure (predominantly from Microsoft), as evidence that formal governance powers are only as effective as the board exercising them.

The researchers argue that genuine independence between those overseeing OpenAI and those running it day to day has yet to be achieved, limiting the board's ability to resist commercial influence.

"The safeguards are there on paper, but they're not strong enough," says Andhov.

The article draws a distinction between OpenAI's founding legal purpose, as set out in its certificate of incorporation, and its broader public mission statements. Conflating the two, the researchers argue, has obscured how far the drift from its original purpose has actually gone.

The paper points to OpenAI's agreement to supply AI to the United States Department of War as evidence that shifting governance focus toward revenue increases the risk of mission drift.

Professor Alexandra Andhov is the inaugural Chair in Law and Technology and Director of the Center for Advancing Law and Technology Responsibly (ALTeR) at the University of Auckland Business School.

The researchers also argue that OpenAI's conversion in 2025 to a Public Benefit Corporation, which saw it shift its for-profit subsidiary into a structure designed to balance profit generation with its original mission, amounted to "little more than a structural exercise".

"The company's founding mission of developing AI for the benefit of humanity has not been preserved within the operating entity itself, but instead quietly migrated to the OpenAI Foundation, a separate body with no direct control over the commercial operations now driving the company toward a public offering later this year, which could reach hundreds of billions of dollars," says Andhov.

"In effect, the soul of the OpenAI mission has been hollowed out and housed elsewhere, while the acting company pursues the very trajectory its founders once pledged to resist."

Meanwhile, in the Musk vs OpenAI case, lawyers for OpenAI have rejected Musk's allegations, arguing that no promises were ever made to keep the company a nonprofit in perpetuity.

Andhov, however, suggests the lawsuit may prove more consequential than OpenAI's legal team anticipates.

"Beyond governance questions, the litigation could open a broader line of scrutiny, potentially exposing not only the alleged misrepresentations made to early donors and partners, but also significant violations of copyright law and other legal obligations that have, until now, escaped serious judicial examination."

University of Auckland Public Release.