Who Writes the Rules for Superintelligence? Inside OpenAI's Proposed New Deal
OpenAI's 13-page policy paper proposes taxing AI profits, a public wealth fund, and a 4-day workweek. Here is what it means for business and governance.
OpenAI recently published a 13-page policy paper that deserves more attention from business leaders than it has received. The document is not a speculative essay or a public relations exercise. It is a concrete policy blueprint - one that proposes restructuring how governments tax economic activity, how AI systems are audited, and how the gains from artificial intelligence are distributed across society. For executives tracking the long-term regulatory environment, this paper is a planning document, not background reading.
CEO Sam Altman frames the current moment as equivalent to the Industrial Revolution - a period that required a comparable reset of social institutions, tax systems, and labor arrangements. That framing is deliberate. The New Deal did not emerge from a think tank. It emerged from a recognition that existing institutions were no longer adequate for the economy that had actually arrived. OpenAI is making a similar argument, and doing so with unusual specificity.
What OpenAI Actually Proposed - And Why It Matters Now
The paper covers several distinct but connected ideas. On the economic side, it proposes shifting taxation away from labor income and toward AI-generated profits and computational output. On the governance side, it calls for signed provenance logs that track model actions, a standardized audit framework that independent researchers and regulators could use to evaluate AI behavior consistently, and low-cost access to foundational compute for governments and research institutions.
The provenance and audit proposals mirror how financial institutions are regulated - treating AI systems less like consumer software and more like regulated utilities. This is a significant conceptual shift. If AI becomes infrastructure as vital as electricity or telecommunications, the argument goes, it should be governed like infrastructure.
The economic centerpiece is a public wealth fund modeled on Alaska's Permanent Fund, which distributes oil revenues as annual dividends to state residents. The analogy is instructive: Alaska decided that a natural resource generating massive private wealth should also generate a direct public return. OpenAI is proposing the same logic applied to AI-driven productivity gains. The paper also situates a four-day workweek within this framework - not as a workplace perk, but as a structural response to an economy that requires fewer hours of human work per worker.
The Core Tension - Taxing Compute Instead of Labor
The underlying economic argument is straightforward and serious. Tax systems worldwide are built on the assumption that humans perform most economically valuable work. Income taxes, payroll taxes, and social insurance contributions all flow from human labor. As AI systems displace that labor, governments face a structural revenue problem - the activity that generates value increasingly falls outside the tax base that funds public services.
OpenAI's proposed fix pivots taxation onto AI-generated profits and the computational output that drives them. This is not a small adjustment. It would require fundamental changes to how income, profit, and value creation are defined in tax law. Companies that currently benefit from automating labor-intensive tasks at scale should begin modeling scenarios where that productivity advantage comes with a new tax treatment. The question is not whether this kind of policy will emerge - it is how quickly, and in what form.
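The revenue problem can be made concrete with a back-of-the-envelope sketch. All figures below are invented for illustration and do not come from OpenAI's paper; the point is only the shape of the argument: when labor's share of output falls, a labor-based tax can collect less even as the economy grows, while a tax on AI-driven profits recaptures the gap.

```python
# Hypothetical illustration of the tax-base shift described above.
# Every number here is invented for clarity; none are from OpenAI's paper.

def tax_revenue(output, labor_share, labor_tax, profit_tax=0.0):
    """Revenue when labor income is taxed at labor_tax and the
    remaining (AI-driven) profit share is taxed at profit_tax."""
    labor_income = output * labor_share
    ai_profit = output * (1 - labor_share)
    return labor_income * labor_tax + ai_profit * profit_tax

# Today: 100 units of output, 60% paid as wages, 30% labor tax, no profit tax.
today = tax_revenue(100, labor_share=0.60, labor_tax=0.30)

# Later: output grows to 150, but wages fall to 30% of output.
later_labor_only = tax_revenue(150, labor_share=0.30, labor_tax=0.30)

# Same later economy, with a 15% tax on AI-driven profits added.
later_with_profit_tax = tax_revenue(150, labor_share=0.30,
                                    labor_tax=0.30, profit_tax=0.15)

print(today, later_labor_only, later_with_profit_tax)
# The labor-only regime collects less in the larger economy;
# the profit tax more than closes the gap.
```

A larger economy producing less tax revenue is exactly the structural gap the paper identifies; the profit-tax line shows the recapture logic, not a specific rate recommendation.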
The four-day workweek proposal reinforces this logic. If AI absorbs a growing share of economically productive work, the distribution of that work across the remaining human workforce becomes a policy question, not just a human resources question. HR and workforce planning teams that treat this as a benefits conversation are reading it at the wrong level.
The Legitimacy Problem - And Why It Cannot Be Dismissed
The most serious criticism of this paper is structural, not technical. A company proposing the regulatory framework for its own technology has a fundamental conflict of interest, regardless of the quality of the ideas involved. History offers little comfort here. Industry-led frameworks have a consistent track record of favoring incumbents, slowing meaningful reform, and gradually capturing the regulatory bodies meant to oversee them.
The EU AI Act is the most comprehensive AI legislation enacted to date, but it was designed before the current wave of agentic AI systems - models that take sequences of autonomous actions across tools and environments. The legislation's scope does not map cleanly onto the capabilities that are now being deployed commercially. This gap between legislative timelines and technological development is precisely the opening that industry-shaped frameworks tend to fill.
The realistic near-term outcome is a hybrid arrangement: technology companies shape the technical standards, legislators set the redistribution mechanisms, and civil society organizations push for enforcement. OpenAI's paper is best understood as an opening position in that negotiation - not a final answer, and not a neutral one. For businesses, the relevant question is not whether this specific document becomes law. It is whether the direction it points toward becomes the regulatory horizon. The evidence suggests it will.
What Business Leaders Should Do With This Now
Even without legislation, a policy paper from the company building the most widely deployed AI systems in the world signals regulatory direction. Companies that plan ahead will have a structural advantage over those that wait for binding rules. The time between a policy proposal and its legislative adoption is exactly when preparation creates value.
Several practical steps follow from taking this document seriously. First, finance and strategy teams should begin modeling scenarios where AI-driven productivity gains face new tax treatment - either on profits, on compute, or on some hybrid measure yet to be defined. Second, companies that rely heavily on AI-driven labor substitution should assess their social license to operate in an environment where public wealth fund proposals are gaining mainstream policy traction. Third, early movers in voluntary AI auditing and provenance tracking may find regulatory advantages as mandatory frameworks emerge - the same dynamic that played out with carbon accounting before mandatory disclosure rules arrived.
The fundamental question this paper raises is not whether AI will change governance. It is whether the people building AI will be the ones who define what governance looks like. That question has significant consequences for every business operating in an AI-shaped economy - and waiting for the answer to arrive fully formed is not a strategy.
