The Window Is Closing: What OpenAI's Pivot Tells Us About AI Accountability

I was reading about OpenAI's latest strategy shift last week, and my brain did something unexpected. Instead of thinking about AI, I started thinking about GE. Fidji Simo, OpenAI's CEO of Applications, reportedly told employees in an all-hands meeting that the company is shelving its "side quests" to focus on coding tools and enterprise business clients. Sam Altman, who built OpenAI's identity around the idea of "betting on a series of startups" internally, is now evaluating which bets to cut. The company that told the world it was building AGI is now in a conversation about ROI.

And that's when GE and then IBM showed up in my head.

Because this isn't a product update. It's a pattern. The "do everything, move fast, figure out the business model later" phase running directly into the wall of financial reality and competitive pressure. It's a story we've seen before in American corporate history. The difference is that when GE and IBM went through it, the process took decades. OpenAI is doing it in years. And that gap, between how fast these companies are maturing and how fast the governance and accountability infrastructure can keep up, is what I actually want to talk about.

Image Credit: hh5800

The Arc We've Seen Before

GE and IBM are not the first names that come to mind when you're reading about AI strategy pivots. Both companies built their identities around being at the frontier of what was possible. Both expanded aggressively into everything they could reach. And both eventually hit the same wall: competitive pressure, revenue reality, and organizational complexity that made "doing everything" unsustainable. The response in both cases was to narrow focus, discipline operations, and pivot hard toward the customers who would actually pay reliably and at scale.

GE did it over decades of restructuring and divestitures. The harms it created along the way, environmental, financial, structural, were physical and visible. For example, a polluted river has an address, and the regulators, slow and imperfect as they were, had time to catch up because the evidence was right there in front of them.

IBM is actually the more useful comparison for what's happening with OpenAI right now. IBM was always an invention company first. The mainframe, the PC, Deep Blue, Watson. Its identity was built on being at the frontier. But Watson became IBM's most instructive failure. The company monetized the brand before the capability was actually there, most visibly in healthcare, and the gap between the promise and the product did real damage. The enterprise services business that eventually stabilized IBM financially also produced years of stagnation. IBM survived. But there's a difference between surviving and thriving, and the Watson chapter is a useful reminder of what happens when a company races ahead of its own hype.

Sound familiar?

This Is a Survival Move, Not a Strategy Flex

I want to be honest about something, because I think the framing of OpenAI's pivot as a confident strategic maturation misses what's actually happening. Anthropic has been quietly, then not so quietly, taking significant enterprise market share, particularly in coding and agentic tooling. Claude Code and Claude's enterprise integrations are being adopted by businesses that want reliable, safety-framed AI they can actually deploy without holding their breath. Consumer AI, where OpenAI still leads, is high volume and high visibility, but it's also low margin and vulnerable to churn. Enterprise contracts are sticky and predictable. They're the kind of revenue that justifies a valuation and funds the next model. OpenAI needs that revenue. That's not a dig, it's just the reality of the capital structure they're operating inside.

But here's the part that matters for the governance question. A company pivoting under financial pressure behaves differently than one pivoting from stability. When survival is part of the calculus, the incentive to voluntarily slow down and build accountability infrastructure, to ask who is responsible when something goes wrong, competes directly with the incentive to capture enterprise clients before a competitor does. Those two incentives don't resolve in favor of governance. Not without external pressure to make them.

There's Also a Model Problem

There's another layer here. The enterprise pivot may also be a response to something happening at the model level itself, not just the business level. The early capability jumps were dramatic and visible to anyone paying attention. GPT-2 to GPT-3 to GPT-4 felt like genuine leaps. The gaps between recent models are narrower, more specialized, and honestly harder for most people to articulate in practical terms. When you can no longer reliably differentiate on raw capability, you compete on distribution, integration, pricing, and workflow lock-in. Which is, again, exactly what an enterprise pivot is.

Anthropic read this earlier and made a different bet. Rather than chasing benchmark headlines, they built for reliability, safety framing, and developer tooling. That bet is paying off now, precisely because the benchmark race is yielding diminishing returns.

So OpenAI's strategic shift isn't just a business decision. It's a signal about where the technology itself is in its development curve. And a company racing to embed itself into enterprise workflows before its capabilities fully justify that embedding is a company building lock-in ahead of accountability. We've seen that movie too.

Why Is This Happening So Fast?

This is the question I keep coming back to. GE and IBM had decades to work through this. AI companies are compressing that entire arc into a few years. Why? Four reasons, and they compound each other.

The first is capital structure. Some people may push back here and say capital pressure isn't unique to AI, that Amazon ran at a loss for years and still matured at a measured pace. But that comparison doesn't hold up. We are not talking about the same scale. OpenAI's $40 billion raise at a $300 billion valuation, combined with Microsoft's multi-billion dollar commitment and the broader velocity of capital concentration across the sector, has no real modern analog, not even the dot-com era. The pressure that creates isn't just "move faster." It's existential in a way that previous tech cycles weren't. You either justify that number or the entire structure becomes very hard to defend. As one venture capital partner put it bluntly, 2026 is the "show me the money" year for AI, with enterprises needing to see real ROI and countries needing to see meaningful productivity growth to keep the infrastructure investment going. The runway that allowed older industrial companies to mature at a more human pace simply does not exist here.

The second is competitive visibility. IBM didn't have a real-time global leaderboard of competitors shipping products every few weeks. The pace of competitive response required in AI means strategic pivots that would have taken years of internal deliberation at companies like GE and IBM now happen in quarters. Someone might argue that hypercompetitive visibility also existed in the early internet era, that Netscape and Yahoo were watching each other in real time too. True. But whether the pivot is reactive panic or deliberate strategy, the outcome for governance is the same. Neither has historically produced companies that paused mid-pivot to ask accountability questions. IBM's enterprise pivot was methodical and planned, and it still gave us Watson in healthcare. GE's transformation under Jack Welch was one of the most celebrated strategic overhauls in corporate history, and it still built the financial division that nearly collapsed the company in 2008 and left decades of environmental liability behind. The "code red" language from OpenAI is evidence of the speed, and the speed is the problem, regardless of whether it reflects panic or strategy. Faster pivots, planned or not, shrink the already narrow window for accountability structures to develop alongside the technology.

The third is that the product accelerates its own development. AI tools are being used to build AI tools, to write code, to analyze markets, to draft strategy. The feedback loops are tighter because the technology is eating its own process. The closest historical parallel is the early internet, where internet tools built better internet infrastructure, but even that had hard physical limits including servers, cables, and hardware procurement cycles. AI's self-acceleration has fewer hard stops, which means the compounding effect is steeper and faster than anything we've seen before.

The fourth, and the one I think gets the least attention, is the lack of physical constraint slowing any of this down. GE couldn't build a new turbine factory overnight. IBM couldn't retrain its hardware engineers into software engineers in a quarter. Supply chains, physical infrastructure, organizational inertia: these things acted as natural governors on how fast a company could actually transform itself.

Someone could reasonably push back here and point to data center construction as an emerging physical constraint. Local opposition, permitting delays, power grid limitations, water access fights: my own reporting on the Vineland facility in New Jersey is evidence that the physical world is starting to push back on AI infrastructure. And that's true. But the timing matters. Many of these physical constraints are arriving after the lock-in, not before it. The companies have already embedded themselves into enterprise workflows, government contracts, and developer ecosystems. The constraint is showing up after the window has partially closed, which is exactly the governance timing problem I'm describing.

DataOne/Nebius Data Center, Vineland, NJ | Image Credit: www.app.com

AI companies can still redirect resources, reposition products, and reframe their entire market narrative in weeks in ways that GE and IBM simply could not. That absence of meaningful friction removes the governors that forced slower maturation in industrial companies. And those governors, as frustrating as they were, bought something valuable. Time for regulators to catch up. Time for accountability structures to develop. Time for the people affected by these systems to understand what was actually happening to them.

That time is largely gone.

A Different Kind of Externality

When GE created externalities, they were physical and locatable. A contaminated site. A polluted waterway. Harmful, yes, but visible, bounded, and attributable. I've been reporting on AI data center impacts in New Jersey for a while now, including the Nebius/DataOne facility in Vineland, where residents living half a mile away are recording constant industrial noise that exceeds EPA thresholds for protecting normal daily activity. That's a physical externality that has an address and falls under existing state regulations like N.J.A.C. 7:29, which sets the legal noise standards the facility is obligated to meet. The tools to address it exist, but what's been missing is the enforcement and political will to actually use them.

But AI also creates a different category of externality, and this is the one that keeps me up at night. When an AI system influences a hiring decision, a credit determination, a healthcare recommendation, or a content moderation outcome, the harm doesn't have an address. It's embedded in a workflow, three layers deep in an enterprise stack, operating at a scale and speed that makes individual accountability almost meaningless under the frameworks we currently have.

The compression of the maturity arc doesn't just mean AI companies are growing up faster. It means the governance infrastructure, regulatory, legal, cultural, that was never designed to move at software speed is now being asked to keep pace with a technology that has no physical brakes. A fast-maturing industry and an outpaced governance system are genuinely different problems, and we need to stop treating them like they're the same one.

We Don't Need to Start From Scratch. But We Do Need to Be Honest About What Exists.

The instinct when you identify a governance gap is to call for new regulation. I'd push back on that as the first move, not because governance doesn't matter, but because we haven't seriously tried to apply what we already have. And that matters, because the interpretive work of applying existing law to AI contexts largely hasn't happened yet.

I want to be clear about what I'm arguing here, because this is easy to misread. I'm not saying existing regulation is sufficient. I'm saying we don't actually know the full extent of what current frameworks can cover until we try, and right now we're skipping that step entirely and jumping straight to "we need new laws," which conveniently takes years and tends to get dominated by the companies being regulated during the drafting process.

Take environmental law. N.J.A.C. 7:29 doesn't need to mention AI anywhere in its text. Noise is noise. Water consumption is water consumption. The regulation covers the harm regardless of what's generating it. That's the cleanest example of existing law that could be applied right now, today, if enforcement agencies had the will to use it.

Consumer protection is actually further along than people acknowledge, though the picture is messier than it first appears. The FTC opened a consumer protection investigation into OpenAI in 2023 over data practices and reputational harm from ChatGPT inaccuracies. In 2024, the agency launched "Operation AI Comply," bringing enforcement actions against AI companies making deceptive capability claims, including a settlement with DoNotPay for marketing itself as "the world's first robot lawyer" without the capability to back it up. So "theoretically could apply" undersells it. It has applied. But then in December 2025, the Trump FTC reversed one of those consent orders, citing AI innovation concerns and arguing the order "unduly burdens artificial intelligence innovation."

So not only is enforcement episodic and under-resourced, it's subject to political reversal, and the current administration has gone further than any single reversal. The Trump AI Action Plan directed the FTC to review and potentially roll back the entirety of Biden-era AI enforcement. The December 2025 executive order moved to preempt state-level regulation from filling the vacuum. And on March 20, 2026, the administration escalated further, releasing a national legislative framework calling on Congress to codify federal preemption of state AI laws under a "light-touch" regulatory approach. Whether that legislation passes is uncertain (the framework faces resistance from both parties), but the direction of the pressure is unambiguous. The tools exist. Whether they stay in use, or whether any level of government is permitted to use them, is no longer just a question of agency priorities. It is now a live legislative contest.

Antitrust frameworks are directly relevant to the lock-in dynamics OpenAI is now deliberately engineering through enterprise integration, but no major action has been brought specifically around AI workflow lock-in yet. Securities law has seen some early SEC enforcement around inflated AI capability claims to investors, but it's sparse. The picture across all of these is the same: not untested exactly, but applied inconsistently, at nowhere near the scale the technology's deployment warrants.

Where the counterargument genuinely has teeth is in algorithmic discrimination and agentic decision-making, and I want to be specific about what I mean, because the stakes here are high and the gaps are real. Civil rights and employment law were written around human actors making identifiable decisions. When an AI system embedded three layers deep in a hiring platform produces discriminatory outcomes through a process nobody fully understands, the existing liability framework gets murky in ways that go beyond enforcement failure. That's a structural gap, not an enforcement one.

Mental health is another one, and an urgent one. I wrote about this in December: the AI personalization race and the near-complete absence of safety infrastructure around it. Two documented suicides connected to AI companion platforms. Zero comprehensive crisis protocols across the industry. The existing legal frameworks that govern mental health care were built around licensed human practitioners with clear duty of care obligations, malpractice liability, and mandatory reporting requirements. When an AI system is someone's primary emotional support, or is proactively making decisions based on detected emotional states, or is failing to flag a crisis signal, none of those accountability structures map cleanly onto what's happening. HIPAA governs data privacy but doesn't address AI decision-making liability. State mental health practice acts define who can provide therapy but weren't written to cover software. These aren't enforcement gaps. The statutory language genuinely doesn't cover what AI is doing in these spaces, and people are being harmed in that absence.

So the honest sequence is to apply what exists first (environmental law, consumer protection, securities, antitrust), because those tools are closer to ready than the conversation suggests. At the same time, do the serious work of identifying where the law genuinely doesn't reach. Then build targeted regulation only for those true gaps: algorithmic accountability, agentic liability, the places where the statutory language isn't just unapplied but genuinely absent. That's a defensible, intellectually honest position. And it's a lot faster than starting from scratch every time a new technology creates harms that take years to address.

The Window

Here's what the GE and IBM comparisons ultimately tell us. Both companies scaled faster than their accountability infrastructure developed. Both created harms that took years, sometimes decades, to fully surface and address. Both had moments where the window to build governance into the system was open, and that window closed before anyone walked through it decisively.

OpenAI is at one of those moments right now. A company under financial pressure, facing diminishing product differentiation, racing to embed itself into enterprise workflows before Anthropic and others consolidate that market. The incentive structure is pointing entirely toward speed and lock-in. Accountability is, at best, a secondary conversation.

The difference between now and the GE or IBM eras isn't just speed. It's that AI's externalities are embedded in systems that make decisions affecting real people at a scale and opacity that physical industries never came close to. The compressed maturity arc means there is less time to detect, correct, and course-adjust before these systems are too deeply entrenched to reform without enormous cost, political, financial, and human.

We are watching a pattern play out in real time that history has shown us before. The question is whether we're paying close enough attention to act while the window is still open.

I think we are. But barely.


This post was written by me, with editing support from AI tools, because even writers appreciate a sidekick.


References

Federal Trade Commission. (2024, September 25). FTC announces crackdown on deceptive AI claims and schemes [Press Release]. https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes

Federal Trade Commission. (2025, December 22). FTC reopens and sets aside Rytr final order in response to the Trump Administration’s AI action plan [Press Release]. https://www.ftc.gov/news-events/news/press-releases/2025/12/ftc-reopens-sets-aside-rytr-final-order-response-trump-administrations-ai-action-plan

Fried, I. (2026, January 1). 2026 is AI's "show me the money" year. Axios. https://www.axios.com/2026/01/01/ai-2026-money-openai-google-anthropic-agents

Ghaffary, S. (2025, April 1). OpenAI finalizes $40 billion funding at $300 billion value. Fortune. https://fortune.com/2025/04/01/openai-finalizes-40-billion-funding-at-300-billion-value/

Gold, A., Curi, M., & Weinger, M. (2026, March 20). White House releases Trump's national AI plan and framework. Axios. https://www.axios.com/2026/03/20/white-house-ai-plan-trump-framework

Kahn, J. (2025, December 5). How Anthropic’s safety first approach won over big business—and how its own engineers are using its Claude AI. Yahoo Finance. https://finance.yahoo.com/news/anthropic-safety-first-approach-won-193045896.html

Kerr, D. (2023, July 13). FTC investigating ChatGPT over potential consumer harm. NPR. https://www.npr.org/2023/07/13/1187532997/ftc-investigating-chatgpt-over-potential-consumer-harm

Kroot, K. (2025, December 11). The AI industry's personalization race has a safety problem. The Caffeinated Chronicle. https://justplainkris.substack.com/p/the-ai-industrys-personalization

Kroot, K. (2026, March 12). The overlooked cost of AI infrastructure. The Caffeinated Chronicle. https://justplainkris.substack.com/p/the-overlooked-cost-of-ai-infrastructure

Kulp, P. (2026, March 18). OpenAI's new main quest is an old one. Tech Brew. https://www.techbrew.com/stories/2026/03/18/openais-new-main-quest-is-an-old-one

N.J.A.C. 7:29. New Jersey Department of Environmental Protection noise control regulations. https://dep.nj.gov/wp-content/uploads/rules/rules/njac7_29.pdf

The White House. (2025, December 11). Ensuring a national policy framework for artificial intelligence [Executive Order]. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

The White House. (2026, March 20). A national policy framework for artificial intelligence. https://www.whitehouse.gov/wp-content/uploads/2026/03/03.20.26-National-Policy-Framework-for-Artificial-Intelligence-Legislative-Recommendations.pdf

U.S. Environmental Protection Agency. (2025, November 6). EPA History: Noise and the Noise Control Act. https://www.epa.gov/history/epa-history-noise-and-noise-control-act
