The bears have spreadsheets that say OpenAI can't possibly recoup its capex. The bulls have benchmarks that say frontier AI now beats human experts on most professional work. Both are correct. Both are answering the wrong question. Here's the question that actually matters for your portfolio.
If you've been reading about AI investing in 2026, you've absorbed some version of this argument: a small number of companies are spending ungodly sums on infrastructure they cannot possibly recoup. The math is brutal. The receipts are public. Here are the load-bearing facts.
These are real revenue numbers. They are also a fraction of what either company has committed to spend on compute over the coming decade. The arithmetic produces predictable bear takes:
Independent analyst Ed Zitron calls this the "subprime AI crisis" — arguing the cheap-token era is an illusion funded by hyperscaler cross-subsidies that cannot survive a downturn. Cory Doctorow goes further, alleging that the standard 5-year GPU depreciation schedule on hyperscaler balance sheets constitutes accounting fraud given that GPUs burn out in 2-3 years under intensive training loads.
Mark Cuban put it bluntly on Big Technology Podcast in May 2026:
And then there's the inconvenient question of whether AI actually works at scale. MIT's Project NANDA "GenAI Divide" report (July 2025) found that 95% of enterprise AI projects fail to deliver measurable ROI; only 5% of custom enterprise AI tools reach production. Anthropic's own Economic Index (March 2026) shows the gap between what AI could theoretically do and what it is actually doing — a yawning chasm in most occupational categories.
Independent AI analyst Alberto Romero, looking at the same Anthropic chart, summarized:
Now flip the lens. The same period that produced the bear arithmetic above also produced the most rapid capability and adoption gains in the history of any technology. These facts do not contradict the bear case. They sit alongside it.
Even the bears acknowledge the underlying technology is delivering value. From inside Oaktree itself:
JPMorgan's Jamie Dimon, in his April 2024 shareholder letter and at Davos 2026, has been clear:
Stanley Druckenmiller, who famously trimmed his Nvidia position in early 2024, separated capability from timing in a way that matters here:
Even Cuban — the same Cuban who said the labs will never recoup the capex — is unequivocal on what AI is doing:
Read the bear quotes again. Now read the bull quotes again. Notice anything?
The bear case is fundamentally about lab equity valuation — whether OpenAI at $850 billion or Anthropic at $380 billion can produce enough cash to repay their compute commitments before investors run out of patience.
The bull case is fundamentally about AI deployment — whether the technology genuinely changes economic activity, displaces labor, and reshapes industries.
These are two different questions with two different answers. Most public commentary treats them as one. They are not.
Cuban himself, in the same interview, holds both at once without contradiction:
That's the entire framework in one sentence. Bullish on deployment, bearish on lab equity recoupment, no contradiction.
A portfolio positioned for "AI is overhyped" can be wrong about deployment while right about lab equity. A portfolio positioned for "AI changes everything" can be right about deployment while wrong about lab equity. Sizing the right bet requires separating them.
Once you accept that lab equity is a separate question from AI deployment, the next puzzle becomes obvious. If the bear arithmetic is correct — and it is — why does the funding flow continue? Why hasn't the smart money exited?
Answer: the smart money is exiting. Continuously. You're just not seeing the mechanism.
Public bear analysts apply discounted cash flow models to OpenAI and Anthropic and conclude — correctly — that the math doesn't work. But sophisticated capital isn't doing DCF on these companies. They're option-pricing them as binary calls on transformative AI.
At any reasonable valuation, you only need to assign single-digit-percentage probability that the labs will capture some material fraction of a multi-trillion-dollar economic surplus to make the math work. That's the same playbook biotech investors use. Most clinical trials fail. The winners pay for everything.
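The option-pricing arithmetic above can be made concrete with a toy expected-value calculation. Every number below is a hypothetical illustration (the $30T surplus and 5% win probability are placeholders, not estimates for any specific lab); the point is only the shape of the math.

```python
# Toy sketch: why a single-digit win probability can justify a huge
# valuation under option-pricing logic. All inputs are hypothetical.

def expected_value(p_win: float, surplus_captured: float) -> float:
    """Expected payoff of a binary bet: win -> capture the surplus, lose -> zero."""
    return p_win * surplus_captured

valuation = 850e9   # hypothetical entry valuation ($850B)
surplus   = 30e12   # hypothetical capturable surplus if the bet pays off ($30T)
p_win     = 0.05    # single-digit probability of capturing it

ev = expected_value(p_win, surplus)
print(f"Expected payoff: ${ev / 1e12:.1f}T vs entry at ${valuation / 1e9:.0f}B")
print(f"Implied multiple on a win: {surplus / valuation:.0f}x")
# At these made-up inputs the expected payoff (~$1.5T) exceeds the entry
# price (~$0.85T), so the bet clears even though it loses 95% of the time.
```

This is the biotech playbook in miniature: the expected value is dominated by a low-probability, very-high-multiple outcome, so a portfolio of such bets can work even when most individual bets go to zero.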
This explains why the labs behave the way they do. Sam Altman's "all over the map" leadership style isn't a bug. It's the only behavior consistent with capturing option value. Anything signaling near-term profitability discipline would abandon the option premium that justifies the funding flow. As Cuban put it:
Translation: option-pricing logic stated colloquially.
Microsoft's investment in OpenAI is not a $13 billion bet on OpenAI's success. It is largely a passthrough — Microsoft gives OpenAI dollars; OpenAI gives those same dollars back to Microsoft as Azure cloud revenue, at high margin. Specific passthrough percentages widely cited in the press (around 96%) require primary verification, but the directional pattern is well-documented across industry coverage.
The same structure applies to Amazon-Anthropic. Anthropic is contractually obligated to spend much of Amazon's investment back on AWS Bedrock infrastructure.
What does this mean? Microsoft's actual cash exposure to OpenAI is rounding error. They got: a defensive moat against AWS, customer mindshare ("Microsoft is where you do AI"), the option to absorb OpenAI if it works, and time to build their own internal models. The equity stake is gravy. Even if OpenAI's equity goes to zero tomorrow, Microsoft has already won on this trade.
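The passthrough claim is ultimately an arithmetic claim, so here is a minimal sketch of it. The 96% passthrough share is the widely cited but unverified figure mentioned above, and the cloud gross margin is an assumed placeholder; the point is how quickly net cash exposure shrinks below the headline number.

```python
# Hedged sketch of the circular-revenue ("passthrough") arithmetic.
# passthrough_share is the widely cited, unverified ~96% figure;
# cloud_margin is a purely illustrative assumption.

investment        = 13e9   # headline investment in the lab
passthrough_share = 0.96   # fraction returning to the investor as cloud spend
cloud_margin      = 0.60   # assumed gross margin on that cloud revenue

cloud_revenue   = investment * passthrough_share  # dollars cycling back as cloud spend
margin_captured = cloud_revenue * cloud_margin    # gross profit kept by the investor
net_exposure    = investment - margin_captured    # actual cash at risk

print(f"Cloud revenue recycled: ${cloud_revenue / 1e9:.2f}B")
print(f"Gross profit captured:  ${margin_captured / 1e9:.2f}B")
print(f"Net cash exposure:      ${net_exposure / 1e9:.2f}B of ${investment / 1e9:.0f}B headline")
```

Under these assumptions, more than half the headline investment comes back as gross profit before the equity stake is worth anything, which is the sense in which the equity is "gravy."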
From inside the loop, Anthropic CEO Dario Amodei described the dynamic without flinching:
Note Amodei's word choice: "pretty confident." That phrase is doing the load-bearing work for the entire industry's funding flow. As Romero observes, it is the single piece on which the trillion-dollar architecture rests.
A meaningful slice of the AI capex isn't commercial at all. SoftBank, MGX (Abu Dhabi), the Saudi Public Investment Fund, Mubadala — these aren't ROI-maximizing investors. They view AI infrastructure the way governments viewed nuclear infrastructure in 1955: a strategic position that must be acquired regardless of whether the project ever generates conventional returns. Several governments have decided that being dependent on foreign-controlled AI is unacceptable. Strategic positioning is the return.
Nvidia and AMD have begun investing equity in their own customers. Nvidia's reported $100 billion investment in OpenAI is part of this pattern. The chip vendors have booked record margins from the buildout and are now using those margins to underwrite continued purchases. Risk transfers forward in time and away from the chip-vendor balance sheet — until it doesn't.
In 1999, dotcom losses concentrated in publicly held equity (WorldCom, Lucent, Cisco, Nortel). The current AI capex setup is structurally different. Sophisticated capital extracts along the way; the structural design transfers risk to public investors at IPO. By the time the bears are "right" about the valuations, the smart money is already out.
Morningstar analyst Brian Colello, quoted by Bloomberg in October 2025, was characteristically dry about it:
So far we've talked about two risks: lab equity (option-priced, casino) and infrastructure capex (correlated with macro fragility). There is a third risk that doesn't show up in either bucket. It's the question of whether AI's theoretical capability ever closes the gap with what real users actually deploy it for.
Anthropic's March 2026 Economic Index introduced an "observed exposure" metric — measuring how much AI is theoretically capable of doing in each occupation versus how much it is currently being used to do. The chart Anthropic published shows large blue regions (theoretical capability) and much smaller red regions (observed deployment). That gap can be read two ways.
The optimistic read: AI has enormous room to grow into. The bear read: AI's "capability" doesn't translate into deployment because real-world reliability is too low. Romero — sympathetic to neither extreme — described what he sees:
Romero's reading: AI is not a diffusion curve. AI is a cliff. Most users are stuck on the bottom shelf because the technology — for their specific use cases — isn't reliable enough to climb further.
The peer-reviewed reliability data supports this concern. Across 14 frontier models from OpenAI, Google, and Anthropic spanning 18 months of releases, capability metrics improved substantially while reliability metrics improved only modestly. The slopes diverge — capability racing up, reliability nearly flat. (See Romero's article for the specific Kapoor / Rabanser / Narayanan paper this draws from.)
Why this matters for investors: if real-world deployment plateaus while frontier capability keeps rising, the labs have to fund increasingly expensive capability gains while their actual revenue base stops scaling. That's a very different risk profile than "AI is a bubble" — it's the radiologist pattern at industry scale. Capable. Not replacing. Not generating breakthrough revenue.
For the AI-Rev scoring methodology, this validates the messiness-gap hedge that's already baked into our industry analysis: deployment lag is real and matters more than capability benchmarks suggest.
Three independent risks are converging on the same time horizon, and that convergence is where the systemic danger actually lives.
Line A (green) — Passive flow trajectory: 401(k) and IRA contributions exceed retirement decumulation today. The boomer cohort's transition from accumulation to decumulation is poised to flip net flows ~2027-2030. When that happens, the structural bid under mega-cap concentrated holdings (the hyperscalers, Nvidia, the Mag 7) erodes — not because anyone decides to sell, but because the buying force that has absorbed sellers for 40 years reverses direction.
Line B (blue) — Infrastructure obsolescence threat: Photonic interconnects are shipping now; photonic inference accelerators are 2-4 years from meaningful deployment. The bear case doesn't require photonics to actually displace GPUs — it requires only that the threat of obsolescence become credible enough at the moment of a capex decision to make hyperscaler boards pause new deployments. CFO logic in 2027 looks very different from CFO logic in 2024.
Line C (gold) — Lab funding inflection: The OpenAI IPO is the planned exit for sophisticated holders. If it happens cleanly in 2026-2027, the bagholder transfer completes. If it gets delayed (recession, regulatory shock, capability stall) and the labs have to keep raising private capital while burning $14-17 billion per year — that's where the casino's house gets caught short.
The actual systemic danger isn't any one of these lines individually. It's the overlap: a passive-flow flip and an architecture-obsolescence wake-up and an IPO market closure all hitting in the same 24-36 month window. That's where coordination failure becomes possible.
Which is exactly what Howard Marks was getting at, dryly:
The split between AI deployment and lab equity isn't academic. It generates concrete portfolio guidance:
If OpenAI IPOs in 2026-2027 at a multi-hundred-billion valuation, treat it as a venture-style position — small, binary, with an 80%+ chance of underperforming over 5 years and a 5-10% chance of returning many multiples. Don't size it as a growth-stock allocation.
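"Small, binary" can be quantified with Kelly-style sizing. The sketch below uses the article's illustrative odds (a win probability around the midpoint of 5-10%, total loss otherwise) and hypothetical payoff multiples; it is a sizing illustration, not a recommendation.

```python
# Hedged sketch: Kelly-style sizing for a venture-style binary position.
# The win probability and payoff multiples are illustrative assumptions
# drawn loosely from the text, not forecasts.

def kelly_fraction(p_win: float, net_odds: float) -> float:
    """Kelly criterion for a full-loss binary bet: f* = p - (1 - p) / b.
    Returns the fraction of the portfolio to allocate (0 if negative edge)."""
    f = p_win - (1.0 - p_win) / net_odds
    return max(f, 0.0)

p = 0.075  # midpoint of the 5-10% win probability in the text
for multiple in (10, 20, 30):
    b = multiple - 1  # net odds: a 10x return is 9x profit on the stake
    print(f"{multiple}x payoff -> Kelly size {kelly_fraction(p, b):.1%} of portfolio")
# At a 10x payoff the Kelly size is 0% (negative edge); even at 30x it is
# only a few percent of the portfolio.
```

Even under the generous 30x scenario, full Kelly allocates only a few percent — and practitioners typically size at a fraction of Kelly. That is the quantitative content of "don't size it as a growth-stock allocation."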
The deployment thesis is real and measurable. Avoiding AI-exposed industries because Sam Altman might be overpaid is a category error. The AI revolution happens regardless of what happens to lab equity.
That's where the measurable disruption lives. The legal industry shows the pattern: bar-admitted lawyers grow their share of total legal output while paralegals and entry-level legal support flatline. The same bifurcation is showing up across financial analysis, accounting, software development, customer support, journalism, and content production. Industries are not monolithic. The matrix scores them.
The most important variable is not "is AI a bubble" — it's "does the funding flow keep moving." Watch sovereign capital announcements, hyperscaler CFO discipline (Meta's -6% on capex guidance was the first canary), and IPO timing. If one funding source blinks before the planned exits complete, the cascade risk peaks.
Most investors are picking sides in a debate (bear vs. bull) when the actual question is which industries get repriced and by how much over 1, 2, 3, 5, and 10-year horizons. That's a math question, not a vibes question.
The matrix scores 28 industries across 5 horizons (1yr, 2yr, 3yr, 5yr, 10yr) using 8 analytical dimensions and 167 cross-industry effects. It tells you which industries get repriced upward, which get repriced downward, and by how much — independent of whether OpenAI's equity holds.
You're an investor who has read the bear cases and the bull cases and concluded — correctly — that both have a point and that something doesn't add up. You're right. This page is the framework that resolves the contradiction without picking a tribe.
You're an employee who can see AI changing your industry and is wondering whether the lab-equity bubble debate has any bearing on your career. It doesn't. The deployment thesis is real, your industry is being repriced, and the entry-level squeeze is showing up in the labor data already (Anthropic's index shows under-25 hiring in AI-exposed roles dropped 14%). A career-protection guide is in production for this audience specifically — join the early-access list.
You're a skeptic who has seen this pattern before. Good. The framework here is sympathetic to your skepticism. The AI lab equity story is, in many ways, exactly the bubble the bears say it is. The disruption story is exactly the revolution the bulls say it is. Both. At once. Different bets.