Paul Kedrosky argues that the current AI boom is a unique "meta bubble" combining the risky ingredients of every past financial mania.
He explains how complex financing, fragile business models, and a government backstop are fueling a massive data center build-out.
This analysis reveals the hidden financial dangers that could lead to a bust with widespread economic consequences.
Key takeaways
- The current AI and data center boom is a unique 'meta bubble' that combines all the ingredients of past bubbles: real estate, a powerful technology story, loose credit, and a notional government backstop.
- Data center construction acts as a massive, unintentional private sector stimulus program, accounting for as much as 50% of recent GDP growth, a factor often overlooked by policymakers.
- Special Purpose Vehicles (SPVs) allow companies to finance large projects like data centers off their balance sheets, but their complexity often obscures significant future risks regarding ownership if the project fails.
- The short lifespan of data centers, estimated at just three to four years, combined with tenant turnover, creates a risky asset-liability mismatch for the debt structures that fund them.
- The accounting lifespan of AI data center GPUs is much longer than their real-world useful life. Intense 24/7 usage for model training causes thermal degradation that can wear out a chip in as little as 18 months, creating hidden costs.
- To boost low returns from big tech tenants, data center owners are mixing in smaller, riskier tenants who pay more. This strategy increases yield but also introduces risks similar to those found in securitized financial products.
- A 'flywheel effect' can emerge when investors care only about yield. This drives the creation of more projects simply because they can be packaged and sold, a dynamic seen in previous financial bubbles.
- Unlike traditional software with fixed costs, the costs for Large Language Models rise linearly with usage, resulting in negative unit economics where companies lose money on each interaction.
- Financing AI infrastructure involves a fundamental temporal mismatch: 30-year loans are used to pay for GPUs that depreciate in two years, creating constant refinancing risk.
- To solve energy shortages, companies are building their own long-lived power plants for data centers, creating a new risk of stranded assets if the data centers become obsolete or unprofitable.
- When AI is framed as an existential competition between nations, spending becomes unlimited because it's viewed as a matter of survival, creating a global government backstop for the industry.
- The shortage of AI compute is being intensified by strategic hoarding, where companies lock up capacity not just for their own use, but to ensure competitors can't have it.
- The transformer models that power modern AI went from lab to market so quickly that they are likely bloated and highly inefficient, leaving significant room for improvement through alternative methods like distillation.
- When justifying massive spending, AI companies often pivot from building practical business tools to a 'faith-based' argument about AGI, treating it as an infinitely valuable call option.
- The most likely future of AI may not be glamorous AGI, but the widespread use of small, cheap 'micro models' to handle mundane back-office tasks like matching records.
- The financial risk from a potential AI bust extends to the broader public through equity funds holding retirement money for workers like firefighters and teachers, and even real estate investments in data centers.
Paul Kedrosky on the data center meta bubble
Paul Kedrosky became laser-focused on the data center story after a surprising discovery in GDP data. He noticed that data centers accounted for a massive fraction of GDP growth, around 50%, in the first half of the year. He compares policymakers who misunderstand this to his dog, who barks at the mailman and believes its barking causes the mailman to leave. In reality, the mailman would leave anyway. Similarly, politicians might attribute GDP growth to factors like tariffs or consumer resilience, when the real driver is a massive, unintentional private sector stimulus program in the form of data centers.
Paul frames this current boom as a historically unique "meta bubble." He argues it's the first time all the major ingredients of every historical bubble have been combined into a single event.
Let's create a bubble that takes everything that ever worked and put it all in one. And this is what we've done.
This meta bubble has several key components. It has a speculative real estate element, as data centers sit at the intersection of industrial spending and property. It possesses one of the strongest technology stories ever, comparable to rural electrification. It is fueled by loose credit, with private credit having largely supplanted commercial banks as lenders. Finally, it has a kind of notional government backstop, similar to the role Fannie Mae and Freddie Mac played in the housing bubble. Because all these powerful factors are present, Paul believes the notion that this can end in a gentle landing is "nonsense."
A brief side conversation notes how the term "private credit" has replaced the more sinister-sounding "shadow banking." Paul calls it a "financial witness protection program." This market has grown to a stunning size, around $1.7 trillion.
The financing game behind AI infrastructure
Massively profitable, cash-rich public companies like Google and Amazon face a dilemma when funding enormous projects like data center builds. They could fund them internally, but shareholders get upset when earnings per share are diluted by diverting too much cash into a single line of spending.
Public hyperscalers were already spending around 50% of their free cash flow on data centers, which is close to a practical limit: they have other uses for that cash, including showing a profit. To get around this, companies like Meta are turning to creative financing. They create Special Purpose Vehicles (SPVs), bring in other lenders and private credit firms, and keep the debt off their own balance sheets.
...we play this entertaining game of it's not really our debt, it's in an SPV, I don't have to roll it back onto my own balance sheet and then bring in new lenders, new private credit firms and others.
Private companies like OpenAI and Anthropic are in a different position. They don't have the luxury of massive, consistent cash flows. For them, raising external financing is not a choice; it's the only option they have to fund their ambitious projects.
SPVs are used for off-balance-sheet financing of data centers
A Special Purpose Vehicle, or SPV, is a financing tool used to keep debt off a company's main balance sheet. Mechanically, partners contribute capital into a separate legal structure, and that entity retains legal title to the project, such as a data center. This lets a company raise capital for the project without the associated debt appearing on its own balance sheet.
These structures can be incredibly complex. For example, a recent Meta project with Blue Owl was described as "wild and Byzantine."
It looks like something you might have seen in what was that in Harry Potter, the forest with all the spiderwebs. It looks a little like that where everything's connected to everything. And all I know is there's something in here is going to get me.
This complexity obscures future risks. Right now, no one is focused on what happens if the project underperforms. It's often unclear who will own the asset, what payment exchanges will occur, or what the recourse payments will be if ownership has to revert. These details will become very important in four or five years, but are currently being ignored.
Adding to the risk is the short lifespan of data centers, which may only be three or four years. This, along with tenant turnover, can create a significant asset-liability mismatch within the debt structure.
The hidden financial risks of AI data centers
About four years ago, tech companies extended the depreciation schedules for assets in data centers. This made sense for hardware used for simple storage, like in AWS S3 buckets, because those chips are not used intensively and have a long lifespan, perhaps six to eight years. However, this change coincided with the rise of GPU-driven data centers for AI. The GPUs in these centers have much shorter lifespans for two main reasons. First, technology changes quickly, and companies want the latest chips. Second, and more importantly, these chips are used differently. When training an AI model, a GPU runs flat out, 24/7, leading to significant thermal degradation.
It's like saying, I bought a used car and I don't care what it was used for. Well, if it turns out it was used by someone who was doing Le Mans 24 hours endurance with it, that's very different. Even if the mileage is the same as someone who only drove it to church on Sundays.
A GPU run this intensely might only have a useful life of 18 months. This creates a huge mismatch between the long depreciation schedule and the chip's actual lifespan, which has major implications for a data center's true capital expenditure requirements and profitability.
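To make the mismatch concrete, here is a minimal sketch in Python. The purchase price and six-year book schedule are hypothetical assumptions chosen for illustration; the 18-month useful life is the figure discussed above.

```python
# Sketch: book value vs. economic value of a heavily used training GPU.
# The price and book schedule are assumed; only the 18-month life is from above.

COST = 30_000          # purchase price per GPU, USD (hypothetical)
BOOK_LIFE_YEARS = 6.0  # extended accounting depreciation schedule (assumed)
REAL_LIFE_YEARS = 1.5  # useful life under flat-out, 24/7 training load

def straight_line_value(cost: float, life: float, age: float) -> float:
    """Remaining value under straight-line depreciation, floored at zero."""
    return max(cost * (1 - age / life), 0.0)

for age in (1.0, 2.0, 3.0):
    book = straight_line_value(COST, BOOK_LIFE_YEARS, age)
    real = straight_line_value(COST, REAL_LIFE_YEARS, age)
    print(f"year {age:.0f}: book ${book:,.0f}, economic ${real:,.0f}, "
          f"overstated by ${book - real:,.0f}")
```

By year two the books still carry two-thirds of the purchase price on hardware that may already be economically worthless, which is exactly the hidden cost Paul is pointing to.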
Data centers can also be viewed as giant commercial real estate properties with tenants. A data center with a single, high-credit tenant like Google will have a very low return, or cap rate, because Google has immense bargaining power. To improve yields, data center owners blend in different kinds of tenants, including smaller, 'flightier' ones who pay higher lease rates. While this boosts the overall yield of the data center, it functions like securitization, increasing the risk profile of the investment.
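A rough sketch of that blending logic, using made-up lease rates and default probabilities, shows how yield and risk rise together:

```python
# Sketch: blending tenants to raise a data center's yield.
# Shares, lease yields, and default probabilities are all hypothetical.

tenants = [
    # (name, share of capacity, lease yield, annual default probability)
    ("hyperscaler",       0.70, 0.05, 0.005),
    ("mid-tier AI firm",  0.20, 0.09, 0.05),
    ("small GPU startup", 0.10, 0.13, 0.15),
]

gross_yield = sum(share * rate for _, share, rate, _ in tenants)
risk_adjusted = sum(share * rate * (1 - p) for _, share, rate, p in tenants)
blended_default = sum(share * p for _, share, _, p in tenants)

print(f"gross blended yield:   {gross_yield:.2%}")     # ~6.6% vs. 5.0% single-tenant
print(f"risk-adjusted yield:   {risk_adjusted:.2%}")
print(f"weighted default risk: {blended_default:.2%}") # ~2.9% vs. 0.5% single-tenant
```

Blending lifts the headline yield from 5% to about 6.6%, but the weighted default risk rises nearly sixfold, which is the securitization-style trade-off described above.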
Financial crisis echoes in AI data center securitization
The financing of AI data centers is increasingly using asset-backed securities. These are securities tranched into different layers of risk, allowing various investors to get the specific exposure they want, from senior debt to equity. While these financial tools are not inherently bad and are standard in project finance, the current situation feels familiar to those who remember the 2008 financial crisis.
The main issue is the scale and the creation of a 'flywheel effect.' A secondary market develops where the purchasers of these securitized assets become detached from the underlying business. They are primarily interested in the yield the asset generates, not the specifics of the AI operations within the data center.
The people who are purchasing those things don't give a rat's ass what's going on inside this AI. They don't care what's going on inside the data center. It could be the World Hide and Go Seek Championships going on in there. I don't care, as long as it generates yield and I can securitize it.
This dynamic is analogous to past financial crises. It creates a feedback loop in which investor demand for yield-producing securities incentivizes the creation of more of the underlying assets, simply because they are easy to securitize and are backed by big names like Meta or Google. In a market environment where revenues are surging, this adds a layer of 'pixie dust' that makes the investments seem even more attractive.
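A stripped-down waterfall, with made-up tranche sizes and income scenarios, shows why buyers of the senior layers can afford to ignore what happens inside the data center: they are paid first, and equity absorbs the first losses.

```python
# Sketch: a simplified senior/mezzanine/equity payment waterfall for
# securitized data center lease income. Structure and numbers are illustrative.

def waterfall(income: float) -> dict[str, float]:
    """Distribute lease income to tranches in order of seniority."""
    promised = [("senior", 60.0), ("mezzanine", 30.0)]
    payouts, remaining = {}, income
    for name, owed in promised:
        paid = min(remaining, owed)
        payouts[name] = paid
        remaining -= paid
    payouts["equity"] = remaining  # equity keeps whatever is left
    return payouts

for income in (100.0, 80.0, 55.0):  # healthy, stressed, and bust scenarios
    print(f"income {income:5.0f}: {waterfall(income)}")
```

In the stressed scenarios, equity and then mezzanine are wiped out before the senior tranche loses a dollar, which is why senior buyers care about yield rather than operations.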
The fragile unit economics of AI
Current generation Large Language Models have a significant problem: negative unit economics. Unlike traditional software where costs are relatively fixed and can be spread across more users, the costs for LLMs rise more or less linearly with usage. Paul Kedrosky clarifies this with a classic business saying.
The term of art obviously is these things have negative unit economics, which is a fancy way of saying that we lose money on every sale and try to make it up on volume.
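A toy model with hypothetical prices makes the shape of the problem visible: a flat subscription set against an inference cost that scales with every query.

```python
# Sketch: flat-fee pricing vs. per-query inference costs. All numbers are
# made up; the point is the shape of the margin curve, not the exact values.

SUBSCRIPTION = 20.00       # flat monthly price per user, USD (assumed)
COST_PER_QUERY = 0.015     # inference cost per query, USD (assumed)
FIXED_COST_PER_USER = 1.0  # rough hosting cost for traditional software

for queries in (200, 1_000, 2_000, 5_000):
    llm_margin = SUBSCRIPTION - COST_PER_QUERY * queries
    saas_margin = SUBSCRIPTION - FIXED_COST_PER_USER  # flat regardless of usage
    print(f"{queries:>5} queries/month: SaaS margin ${saas_margin:6.2f}, "
          f"LLM margin ${llm_margin:7.2f}")
```

Light users are profitable, but heavy users flip the margin negative, and usage tends to grow as the product improves.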
Paul emphasizes that this doesn't mean AI is just a fad; it's an incredibly important technology. The discussion is about its funding and business models. He outlines several ways analysts try to make the math work, each with its own flaws. A bottom-up model might suggest that if every iPhone user paid $50, the revenue would be sufficient, but that is an unrealistic assumption. A top-down model looks at the total addressable market (TAM) for global human labor and assumes AI can capture a small percentage of it. Paul notes this is a classic cliché, like planning to capture 5% of the Chinese market, which never happens.
Another approach focuses on industrial or API users. For example, a large portion of Anthropic's revenue comes from just two large software developer clients, Copilot and Cursor. This creates huge fragility and customer concentration risk. If one of those customers decides to build its own internal model, as Cursor recently announced, a significant portion of revenue can disappear overnight. All of these models for justifying AI's economics carry a lot of built-in fragility.
Temporal mismatch and stranded assets in AI financing
When financing the massive capital expenditure for AI, the GPUs themselves serve as the collateral. However, this creates a significant problem: a temporal mismatch. Paul Kedrosky explains there are 30-year loans being used to finance GPUs, which are income-producing assets that effectively depreciate in about two years. This discrepancy creates a constant refinancing risk. While it might be easy to refinance now, it may not be in the future. He points to a wave of refinancings coming around 2028 for speculative data centers, questioning if they will be able to turn over their debt.
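A minimal amortization sketch, with assumed loan terms and the two-year collateral life mentioned above, shows how quickly the collateral disappears out from under the loan:

```python
# Sketch: a 30-year amortizing loan against GPU collateral that depreciates
# to zero in two years. Principal and rate are hypothetical.

PRINCIPAL = 1_000.0   # loan amount, $M (assumed)
RATE = 0.07           # annual interest rate (assumed)
TERM_YEARS = 30
GPU_LIFE_YEARS = 2.0  # effective economic life of the collateral

# Standard annuity payment for a fully amortizing loan.
payment = PRINCIPAL * RATE / (1 - (1 + RATE) ** -TERM_YEARS)

balance = PRINCIPAL
for year in range(1, 5):
    balance = balance * (1 + RATE) - payment
    collateral = max(PRINCIPAL * (1 - year / GPU_LIFE_YEARS), 0.0)
    print(f"year {year}: loan balance {balance:7.1f}, collateral value {collateral:6.1f}")
```

After two years roughly 98% of the loan is still outstanding while the collateral is worth nothing, so everything rests on the ability to refinance.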
This issue is compounded by the scarcity of energy. A major gating factor for AI growth is the difficulty of securing power. Paul gives the example of Amazon connecting three new data centers to the grid in Oregon, only to be told there was no power available for them. This has led companies to build their own power sources, such as natural gas plants connected directly to their data centers. That, however, introduces another temporal mismatch. A natural gas plant is a long-lived asset, running for 25 to 30 years, and forecasting a data center's income over that same period to pay off the power plant is nearly impossible. The likely result is stranded assets: natural gas plants built for data centers that are no longer useful.
God help you if you think you can sort that out because what you've really got is a huge likelihood of a stranded asset out there. Natural gas plants that are no longer useful for powering these things that they were built for.
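The stranded-asset risk can also be put in net present value terms. In this sketch (all inputs hypothetical), the plant only pays for itself if the data center keeps buying power for close to its full 30-year life:

```python
# Sketch: NPV of a dedicated gas plant under different assumptions about how
# long the data center keeps paying. Capex, income, and rate are made up.

CAPEX = 1_000.0       # plant construction cost, $M (assumed)
ANNUAL_INCOME = 90.0  # net income while the data center is a customer, $M
DISCOUNT = 0.08       # discount rate (assumed)

def npv(years_of_income: int) -> float:
    """Present value of income over the horizon, minus upfront capex."""
    pv = sum(ANNUAL_INCOME / (1 + DISCOUNT) ** t
             for t in range(1, years_of_income + 1))
    return pv - CAPEX

for horizon in (5, 10, 20, 30):
    print(f"{horizon:>2} years of data center income: NPV {npv(horizon):8.1f}")
```

Under these assumptions the plant is deeply underwater at any horizon over which a data center's income can plausibly be forecast, and only breaks even near the full 30 years.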
The conversation also touched on the massive funding announcements in the AI space. Paul views this as a game-theoretic deterrence program. By announcing a $100 billion spending plan, a company effectively tells competitors not to even bother with their own $50 billion plan. It is like everyone showing up at the O.K. Corral, each trying to show they have more firepower to deter others from even starting a fight.
The AI arms race is fueling a compute hoarding strategy
The AI race is often framed as an existential competition between nations. Paul Kedrosky notes that once something is called existential, spending becomes effectively unlimited because it is a matter of survival. This has led to the idea of a government backstop, where the state can theoretically print as much money as needed to win. This isn't just a US versus China dynamic; it's a global phenomenon. Countries like Canada, the UK, and Germany are also pushing to build their own "sovereign AI," creating a worldwide government-backed push that provides a limitless source of capital for the industry.
This competitive environment influences corporate strategy. Companies are looking to expand vertically, with Anthropic wanting to get into data centers and Sam Altman considering owning semiconductor fabs. A peculiar example is Meta, a hyperscaler with its own data centers, buying compute from a smaller "neocloud" provider called CoreWeave. According to Paul, the reason for this is simple: hoarding. Companies are locking up compute capacity wherever they can find it, not necessarily because they need it, but to prevent competitors from getting it.
This is a way of making sure that I may not need this, but you sure can't have it. And so there's an element of compute hoarding going on across the map because of this backlog in building data centers that may or may not ever get built.
This strategy of controlling a resource to block others is driven by the long delays in building new proprietary data centers. He compares this behavior to the Hunt Brothers attempting to corner the silver market.
China's AI distillation approach may have an edge
The US and China appear to be in an existential race for AI supremacy, but they are taking very different approaches. The US strategy revolves around spending vast amounts of money to develop state-of-the-art, mostly closed-source models. In contrast, China's approach seems focused on rapid adoption and creating open-source models that enter the market faster and more cheaply.
Paul Kedrosky suggests that an approach closer to China's is likely to prevail. He reframes their strategy not just as open source, but as a form of "distillation." He uses an analogy to explain this: a sales manager doesn't train every individual salesperson. Instead, they train one person who then trains everyone else. This is the "train the trainer" model.
In AI, this means using large, compute-intensive models from companies like OpenAI or Anthropic to train smaller, more efficient models. This distillation process dramatically speeds up the creation of new models. It is possible because transformers, the architecture underlying current large language models, are incredibly inefficient: they moved from the lab to the market faster than any product in tech history.
The transformer models that underlie large language models that are so computationally intensive went from the lab to the market faster than any product in technology history. So they're absolutely bloated and full of crap.
The US followed a path that leads directly from the original 2017 transformer paper. China, however, is demonstrating that there are huge efficiency gains to be had through methods like distillation. Kimi, a recent open-source model, is proving very effective and benchmarks well because it was trained with the help of other powerful models.
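For readers unfamiliar with the mechanics, here is a minimal PyTorch sketch of the classic soft-label distillation loss (Hinton et al., 2015). The random tensors are toy stand-ins for the outputs of a real teacher and student:

```python
# Sketch: knowledge distillation. A small student model is trained to match a
# large teacher's softened output distribution plus the true labels.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 2.0, alpha: float = 0.5):
    """Blend a soft-label KL term (teacher guidance) with hard-label CE."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 so gradient magnitudes stay comparable.
    kl = F.kl_div(soft_preds, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1 - alpha) * ce

# Toy usage: a 10-class problem with random logits standing in for real models.
teacher_logits = torch.randn(8, 10)              # frozen large model's outputs
student_logits = torch.randn(8, 10, requires_grad=True)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()                                  # gradients flow only to the student
print(f"distillation loss: {loss.item():.3f}")
```

The student never needs the teacher's weights or training data, only its output distribution, which is why distillation lets cheap models bootstrap off expensive ones.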
AI's business reality versus its faith-based AGI dream
Current forecasts for the future demand for computing power may be completely wrong. Training models constitutes 70% of the workload on data centers, but the current approaches to training are bloated and inefficient. This raises the question of what AI companies truly believe they are building.
One theory is that they are creating practical business tools. If so, these tools could eventually become 'good enough', at which point the massive investment in building them would cease and cash flow would increase. The alternative is a more science-fiction-oriented goal of building Artificial General Intelligence (AGI), a pursuit that would require a near-infinite amount of resources.
Paul Kedrosky explains that companies publicly state they are building effective productivity tools. However, when challenged on the return on investment for their enormous spending, the argument shifts to what he calls a 'faith-based' justification for AGI. They frame it as the ultimate call option. Paul finds this logic flawed.
What would you pay for a call option that could get you anything? It's like, well, wait a minute, this isn't a way of justifying any particular expenditure. This is just faith based argumentation.
Meanwhile, the problems AI is actually solving are often mundane, like onboarding new suppliers by correcting their zip codes. Small, efficient language models are excellent at these tasks. These 'micro models' are cheap, require a fraction of the training, and cannot justify the massive spending of the current AI boom. This practical, less glamorous future seems most likely, driven by widespread use of micro models for routine background tasks.
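To give a sense of how mundane and cheap these tasks are, here is a standard-library sketch of fuzzy record matching on made-up data; a micro model is the step up from this for messier, real-world cases:

```python
# Sketch: matching invoice records to CRM records with simple fuzzy matching.
# All records are invented; a small language model would handle harder cases.

from difflib import SequenceMatcher

crm_records = ["Acme Corp, 94107", "Globex LLC, 10013", "Initech Inc, 78701"]
invoices    = ["ACME Corporation, 94107", "Globex, 10013", "Intech, 78701"]

def similarity(a: str, b: str) -> float:
    """Case-insensitive fuzzy similarity between two record strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for invoice in invoices:
    best = max(crm_records, key=lambda record: similarity(invoice, record))
    print(f"{invoice!r:28} -> {best!r} (score {similarity(invoice, best):.2f})")
```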
Sarah Friar adds that she is still annoyed that AI is being applied to fun, creative fields like art and music rather than the tasks she doesn't want to do, such as folding laundry or matching customer records.
Why the AI investment boom is a risky stimulus
The massive AI capital expenditure in the US can be viewed as a private sector stimulus program. According to traditional economic theory, it doesn't matter what people are paid to do, even digging holes and filling them back in, as long as money is circulating in the economy. However, this view is misleading in the context of AI investments.
Unlike simple stimulus, these are investments created with an expectation of a return. If they fail to deliver, it could trigger a massive negative wealth effect that flows backwards through the economy. This risk is amplified because a large portion of the S&P 500 is tied to these AI-related companies. The concern is that these investments might be a wasting asset, built on flawed assumptions and projections about future demand for AI services.
This creates a deep structural problem, and the potential fallout is not confined to private equity firms. The risk extends to the broader public, including firefighters and teachers whose retirement money is in equity funds. It also affects real estate, as REITs increasingly hold data centers as major assets.
