
Dwarkesh Podcast

China is killing the US on energy. Does that mean they’ll win AGI? - Casey Handmer

Aug 15, 2025 · 22 min read

What if the race for AGI is not about who writes the best code, but who can win a brute-force industrial war over solar panels and silicon?

Casey Handmer, a Caltech PhD and founder of Terraform Industries, dismantles the conventional wisdom on energy and manufacturing. He explains that for AI companies, the primary constraint is not the cost of power, but its sheer availability and scalability. His framework reveals a future where hyperscalers bypass the broken electrical grid entirely, building massive, dedicated solar farms to power the path to AGI dominance.

Key takeaways

  • The race for AI dominance is fundamentally an industrial competition, hinging on who can build the most solar panels, batteries, and GPUs, not just who writes the best code.
  • AI companies aren't sensitive to the cost of electricity, but to its availability. The economic value an AI service generates can be 10,000 times greater than the cost of the power it consumes.
  • Hyperscalers are currently building natural gas power plants for new data centers out of pragmatism and speed, but this is a short-term solution that will hit hard supply constraints.
  • The electrical grid suffers from a kind of economic disease where generating power is cheap but delivering it is incredibly expensive, forcing large-scale AI to revert to an older model: building its own dedicated power plants on-site.
  • Solar is poised to power the AI revolution not primarily because it's cheap, but because it's scalable. The supply of gas turbines is limited, whereas solar technology improves on a steep, predictable learning curve.
  • We are still in the 'Apple II computer era' of solar power. The technology's cost reduction and adoption rates are not just continuing, but accelerating.
  • The idea that we lack land for large-scale solar is 'garbage.' For a massive AI project, the cost of land is a rounding error—perhaps 0.1% of the total budget, which is dominated by the cost of GPUs.
  • To achieve near-perfect uptime, AI data centers will need to massively overbuild their solar capacity. This surplus energy can then be provided to the local community, turning a power drain into a power source.
  • The future of large-scale computation may be self-contained, off-grid 'worlds'—vast solar farms in remote areas with data centers in the middle, connected to civilization only by a fiber optic cable.
  • Ironically, decades-old environmental regulations are now a primary obstacle to deploying renewable energy, creating absurd situations where the environmental review process is more harmful than the project itself.
  • The AI revolution is about routing around human cognitive constraints, much like the first Industrial Revolution was about routing around metabolic energy constraints.
  • After 4 billion years of increasing biological complexity, we may be witnessing the beginning of a 'great collapse' to a simpler, hyper-efficient form of life: a silicon-based system that converts starlight directly into thought.


Why the US can still win the industrial race against China

00:17 - 06:05

The race for AI dominance is fundamentally an industrial one, hinging on who can build the most solar panels, batteries, and GPUs. At first glance, this seems to favor China, which has a massive lead in manufacturing, producing 20 times more solar panels annually than the United States.

However, Casey Handmer questions China's strategic efficiency, pointing to their massive investment in high-speed rail as a sign of poor capital allocation in the modern era. Geopolitically, the US has a significant advantage.

The United States is the luckiest goddamn country on earth because it's surrounded on two sides by oceans and on the other two sides by friendly allies.

In contrast, China is surrounded by 15 mostly hostile countries and is dependent on oil imports from the Middle East, transported via sea lanes it cannot fully defend. Technology like synthetic fuels could turn China's massive electricity production into an overwhelming advantage. Despite this, Casey argues against counting the US out, warning to "never underestimate the capacity for an autocratic dictatorship to shoot itself in the foot."

While the US uses export controls on chips to maintain its AI lead, China could theoretically retaliate with controls on solar panels and batteries. Yet, this would likely hurt China more. The US's solar manufacturing capability is only about five years behind and could be ramped up.

Casey also disputes common beliefs about China's manufacturing advantages. He argues the idea that China is more business-friendly is "absolutely crazy," citing mandatory CCP inspectors on company boards and a weak rule of law requiring bribes. He is confident the US can compete with automated manufacturing.

The idea that the United States cannot compete against that with mostly or fully automated solar panel manufacturing in the United States, which has cheaper natural gas, by far abundant oil, abundant human resources, great financial capacity, world leading automation, et cetera, is crazy. We could literally copy paste solar manufacturing factories.

Why AI data centers will build their own power plants

06:05 - 13:49

The United States could scale its domestic solar panel manufacturing capacity in two years or less if it were treated with "World War II levels of motivation." However, this is not the current reality. Instead, the hyperscalers building massive new data centers for AI are choosing natural gas for power.

Casey Handmer explains this choice is driven by speed and pragmatism. For a company like xAI trying to build a data center quickly, it's faster to adapt an existing building and tap into a local natural gas pipeline than to build new solar infrastructure. Natural gas pipelines also have a much higher energy transmission capacity than overhead electrical lines. This solution works for the first few data centers.

However, as the AI industry's power demands scale exponentially—from one gigawatt per year to 50 or even 100—this approach will hit hard constraints. These include limitations on natural gas availability, turbine manufacturing rates, and overall grid capacity. Casey draws an analogy to industrialist Henry Kaiser, who, during WWII, ended up building not just shipyards but also his own steel mill and mine to overcome supply bottlenecks. Similarly, AI companies may need to build out their own energy supply chains.

A fundamental problem is the economics of the existing power grid. The cost of generating electricity is relatively cheap, but the cost of delivering it is extremely expensive. This delivery cost is inflated by several factors, including unionized labor, prolonged legal battles over land for new transmission lines, and challenges like wildfires. Casey calls it a "poster child for Baumol's cost disease."

The problem that we see, and the reason that PG&E here in California, for example, is perpetually on the brink of bankruptcy, is that even though the cost of an additional solar panel or additional wind turbine or additional gas turbine or whatever is relatively cheap, getting that power to your house is actually really expensive.

Because of these unsustainable delivery costs, Casey predicts a "large scale pruning" of the electrical grid. Large, power-intensive facilities like AI data centers will have to revert to an older model: building their own dedicated power plants on-site.

The surprising efficiency of dedicated power plants

13:49 - 14:25

It might seem inefficient for every industrial facility to have its own redundant power plant. Historically, an aluminum plant would have built its own power plant. If an operation is sensitive to the cost of power, building a dedicated source is a necessity.

Is it inefficient for the xAI Colossus data center to have its own captive power plant, which it does, right on the backs of a bunch of trucks in their parking lot? No, it's not inefficient. It's the cheapest way for them to get power.

The fundamentals of natural gas power generation

14:25 - 16:07

New data centers are being built with natural gas power, even though they won't be online for many years. Casey Handmer suggests this is partly because the supply of gas turbines hasn't been exhausted yet, though the existing supply is spoken for until about 2030. Ramping up production of these turbines is also relatively expensive.

Conventional power generation is fundamentally a steam engine. It starts with a chemical from the earth, like coal or gas, which is burned to create heat. This heat boils water, and the resulting steam creates motion. This motion is then used to generate electricity.

...it makes heat and you boil water and the water goes through some kind of mechanical contrivance that creates motion and then that motion twists a magnet and generates an electrical field which then pushes electrons down wires, which then push electrons through a series of gates that then approximate thinking.

The key step of converting heat into motion is done with a thermodynamic cycle: steam-based plants such as nuclear and coal plants use the Rankine cycle, while gas turbines and jet engines use the related Brayton cycle (combined cycle gas plants use both). Either way, the machinery is inherently expensive to build because it involves specialized components, like Inconel parts, spinning at very high speeds.

Why solar is winning the race to power AI

16:07 - 21:16

Hyperscalers running AI services are not particularly sensitive to the cost of power. This seems counterintuitive, but the economic value they generate is immense compared to the cost of electricity. Casey Handmer explains that the value of using an AI service like Claude or Grok can easily be 100 or even 1000 times the subscription price. The marginal electricity cost is a tiny fraction of the total cost to serve a user.

It's like 10 cents of electricity is generating $1,000 worth of economic value. So it's very obvious that Anthropic could be like, our electricity cost basis has increased by a factor of 100. And now instead of paying 10 cents on your $100 bill for power, you're paying 10 bucks.
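The arithmetic behind that quote fits in a few lines. The sketch below uses the episode's round numbers (10 cents of electricity backing $1,000 of value), not measured data:

```python
# Price-insensitivity sketch: if 10 cents of electricity backs $1,000 of
# economic value, even a 100x electricity price shock barely dents the
# economics. Figures are the episode's round numbers.

electricity_cost = 0.10   # $ of power behind one unit of AI service
economic_value = 1_000.0  # $ of value that unit of service generates

value_multiple = economic_value / electricity_cost  # ~10,000x
shocked_cost = electricity_cost * 100               # ~$10 after a 100x shock

print(f"Value multiple: {value_multiple:,.0f}x")
print(f"Power cost after 100x shock: ${shocked_cost:.2f}")
```

Even after the hypothetical 100x shock, power is $10 against a $100 bill, which is why availability, not price, is the binding constraint.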

Given this price insensitivity, one might wonder why the future of data centers is solar. The primary reason is not cost, but supply and scalability. There are not enough gas turbines being manufactured to meet the exploding demand. More importantly, the learning rate for solar technology is far steeper than for natural gas. Solar's Wright's Law coefficient is 43%, meaning costs decrease by 43% every time cumulative production doubles. This happens roughly every two and a half years, fueling a cycle where lower prices create skyrocketing demand, which in turn drives more production and further cost reductions.
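The learning-curve claim above can be turned into a projection. This is a minimal sketch assuming the stated 43% cost decline per doubling and a doubling roughly every 2.5 years; the $0.08/W starting module price is the figure quoted later in the episode:

```python
# Wright's Law sketch for solar modules: cost falls by the learning rate
# with every doubling of cumulative production. Inputs (43% learning
# rate, 2.5-year doubling time, $0.08/W start) are the episode's round
# numbers, not authoritative forecasts.

def solar_cost(start_cost_per_watt: float, years: float,
               learning_rate: float = 0.43,
               doubling_period_years: float = 2.5) -> float:
    """Projected module cost ($/W) after `years` on the learning curve."""
    doublings = years / doubling_period_years
    return start_cost_per_watt * (1 - learning_rate) ** doublings

for years in (0, 2.5, 5, 10):
    print(f"year {years:4.1f}: ${solar_cost(0.08, years):.4f}/W")
```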

Conventional wisdom, which often predicts that solar demand will saturate, has been consistently wrong. Casey argues that solar adoption and price decreases are not just continuing; they are accelerating, and the rate of acceleration is also accelerating. He compares the current moment to the very beginning of a technological revolution.

We're still at like the Apple II computer era of solar.

However, if rising AI demand drives efficiencies for solar, it should logically do the same for the components of a natural gas-powered future, such as turbines and transformers. The question remains why solar's supply chain is uniquely positioned to benefit from this demand surge.

The long-term risk of investing in gas over solar

21:16 - 23:39

Making long-term capital investments, such as a bank funding General Electric to expand gas turbine production, is fraught with uncertainty. The payback period can be decades, but a decision-maker doesn't know if an AI bubble might burst, if geopolitical events like a Chinese invasion of Taiwan might occur, or if competitors will innovate faster. The core issue is whether the investment will remain viable over its entire lifespan.

This is especially true in the energy sector, where renewables are rapidly changing the economic landscape. Casey questions the long-term viability of gas turbines when compared to the falling costs of solar and battery storage. He feels the outcome is already clear.

What are the odds that in 25 years time we can produce gas turbines at a price that is relevant in a world where solar is already at its current price and batteries are at a price where they're already. You cannot win.

A similar dynamic occurred when memory manufacturers like SK Hynix and Samsung were hesitant to ramp up HBM production for AI workloads, fearing the demand was temporary. This reluctance is often a negotiating tactic. When a company says something can't be done, it can be a signal that they need financial incentives.

When someone says we can't do it, we won't do it. No way, no how. What they're saying is, write me a check.

Looking ahead, the power source for data centers is set to shift dramatically. While natural gas currently provides 43% of US data center power, the future is likely to be dominated by solar. Legacy power sources like coal are becoming more expensive to operate than building new solar plants. Combined with the exponential growth in data center energy consumption, new solar capacity will increasingly dilute and replace older fossil fuel infrastructure, with projections suggesting data centers could be almost 100% solar-powered by 2040.

Future AI data centers will be mostly powered by solar

23:39 - 27:01

The primary constraint on the ramp-up of AI is the production of silicon, specifically GPUs. Casey Handmer points to a forecast predicting a tenfold increase in H100-equivalent GPUs by 2028, from 10 million to 100 million. This surge would create a demand for approximately 100 gigawatts of power. Major tech companies are already planning for this, inquiring about 5-gigawatt projects that are over 90% solar.
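The forecast above implies roughly one kilowatt per GPU once cooling and facility overhead are included; that per-GPU figure is my assumption for illustration, not a number from the episode:

```python
# Sanity check on the quoted scale-up: 100 million H100-class GPUs
# implying ~100 GW of demand works out to ~1 kW per GPU all-in
# (assumed: ~700 W chip plus cooling and datacenter overhead).

H100_EQUIVALENTS = 100e6       # forecast GPU count by 2028
WATTS_PER_GPU_ALL_IN = 1_000   # assumed all-in draw per GPU

total_power_gw = H100_EQUIVALENTS * WATTS_PER_GPU_ALL_IN / 1e9
print(f"Fleet power demand: ~{total_power_gw:.0f} GW")
```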

Based on these trends, Casey predicts that by 2027, the majority of new data centers breaking ground will be mostly solar-powered. An alternative approach involves tapping into latent capacity on the existing grid, often at sites of former industrial plants like aluminum smelters. This strategy would run existing power plants at higher capacity factors, but it would require the data center to maintain a massive captive battery plant so it can curtail its grid draw when the grid is stressed.

The main advantage of using the grid is that the power infrastructure already exists, and it avoids the need for huge tracts of land. The pure solar approach, in contrast, is fundamentally a large-scale land operation.

The problem with the solar approach is that there's no two ways about it. It's a farming operation. You need a huge amount of land. The total amount of land that you're using, less than 1% is under batteries, under roads, under data center structures, et cetera, et cetera. It's mostly solar.

The economic logic of overbuilding solar for AI

27:01 - 31:35

When considering a 5-gigawatt plant to power AI data centers, the primary challenge is not the cost of energy, but its availability. The main costs are the chips themselves. The real hurdle is securing large, continuous plots of land and obtaining the necessary permits for grid interconnection, a process Casey Handmer describes as a nightmare.

However, the popular notion that there isn't enough land for large-scale solar is a misconception. Casey dismisses this idea as "garbage."

If you've ever flown in an aircraft in the United States and you've ever looked out the window, you'd be like, oh, wow, look, there's a lot of land you could put solar on, especially west of, like, 110.

Vast areas in places like Texas or Nevada, which is 90% federal land, are suitable. The terrain doesn't even need to be flat. Similarly, in Europe, less populated areas like Spain could host large solar installations. The seasonality of sunlight is less of a concern because energy costs are a small fraction of the total project expense.

To maximize the return on expensive GPUs, AI data centers require extremely high uptime, around 99.99%. To achieve this reliability, especially in winter, it's necessary to significantly overbuild solar capacity. Casey argues this isn't a negative outcome.

Is the fact that we produce 40% more food than we need a bad thing? No. It's much better than producing 40% less than we need.

This overbuilt capacity means the data center will produce far more power than it needs for most of the year. This surplus can be sold to the local utility, turning the data center from a power drain into a power source for the surrounding community. The utility could simply ask to use the excess energy at nearly zero marginal cost. This shift is happening because solar is becoming so cheap it's treated like hard drive space—it makes sense to get an excess amount just in case. As a result, the AI market, where most GDP growth is occurring, will drive innovation in power generation and storage.

The logistical blueprint for an off-grid AI data center

31:36 - 37:43

The vision for an off-grid data center involves vast fields of solar arrays with batteries and data centers located in the middle. This creates a completely self-contained world of computation on private land, perhaps in an inhospitable area like the backwoods of Texas. Its only connection to the outside world would be an optical fiber cable, or potentially a microwave or Starlink link.

A concrete example helps illustrate the scale. For a one-megawatt data center rack to achieve high reliability ('four nines' of uptime), it would require about 24 hours of battery storage. This equates to roughly six Tesla Megapacks, delivered in six truckloads. The data rack itself is about one truckload of equipment. To power this system in a location like South Texas, about 10 acres of solar panels would be needed, which includes a significant overbuild to ensure consistent power.

Scaling this up, a five-gigawatt facility would require approximately 50,000 acres of land. While this seems massive, it is comparable to historical government projects. The Hanford site, used for plutonium production during the Manhattan Project, was around 100,000 acres.

There is a balance between components. One could trade off the number of solar panels against the amount of battery storage. Adding a backup power source like a diesel generator can also change the calculation. For instance, having diesel backups for just 10% of the time could potentially reduce the required number of solar panels by 60%. While building a massive solar farm is a high capital expenditure, Casey notes that this cost is still dwarfed by the primary expense: the GPUs themselves.
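The sizing numbers above hang together arithmetically. The sketch below assumes a Megapack holds roughly 3.9 MWh and that the 10-acres-per-megawatt figure already includes the winter overbuild; both are assumptions chosen to recover the episode's round numbers, not a design:

```python
# Rough reconstruction of the off-grid sizing arithmetic: a 1 MW rack
# with 24 hours of storage, then scaled to the 5 GW facility.
# MEGAPACK_MWH and ACRES_PER_MW are assumed values, not quoted specs.

RACK_POWER_MW = 1.0
STORAGE_HOURS = 24     # enough battery for 'four nines' of uptime
MEGAPACK_MWH = 3.9     # assumed capacity of one Tesla Megapack
ACRES_PER_MW = 10      # assumed for South Texas, incl. winter overbuild

storage_mwh = RACK_POWER_MW * STORAGE_HOURS    # 24 MWh of batteries
megapacks = round(storage_mwh / MEGAPACK_MWH)  # ~6 units, ~6 truckloads

facility_mw = 5_000                            # the 5 GW facility
facility_acres = facility_mw * ACRES_PER_MW    # ~50,000 acres

print(f"{storage_mwh:.0f} MWh -> ~{megapacks} Megapacks; "
      f"5 GW site -> ~{facility_acres:,} acres")
```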

Land cost is a fraction of a $250 billion project

37:44 - 38:05

For a project with a budget of around $250 billion, the cost of land is surprisingly small. Acquiring 50,000 acres in Texas would likely cost hundreds of millions of dollars. This means the land itself represents only about 0.1% of the total project cost.
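That 0.1% figure is simple division. The per-acre price below is an assumption consistent with "hundreds of millions of dollars" for 50,000 acres:

```python
# Land as a fraction of a GPU-dominated budget. The ~$5,000/acre price
# is an assumed round number for rural Texas land, not a quoted figure.

project_budget = 250e9          # ~$250B total project
land_cost = 50_000 * 5_000      # 50,000 acres at an assumed $5k/acre

fraction = land_cost / project_budget
print(f"Land: ${land_cost/1e6:.0f}M, or {fraction:.2%} of the budget")
```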

Hyperscalers prioritize power availability over cost

38:05 - 40:22

The stated cost of solar power, typically around one million dollars per megawatt, is misleading. Casey Handmer points out a significant discrepancy, noting that the solar modules themselves could cost as little as 8 cents a watt, or $80,000 per megawatt. However, the total installed cost is much higher, closer to a dollar a watt. He argues that the panels are the magical part, converting sunlight to electricity, and the other components and labor should not inflate the price so drastically.
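The discrepancy Casey highlights is stark when laid out: the part that actually converts sunlight to electricity is under a tenth of the installed bill. These are the episode's quoted figures:

```python
# Module cost versus all-in installed cost, per the figures above:
# ~8 cents/W for modules against ~$1/W installed.

module_cost_per_watt = 0.08     # $/W, modules only
installed_cost_per_watt = 1.00  # $/W, total installed

per_mw_modules = module_cost_per_watt * 1e6       # ~$80,000 per MW
per_mw_installed = installed_cost_per_watt * 1e6  # ~$1,000,000 per MW
module_share = module_cost_per_watt / installed_cost_per_watt

print(f"Modules: ${per_mw_modules:,.0f}/MW of ${per_mw_installed:,.0f}/MW "
      f"installed ({module_share:.0%})")
```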

This cost structure is relevant to the needs of hyperscalers, or large-scale data center operators. The central takeaway is that these entities are not primarily sensitive to the cost of power. Instead, they are sensitive to its availability. When scaling up operations, they often hit a wall with the supply elasticity of traditional power sources.

The hyperscalers are not power cost sensitive. They are power availability sensitive.

Solar energy presents the best solution for this problem. It offers a way to rapidly deploy massive amounts of energy to meet demand, effectively acting like a "fire hose" of energy because it's a naturally abundant resource.

The regulatory insanity preventing solar deployment in the US

40:22 - 48:14

Rising electricity prices in the US are a reflection of regulatory irrationality, not technological limits. Environmental regulations from the 1970s, like NEPA, are now actively preventing the deployment of renewable energy. This is why Texas is deploying solar at ten times the rate of California. Casey Handmer describes the regulatory environment as insane, citing examples from his own experience in Southern California.

The environmental regulations that are actively preventing us from deploying renewable energy in the United States. This is the reason Texas is winning. Texas is out deploying California 10 to 1. The regulatory environment around solar is just insane.

Building a solar farm on private land can trigger a multi-year environmental impact review. The absurdity is that the environmental cost of producing the paper for the report can be greater than the impact of the solar project itself. Solar projects are often regulated as strictly as chemical plants. Casey gives an example where a potential project was stalled because a biologist found a tuft of grass that a non-endangered bee species *might* eat, even though the land was zoned for industrial use and located between a rocket test stand and a chemical plant.

I have to hire another biologist for $10,000 to be like, well, on that 40 acre plot we found a tuft of grass which we believe might be a critical one of the 20 species that this particular species of bee sometimes eats... Therefore you can't deploy there, even though it's sandwiched between a rocket test stand and a chemical plant.

While the grid is not getting cheaper or easier to build, batteries offer a path forward by performing a different function. The grid performs spatial arbitrage, moving power from one place to another almost instantly. Batteries, on the other hand, perform temporal arbitrage, storing energy from one time of day and releasing it at another. This is a highly predictable and efficient process, given the daily solar cycle. As batteries become ubiquitous, from homes to large-scale arrays, they are cannibalizing the business model of traditional grid operators, whose most expensive assets see decreasing utilization while operating costs rise.

The AI revolution is routing around human cognitive constraints

48:15 - 52:46

In the future, the average distance electricity travels from generation to consumption will radically decrease. This is especially true for solar power. While solar is intermittent, its output is predictable thanks to weather forecasting. This predictability allows for proactive energy management. For example, if low solar output is predicted for a few days, consumption can be slightly curtailed in advance to avoid a much larger curtailment later, ensuring high reliability.

Looking ahead to a future where AI consumes vast amounts of solar energy, the economic value of AI hardware is tied to the sophistication of its software. With current AI models, the value is limited. Casey Handmer notes that if a truly human-level AGI could run on a chip like an H100, that chip's value would be immense. He points out the disparity between AI's current economic footprint and its potential.

OpenAI is generating on the order of 10 billion ARR or 20 billion ARR. That sucks. ... Human labor generates on the order of $60 trillion of economic value, or that's how much it's paid out in wages to labor around the world. So that's what AGI can do.

This $60 trillion figure is just the lower bound of AGI's potential value. The AI revolution can be compared to previous industrial revolutions, which were about bypassing fundamental bottlenecks. The first Industrial Revolution bypassed the constraints of metabolic energy—the amount of work a human or horse could do based on the food they consumed. Most energy we use today routes around our digestive systems through cars and power grids.

Right now, the AI revolution is about routing around cognitive constraints that in some ways, writing, printing press, computers, the Internet have already allowed us to do to some extent. A credit card is a good example of something that routes around a cognitive constraint of building up a network of trust.

Why GDP fails to capture the economic value of AI

52:47 - 58:54

Measuring AI's economic impact using traditional metrics like GDP could be misleading. Casey Handmer notes that, similar to the internet, AI may create enormous value that doesn't show up in GDP figures because the cost of its services could approach zero. This can lead to a significant deflationary effect.

An analogy can be drawn with oil. Oil represents a small fraction of GDP, perhaps only 1%, but its absence can cause double-digit decreases in economic output. This highlights a key economic principle: the elasticity of demand often matters more than an industry's raw contribution to GDP. Casey points out the dramatic cost difference between energy sources.

Per unit energy, oil, like gasoline, is 100 times cheaper than the cheapest food that humans can digest. So does that mean that we've shot ourselves in the foot by using oil to run our economy because it's so cheap? No.

An advanced AI, described as a "data center of geniuses," might actually cause a nominal decrease in GDP. Its economic contribution would be measured by its inputs (chips, energy) and outputs (tokens), which are minimal compared to the immense value it creates by complementing or automating human work. In the long run, it might make more sense to measure the size of our civilization by its raw energy consumption rather than a GDP figure distorted by deflation.

When a new type of job emerges that AI is uniquely suited for, it won't compete against humans but against other AIs. This competition would drive the cost of the service down to a small multiple of the marginal production cost. The value of an AI capable of doing a researcher's job wouldn't be pegged to a human salary. Instead, its value would be tied to its low marginal cost, such as the electricity needed to run the hardware.

The ultimate value of cognition could be linked to the land required to power it. If an acre of solar panels can power hardware that produces the cognitive output of a thousand humans, the value of that land becomes vastly higher than its value as undeveloped desert or even highly productive farmland.

While current hardware like the H100 GPU uses significantly more energy than a human brain (about 1,000 watts versus 20 watts) for a roughly comparable number of operations, there is no known physical barrier to hardware eventually reaching the brain's efficiency. This potential for improvement is massive. Furthermore, computers operate on a completely different timescale than humans.

Anything humans do is glacially slow from the perspective of a computer.
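The efficiency gap mentioned above reduces to a single ratio, using the two wattage figures from the discussion:

```python
# Headroom before silicon merely matches biology on energy efficiency:
# an H100 at ~1,000 W against the brain's ~20 W, per the figures above.

H100_WATTS = 1_000
BRAIN_WATTS = 20

headroom = H100_WATTS / BRAIN_WATTS
print(f"Efficiency headroom: ~{headroom:.0f}x")
```

Roughly 50x of improvement is available before hardware even reaches parity with a biological brain, before counting silicon's speed advantage.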

The final form of intelligence could be a silicon wafer in space

58:54 - 1:02:03

Once AGI is developed, the main bottleneck will be industrial: generating enough solar panels, batteries, and chips to deploy it. To understand this future, one can ask what the minimum amount of matter is needed to perform the required calculations. The current infrastructure of grids and transmission lines is unnecessary.

Instead, the essential components are a large piece of cheap silicon for power and a small piece of expensive silicon for computation. In space, this is even simpler, as constant sunlight eliminates the need for a battery. The electronics can also be simplified, replacing complex converters with basic relays. This concept could lead to integrated systems where the solar array and the computing element, or "computronium," are built on the same silicon wafer.

It's all silicon. It's all silicon all the way down.

This idea provides a compelling sci-fi scenario. Imagine integrated solar and compute wafers that can fly through space. They could move closer to the sun for more power or travel to other planets to explore. They could even adjust their orientation using integrated LCD panels. This vision represents a potential post-human state.

One human brain can be simulated in roughly a square meter of silicon floating in space... like the thickness of a sheet of paper... That's a future human form. That's my final form.

From silicon production to a simplified thermodynamic cognition stack

1:02:03 - 1:05:49

Making silicon for solar arrays and computer chips is a multi-stage process that begins with silicates, or rocks. These rocks are chemically reduced and purified to an extremely high level. The main constraint is how quickly the Earth's crust can be converted into enough silicon to support what Casey Handmer calls "silicon thought."

A solar panel needs only about a 20-micron-thick layer of active silicon, refined from ordinary dirt. While a new silicon refinery currently takes about 18 months to build, this could change. Casey suggests that with nearly free solar power in effectively unlimited quantities, we could redesign legacy industrial processes: instead of optimizing for energy efficiency, we could optimize for speed and lower capital costs, even if that means using much more power.
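The 20-micron figure implies startlingly little silicon per watt. The sketch below assumes standard values for silicon density and module output per square meter; neither number is from the episode:

```python
# How little silicon a 20-micron cell needs. Assumed: crystalline
# silicon density ~2,330 kg/m^3 and ~200 W/m^2 module output
# (~20% efficiency at 1 kW/m^2 insolation).

SI_THICKNESS_M = 20e-6      # 20 microns of active silicon
SI_DENSITY_KG_M3 = 2_330    # crystalline silicon (standard value)
PANEL_WATTS_PER_M2 = 200    # assumed module output

grams_per_m2 = SI_THICKNESS_M * SI_DENSITY_KG_M3 * 1_000  # ~47 g/m^2
grams_per_watt = grams_per_m2 / PANEL_WATTS_PER_M2        # ~0.23 g/W

print(f"~{grams_per_m2:.0f} g of silicon per m^2, "
      f"~{grams_per_watt:.2f} g per watt")
```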

This links to the concept of an "energy singularity." Discussions about an AI singularity often focus solely on cognitive aspects, ignoring the physical requirements like energy and materials. If we have unbounded cognition from AI, it will create both the ability to supply and the demand for more energy. This raises a key question: What happens when we try to use all the energy the Earth receives and turn it into cognition?

This could lead to a surprising outcome. Evolution has driven increasing complexity and ramification for billions of years, from simple RNA to our current industrial economy. However, we might be witnessing the beginning of a collapse back to the simplest possible "thermodynamic to cognition stack." In this scenario, the most efficient way to convert energy into thought is a simple three-part system: fusion from a star, the cold of space providing a temperature gradient, and silicon converting that gradient into cognition. Electrons would be pushed through a solar array, pass through logic gates to make decisions, and then communicate with lasers. This represents a dramatic shift from 4 billion years of increasing biological variance to a potential great collapse into a single, hyper-efficient form of silicon-based life.