What if the greatest risk in the race to build artificial intelligence is not its multi-trillion-dollar price tag, but the failure to spend enough?
Satya Nadella of Microsoft and Sam Altman of OpenAI, the architects of one of technology's most pivotal partnerships, dismantle the conventional wisdom on capital and compute. They reveal a core philosophy for the AI era: value is shifting from software applications to the "token factories" that produce intelligence. In this new world, being supply-constrained is a bigger threat than overspending, making audacious, large-scale investment the only path forward.
Key takeaways
- The biggest risk for an AI company isn't spending too much on compute; it's spending too little. Without massive compute, you can't build the next-generation models required to generate revenue at scale.
- OpenAI's unique structure pairs a capital-raising for-profit arm with a massive nonprofit foundation. This allows it to pursue market-driven products while also tackling societal problems that capitalism alone can't solve, like curing diseases.
- The current bottleneck for AI's growth isn't a shortage of chips, but a lack of physical infrastructure—specifically, power and data centers ready to house the hardware.
- Demand for AI compute is highly elastic, like energy. If the price per unit of intelligence dropped 100x, usage would skyrocket by much more, enabling applications that are currently economically unfeasible.
- A powerful signal for a good idea is when a thoughtful skeptic changes their mind. Bill Gates was initially wary of the OpenAI deal but became a believer after seeing a GPT-4 demo, calling it the best he'd seen since the original graphical user interface.
- The future of human-computer interaction is shifting to "macro delegating and micro steering." You'll assign a large task to an AI and provide periodic guidance, rather than managing every small step.
- In the new SaaS world, high-usage, low-revenue-per-user products have a hidden advantage. Their constant use generates a rich stream of data and context, creating a powerful flywheel for grounding AI agents.
- The unit economics of AI chat are fundamentally more expensive than traditional search. Each chat query requires costly GPU cycles, unlike a cheap search index lookup, which explains the prevalence of subscription models.
- In the enterprise, "agents are the new seats." This signals a shift in software monetization, where value will be tied to the productivity and actions of AI agents, not just the number of human users.
- The key to unlocking productivity gains from AI isn't just adopting the tools, but fundamentally changing business processes to match the technology's capabilities. The organizations that learn to do this first will reap the biggest rewards.
- At massive scale, nothing is a commodity. Scale allows hyperscalers to learn faster, lower their cost structure, and protect margins, even in a highly competitive environment.
- State-by-state AI regulation creates a confusing patchwork that ultimately hurts startups the most, bogging them down in compliance while larger companies can afford to navigate the complexity.
The foundation of the Microsoft and OpenAI partnership
Microsoft's investment in OpenAI began in 2019 and totals around $13 to $14 billion, securing a significant ownership stake. However, Satya emphasizes that one especially notable outcome of the partnership is the creation of the OpenAI Foundation, one of the largest nonprofits in the world; he notes that Microsoft is proud to be associated with both it and the Gates Foundation. The investment's success was not a foregone conclusion.
It's not what we thought, and as I said to somebody, it's not like when we first invested our billion dollars that, oh, this is going to be the hundred bagger that I'm going to be talking to VCs about. But here we are.
Sam describes the collaboration as "one of the great tech partnerships ever," crediting Satya's early conviction as essential to their success. He doubts many others would have taken a similar risk given the uncertainty at the time. Their shared bet was on the potential of deep learning.
We didn't know at all how the tech was going to go. We just had a lot of conviction in this one idea of pushing on deep learning and trusting that if we could do that, we'd figure out ways to make wonderful products and create a lot of value and also, as Satya said, create what we believe will be the largest nonprofit ever.
Sam also praises OpenAI's unique corporate structure. It allows the nonprofit to grow in value while the for-profit entity secures the necessary capital to continue scaling. He believes this structure, and the partners who supported it, have been key to their achievements over the past six years.
Inside OpenAI's structure and its partnership with Microsoft
As part of a recent restructuring, OpenAI now has a nonprofit organization on top of its public benefit corporation. This nonprofit is capitalized with $130 billion of OpenAI stock, making it one of the largest in the world from its inception. The first $25 billion will be directed towards health, AI security, and resilience.
Sam Altman explained the rationale behind this focus. While capitalism is effective for distributing AI tools, some areas are not well-served by market forces. The nonprofit can pursue goals like using AI for rapid scientific discovery, such as curing diseases, and making the resulting data broadly available.
The best way to create a bunch of value for the world is hopefully what we're doing, which is to make these amazing tools and just let people use them. There are some areas where the market forces don't quite work for what's in the best interest of people.
The focus on resilience aims to help society navigate the transition to AGI, which may involve some choppiness. This includes funding work in cyber defense, AI safety research, and economic studies.
The deal structure with Microsoft includes a period of exclusivity. OpenAI's leading models, what Satya Nadella once called "stateless APIs," will be exclusive to Azure until 2032. This means products like ChatGPT and future models like GPT-6 will not be available on Amazon's or Google's clouds during that time. Other offerings, however, such as open-source models, Sora, and agents, can be distributed on other platforms. Additionally, OpenAI pays Microsoft a revenue share on all of its revenues, an arrangement that also runs until 2032.
Interestingly, both the exclusivity and the revenue-sharing agreement end if AGI is verified by an expert panel. While Satya Nadella has stated that no one is close to AGI, describing current progress as "spiky and jagged intelligence," Sam has sounded more bullish on the timeline. Regardless of when it happens, both see the partnership as durable.
Brad, to say the obvious, if we had superintelligence tomorrow, we would still want Microsoft's help getting this product out into people's hands.
OpenAI's massive bet on compute is fueled by confidence in future growth
A central question for OpenAI is how a company with reported revenues around $13 billion can manage compute spending commitments of $1.4 trillion over the next few years. Sam Altman responds by clarifying that their revenue is actually higher and growing steeply. He expresses confidence in their position, dismissing concerns about their spending plans.
Brad, if you want to sell your shares, I'll find you a buyer. I think there's a lot of people who would love to buy OpenAI shares... I think we could sell your shares or anybody else's to some of the people who are making the most noise on Twitter about this very quickly.
OpenAI's strategy is a forward bet on continued steep growth. They anticipate revenue from multiple streams, including ChatGPT, becoming a major AI cloud provider, a significant consumer device business, and creating value through AI that can automate science. Sam frames the massive spending as a necessary risk. The bigger risk, he suggests, is not having enough compute, which would prevent them from building the models needed to generate revenue at scale.
A certain risk is, if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.
Satya Nadella, speaking as both a partner and investor, reinforces this confidence. He states that OpenAI has consistently exceeded every business plan it has presented, describing their business execution as "unbelievable." Both leaders agree that their companies are compute-constrained. Sam suggests that if OpenAI had ten times more compute, its revenue might not be far from ten times higher, a sentiment he notes Greg Brockman also shared on CNBC.
The relationship between AI compute, cost, and demand
The question of whether there will ever be enough compute is complex. Sam likens it to energy, where demand is inseparable from price. He explains that demand for compute cannot be discussed as a single number without considering different price levels.
If the price of compute per unit of intelligence, however you want to think about it, fell by a factor of 100 tomorrow, you would see usage go up by much more than 100. There'd be a lot of things that people would love to do with that compute that just make no economic sense at the current cost.
On the other hand, as models become vastly more intelligent and capable of solving major problems like curing cancer or discovering novel physics, society's willingness to pay a much higher price for that intelligence will also increase. Satya agrees, emphasizing the need to improve efficiency: the goal is to maximize the economic value society gets from AI by increasing the output per dollar and per watt. This effectively commoditizes intelligence, making it a primary driver of GDP growth, a phenomenon reminiscent of the Jevons paradox, where increased efficiency leads to increased consumption.

However, Sam notes that the current reality is that intelligence scales almost directly with compute, though there is hope for discovering better scaling laws. The present-day compute constraint is underscored by the fact that both Microsoft's and Google's cloud businesses would have grown faster had they had more GPUs.
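To make Sam's point concrete, here is a minimal sketch assuming a constant-elasticity demand curve, where usage scales as the price drop raised to the power of the elasticity. The elasticity values are illustrative assumptions, not figures from the conversation.

```python
# Illustrative only: constant-elasticity demand, Q proportional to P^(-e).
# The elasticity values are assumptions for the example, not quoted figures.

def usage_multiplier(price_drop_factor: float, elasticity: float) -> float:
    """How much usage grows when price falls by price_drop_factor."""
    return price_drop_factor ** elasticity

price_drop = 100           # price per unit of intelligence falls 100x
for e in (0.5, 1.0, 1.5):  # inelastic, unit-elastic, elastic demand
    usage = usage_multiplier(price_drop, e)
    spend = usage / price_drop   # total spend scales by usage / price factor
    print(f"elasticity={e}: usage x{usage:,.0f}, total spend x{spend:,.1f}")

# Only when elasticity exceeds 1 does usage grow by more than the price drop,
# the regime Sam describes: total compute spend rises even as token prices fall.
```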
Navigating the complexities of a potential compute glut
While there is virtually no chance of a compute glut in the next two to three years, the long-term outlook is harder to predict. The primary bottleneck today isn't a shortage of chips, but rather a lack of power and infrastructure to support them. Satya explains the current challenge is getting data centers built quickly enough and close enough to power sources.
In fact, that is my problem today. It's not a supply issue of chips, it's actually the fact that I don't have warm shells to plug into.
Sam is certain a glut will eventually occur, whether in two to three years or five to six. He attributes this to human psychology, bubbles, and the complexity of the supply chain. Unforeseen technological shifts could dramatically alter the landscape. For example, a cheap new form of energy could devalue existing infrastructure contracts. Another factor is the rapid, exponential reduction in the cost per unit of intelligence. This is similar to the dot-com bubble of 2000-2001, where infrastructure was overbuilt, but the internet ultimately grew far beyond initial projections.
Software optimizations are also advancing at a staggering pace, on a curve that may be even steeper than hardware improvements like Moore's Law. Sam envisions a future where powerful models can run locally on consumer devices.
Someday we will make an incredible consumer device that can run a GPT-5 or GPT-6 capable model completely locally at a low power draw.
To navigate this uncertainty, Satya emphasizes the strategy of building a fungible compute fleet. For a cloud provider, the two most important goals are having an efficient "token factory" and achieving high utilization. This requires the flexibility to schedule diverse workloads—from pre-training to inference—across the entire infrastructure.
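The logic of fungibility shows up even in a toy model. The sketch below is emphatically not Azure's scheduler; the job mix, GPU counts, and first-fit policy are invented purely to illustrate why a pool that accepts any workload type strands less capacity than siloed pools.

```python
# A toy comparison, not Azure's actual scheduler: all numbers are invented.
from dataclasses import dataclass

@dataclass
class Job:
    kind: str   # "pretrain" or "inference"
    gpus: int

jobs = [Job("pretrain", 60), Job("pretrain", 60),
        Job("inference", 30), Job("inference", 30),
        Job("inference", 30), Job("inference", 30)]

def schedule(jobs: list[Job], capacity: int, kinds: set[str]) -> int:
    """Greedy first-fit: accept jobs of the allowed kinds until the pool
    is full. Returns the number of GPUs actually put to work."""
    used = 0
    for j in jobs:
        if j.kind in kinds and used + j.gpus <= capacity:
            used += j.gpus
    return used

# Siloed: 100 GPUs reserved for training, 100 reserved for inference.
siloed = schedule(jobs, 100, {"pretrain"}) + schedule(jobs, 100, {"inference"})

# Fungible: one 200-GPU pool that can take any workload.
fungible = schedule(jobs, 200, {"pretrain", "inference"})

print(f"siloed utilization:   {siloed / 200:.0%}")    # 75%: capacity stranded
print(f"fungible utilization: {fungible / 200:.0%}")  # 90%: fleet runs hotter
```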
Sam Altman on the possibility and appeal of an OpenAI IPO
When asked about reports of OpenAI planning to go public in 2026 or 2027, Sam clarifies that while there are no specific dates, he assumes it is where the company will eventually go. Brad lays out a potential scenario: if OpenAI achieves over $100 billion in revenue by 2027-2029, it could go public at a trillion-dollar valuation. He notes this would imply a 10x revenue multiple, lower than what companies like Facebook commanded at their IPO. Such an offering could raise $100 to $200 billion to fund growth.
Brad suggests that another benefit of an IPO would be allowing retail investors, including his own children who use ChatGPT, to own a part of the company. Sam finds this idea very compelling.
That is probably the single most appealing thing about it to me. That would be really nice.
The problem with a patchwork of state AI regulations
A major concern in AI regulation is the lack of federal preemption, which would prevent a complicated patchwork of 50 different state laws. Without preemption, the industry risks being mired in needless compliance work. For example, the Colorado AI Act, which goes into effect soon, creates a new class of litigants who can claim harm from algorithmic discrimination on potentially very broad grounds.
Sam expressed significant worry about this state-by-state approach, calling it a "big mistake." He finds the Colorado law particularly challenging, stating he is unsure how a company like OpenAI is even supposed to comply with it.
I don't know how we're supposed to comply with that Colorado law. I would love them to tell us and we'd like to be able to do it. But that's just from what I've read of that, that's like I literally don't know what we're supposed to do.
Satya added that while large companies like Microsoft and OpenAI can likely navigate this complexity, the real victims are startups. This patchwork system runs counter to the goal of fostering innovation while ensuring safety. He argued that addressing safety concerns is crucial, but it should be done at the federal level. If the United States doesn't create a single regulatory framework, the EU will, which will introduce its own set of problems. Satya's hope is for alignment between the US and EU regulations, which he believes would benefit European startups as well, though he admits this is unlikely.
Envisioning the future of AI assistants and scientific discovery
Looking ahead to 2026, the evolution of AI tools like Codex is a key area of excitement. Sam expects these tools to advance from handling multi-hour tasks to multi-day tasks, enabling people to create software in fundamentally new ways. Beyond coding, a major breakthrough would be AI making novel scientific discoveries, even small ones. Sam views this as a critical step toward superintelligence.
If we can really get AI to do science here, that is super intelligence in some sense. If this is expanding the total sum of human knowledge, that is a crazy big deal.
Satya notes that the magic of ChatGPT was its combination of a powerful UI with an intelligent model. He sees a similar evolution for coding agents, which will lead to a new human-computer interface. This new paradigm is described as "macro delegating and micro steering," where a user assigns a large task and provides periodic guidance. This workflow requires new computing devices that are always on and contextually aware of a user's life, moving beyond current interfaces.
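As a sketch of that paradigm, the loop below models macro delegation with periodic micro steering. Every function in it is a hypothetical stub for illustration, not a real agent API.

```python
# Hypothetical stubs throughout; this models the interaction pattern only.

def run_step(state: dict) -> dict:
    """Stub: one unit of autonomous agent work."""
    state["steps"] = state.get("steps", 0) + 1
    state["done"] = state["steps"] >= 25
    return state

def summarize(state: dict) -> str:
    """Stub: a progress report the user can skim."""
    return f"{state['steps']} steps into: {state['task']}"

def get_user_guidance(summary: str) -> str | None:
    """Stub: the human check-in; a real user might redirect the agent here."""
    print("checkpoint:", summary)
    return None

def delegate(task: str, max_steps: int = 50, checkpoint_every: int = 10) -> dict:
    """Macro-delegate a large task; micro-steer at periodic checkpoints."""
    state = {"task": task, "history": []}
    for step in range(max_steps):
        state = run_step(state)            # the agent works autonomously
        if state.get("done"):
            break
        if (step + 1) % checkpoint_every == 0:
            feedback = get_user_guidance(summarize(state))
            if feedback:                   # fold steering back into the task
                state["history"].append(("steer", feedback))
    return state

delegate("draft and refine a quarterly planning document")
```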
The conversation also highlights the immense potential of the consumer use case. Instead of navigating numerous apps and web forms, people could have a personal AI assistant available for free. This assistant could manage everyday tasks like ordering diapers or booking hotels. Brad suggests that sometimes the most mundane applications are the most impactful.
I think sometimes it's the pedestrian that's the most impactful.
This future involves AI moving beyond just providing answers to having memory and taking actions, all accessible through interfaces like earbuds, reducing the need to constantly stare at a screen.
How Microsoft's pivotal investment in OpenAI came to be
When asked about the pivotal 2019 decision to invest a billion dollars in OpenAI, Satya explained the journey started even earlier. Microsoft's involvement dates back to 2016, when OpenAI was focused on reinforcement learning with projects like the Dota 2 competition. The key factor, however, was Microsoft's long-standing obsession with natural language, a priority set by Bill Gates back in 1995. This created a "prepared mind" for what was to come.
The investment became a "no-brainer" for Satya when Sam Altman began focusing on text, natural language, and scaling laws in 2019, as it aligned perfectly with Microsoft's core interests. Still, proposing the deal to the board was a challenge, as it involved giving a billion dollars to what Satya described as a "crazy structure." There was debate, and Bill Gates was initially skeptical. His mind changed completely after seeing a GPT-4 demo.
When he saw it, he said it was the best demo he had seen since what Charles Simonyi showed him at Xerox PARC.
While the first billion was controversial, the real leap of faith was scaling the investment to ten billion dollars. Satya said the confidence for that larger commitment came from seeing early, tangible results like code completions in GitHub Copilot. He believes this second, larger investment is what "really made this entire era possible." The partnership's success is now evident in the collective reach of products like GitHub Copilot, ChatGPT, and Microsoft 365 Copilot.
The power of a converted skeptic's conviction
When Satya Nadella became CEO, Microsoft had missed major tech shifts like search and mobile, and nearly missed the cloud, only catching the "last train out of town." This history fueled a determination not to miss the next big thing. The company's CTO, Kevin Scott, who was based in Silicon Valley, was instrumental in this effort.
Satya places high value on the opinions of people who start as skeptics and then change their minds, viewing it as a powerful signal.
I always watch for people who are skeptical who change their opinion because to me that's a signal. So I'm always looking for someone who's a non believer in something and then suddenly changes and then they get excited about it. I have all the time for that because I'm then curious, why?
Kevin Scott was initially skeptical of the emerging AI approach, which defied traditional computer science by relying on scaling laws and massive compute power rather than finding a specific algorithm. However, once Kevin's opinion changed, his newfound conviction became a major driving force behind Microsoft's AI strategy and its partnership with OpenAI.
Microsoft's multi-layered value capture from the OpenAI partnership
The value of the Microsoft and OpenAI partnership extends far beyond the equity investment. The real strategic advantage lies in the exclusive availability of the stateless OpenAI API on Azure. This benefits both companies and their customers, as enterprises need a stateless API to integrate with their own compute, storage, and databases to build complete applications.
The second major area of value capture for Microsoft is the ability to incorporate OpenAI's intellectual property into its products. Satya notes that Microsoft has royalty-free access to this IP for another seven years, which provides significant business model flexibility.
It's kind of like having a frontier model for free in some sense if you're an MSFT shareholder. That's kind of where you should start from.
This allows Microsoft to deploy a powerful frontier model across its entire product suite, including GitHub, M365, and its consumer Copilot. The company can also post-train the models with its own data. This dual approach creates value on both the Azure infrastructure side and within high-value domains like healthcare, knowledge work, coding, and security.
Satya on the market's view of the OpenAI investment
Brad questions whether the market is misunderstanding the value of OpenAI as a component of Microsoft. He points out that Microsoft is consolidating billions in losses from the partnership, which may lead investors to attribute a negative value to it. This perception overlooks the immense benefits and the potential trillion-dollar equity value of OpenAI itself.
Satya explains that Microsoft's approach is to provide full financial transparency, using non-GAAP figures to give investors a clear view of the earnings per share. He offers a common-sense perspective on the risk involved with the investment.
If you've invested, let's call it $13.5 billion. You can, of course, lose $13.5 billion, but you can't lose more than 13 and a half billion dollars. At least the last time I checked, that's what you have at risk.
Satya acknowledges that their equity stake is currently illiquid and they have no plans to sell it, which comes with its own associated risks.
Satya Nadella explains Microsoft's fungible AI infrastructure strategy
Satya Nadella explains that the OpenAI partnership has driven growth across Microsoft's business. It attracted new customers to Azure from other clouds and introduced Copilot, the next major offering for Microsoft 365. He notes that Copilot's adoption is bigger than anything the company has previously done in its information work sector. Responding to the idea that Microsoft was too conservative with its AI infrastructure, Satya clarifies their strategy is to build a "fungible" fleet. This means the infrastructure is flexible across different AI workloads, geographies, and hardware generations, allowing for continuous modernization, such as shifting from GB200s to GB300s efficiently.
This strategy sometimes requires saying no to certain demands, even from partners like OpenAI. For instance, a request from Sam Altman for a massive, dedicated training data center might serve OpenAI's immediate needs but wouldn't align with Azure's long-term plan for a more flexible, broadly applicable infrastructure. This approach ensures Microsoft has the capacity for its own high-margin products, like the various Copilots for security, GitHub, and healthcare. Satya notes that investors often miss this broader strategy.
I find it pretty strange and funny, because I think they want to hold Microsoft because of the portfolio we have. But man, are they fixated on the growth number of one little thing called Azure.
Azure's growth is limited by supply, not demand
Azure's growth is impressive, reaching 39% in the quarter on a $93 billion run rate, outpacing competitors like GCP and AWS. However, this growth was limited by a shortage of compute capacity. Satya confirms that if more compute had been available, Azure's growth could have been even higher, potentially 41% or 42%.
This situation is the result of a deliberate strategy. Microsoft is currently supply-constrained, not demand-constrained. The company is actively managing its resources to balance the long-term interests of shareholders, serve a broad range of customers, and support internal research as well as major partners like OpenAI. This approach helps avoid concentration risk by ensuring they don't rely too heavily on any single customer.
We are not demand constrained, we are supply constrained. So we are shaping the demand such that it matches the supply in the optimal way with the long term view.
At scale, nothing is a commodity
Microsoft has a $400 billion backlog of remaining performance obligations with a short, two-year average duration. Satya Nadella explains this gives them high certainty for their capital spending, as they are building out capacity simply to serve this existing demand. The backlog is diversified across Microsoft's internal needs and third-party customers, who are increasingly building large-scale workloads.
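A rough way to read that figure, assuming straight-line conversion of the backlog over its average duration (an illustrative simplification, not company guidance):

```python
# Straight-line conversion is an assumption for illustration only.
backlog_billions = 400
avg_duration_years = 2

implied_annual = backlog_billions / avg_duration_years
print(f"~${implied_annual:.0f}B per year of already-contracted demand")  # ~$200B
```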
Despite new competitors entering the market and potentially compressing margins, Microsoft is used to intense competition from Amazon and Google. Satya emphasizes that scale is the critical factor in this environment. The partnership with OpenAI has provided immense scale, which helps Microsoft learn faster, lower its cost structure, and remain price-competitive while protecting margins.
At scale, nothing is a commodity.
Satya views Microsoft's capital allocation for the cloud as a single outlay for a unified platform. This investment serves everything from Xbox Cloud Gaming and Microsoft 365 to Azure. The goal is not to run separate businesses but to have one platform that compounds the returns on cloud and AI investments across the entire company.
Addressing concerns about circular revenue in AI partnerships
Concerns have been raised about the sustainability of AI revenues, particularly regarding overlapping deals and so-called circular revenues, where companies invest in each other to generate sales. Satya addresses this by clarifying Microsoft's relationship with OpenAI. He explains that Microsoft's $13.5 billion investment was for training compute and was not booked as Azure revenue. This investment is what secured their equity stake in OpenAI. The revenue Microsoft sees from OpenAI in its Azure business is purely from the consumption of services like ChatGPT and its APIs.
Satya notes that vendor financing isn't a new concept, although some current arrangements in the industry are taking on exotic forms that warrant scrutiny from investors. He points out that Microsoft has not needed to engage in these types of deals. Their support for OpenAI involved either trading compute for equity or providing favorable pricing to help the company get started.
Ultimately, circularity will be tested by demand because all this will work as long as there is demand for the final output of it. And up to now that has been the case.
Why low ARPU, high usage products will win in the AI era
The fundamental architecture of SaaS applications is changing in the age of AI. Satya explains that a new "agent tier" is replacing the traditional business logic tier that sat on top of databases. In the past, SaaS applications tightly coupled data, logic, and the user interface. AI requires these to be decoupled, making data and context engineering critically important.
Products with high usage are best positioned for this shift. Satya points to Microsoft 365, which has low average revenue per user (ARPU) but extremely high usage. This constant use of tools like Outlook, Teams, and Word generates a massive amount of data that feeds the Microsoft Graph. This data, including new artifacts like chat conversations, is then used to ground AI agents, providing them with necessary context. Paradoxically, AI is leading to all-time highs in data creation, with more code going into GitHub and more documents being created in Microsoft 365.
If you are high ARPU, low usage, then you have a little bit of a problem. But we are the exact opposite. We are low ARPU, high usage.
This dynamic allows AI to become an accelerant. It enables new, high-value products like the M365 Copilot, which is seeing rapid deployment despite its high price point. The scale of this transformation is immense. For example, the amount of coding activity on GitHub in the last year alone rivals what occurred in the platform's first 10 to 15 years.
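A minimal sketch of that grounding flywheel appears below. It is not the Microsoft Graph API; every name in it is hypothetical, and the point is only that routine usage keeps refreshing the context an agent can be grounded in.

```python
# Hypothetical throughout: a stand-in for grounding agents in usage data.
from datetime import datetime, timezone

usage_events: list[dict] = []   # stands in for a graph of work artifacts

def record(kind: str, text: str) -> None:
    """Each act of ordinary usage (mail, chat, docs) adds context."""
    usage_events.append({"kind": kind, "text": text,
                         "ts": datetime.now(timezone.utc)})

def ground(query: str, k: int = 3) -> str:
    """Build a grounded prompt from the freshest matching events.
    (Naive keyword match; a real system would use embeddings and ACLs.)"""
    words = query.lower().split()
    hits = [e for e in usage_events
            if any(w in e["text"].lower() for w in words)]
    recent = sorted(hits, key=lambda e: e["ts"], reverse=True)[:k]
    context = "\n".join(f"- [{e['kind']}] {e['text']}" for e in recent)
    return f"Context from recent activity:\n{context}\n\nTask: {query}"

record("chat", "Team agreed to move the Contoso launch to March")
record("doc", "Draft launch plan for the Contoso rollout")
print(ground("summarize where the Contoso launch stands"))
```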
The AI factory and the role of the hyperscaler
Until recently, clouds primarily ran pre-compiled software, with most of the value accruing to the software layer, such as databases, CRM systems, and applications like Excel. However, in the future, interfaces will only be valuable if they are intelligent, able to think, act, and advise. This capability requires the constant production of tokens to deal with ever-changing contexts.
This shift suggests that more value will accrue to the "AI factory"—the combination of hardware and models that produce these tokens at the lowest cost. Consequently, the agents or software built on top might capture less value than they have in the past.
Satya Nadella agrees that the "token factory" is a crucial driver of AI's value. He clarifies that this factory is more than just hardware; it's about running the entire system with maximum efficiency and utilization. This is the core role of a hyperscaler. The complexity of this task is evidenced by the fact that only three hyperscalers exist. It is not as simple as just buying and wiring up servers; it requires deep operational know-how.
The hardware landscape will be heterogeneous, spanning NVIDIA under Jensen Huang, AMD under Lisa Su, Broadcom, and proprietary silicon from the hyperscalers themselves. The ultimate goal is to operate this diverse fleet to maximize token throughput and efficiency.
The agent factory and the new marginal cost of software
A modern SaaS application can be thought of as an "agent factory." Its purpose is to drive a specific business outcome by using AI tokens in the most efficient way possible to generate value. These new SaaS applications are intelligent systems optimized for a set of evaluations and outcomes.
In fact, GitHub Copilot is a great example of it, which is, if you think about it, the auto mode of GitHub Copilot is the smartest thing we've done. So it chooses based on the prompt which model to use for a code completion or a task handoff.
This selection process isn't random. It's guided by a constant feedback cycle, including evaluations and data loops. The value of the SaaS application lies in its ability to intelligently manage trade-offs, such as balancing latency with performance. This new model reintroduces a significant marginal cost to software, a cost that was minimal with CD-ROMs, grew with the cloud, and is now much higher. Consequently, business models must adjust, requiring separate optimizations for the agent factory and the underlying token factory.
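GitHub Copilot's auto mode is not public code, so the sketch below is a generic illustration of prompt-based model routing; the model names, relative costs, and keyword heuristics are all assumptions.

```python
# Not Copilot's actual router: model names, costs, and rules are invented.

MODELS = {
    # name:            (relative cost, relative latency, capability)
    "small-completion": (1,   1,  3),   # cheap and fast, fine for one-liners
    "mid-general":      (5,   3,  7),
    "large-reasoning":  (25, 10, 10),   # pricey and slow, best for handoffs
}

def route(prompt: str, task: str) -> str:
    """Pick the cheapest model tier whose capability matches the request."""
    if task == "completion" and len(prompt) < 200:
        return "small-completion"       # latency dominates inline completions
    needs_reasoning = any(w in prompt.lower()
                          for w in ("refactor", "design", "migrate", "debug"))
    if task == "task_handoff" or needs_reasoning:
        return "large-reasoning"        # multi-step work justifies the cost
    return "mid-general"

print(route("def parse_date(s):", "completion"))              # small-completion
print(route("Refactor the billing module to use async IO",
            "task_handoff"))                                  # large-reasoning
```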
AI is bending the productivity curve and creating new ways to work
The unit economics of AI chat are fundamentally different from traditional search. Search has been incredibly profitable because its main cost is a fixed index that can be amortized efficiently. Completing a search costs a fraction of a penny. In contrast, each chat interaction requires significant GPU cycles for processing intent and retrieving information, making it more expensive. Satya explains this difference is why early chat models have relied on freemium and subscription business models, especially on the consumer side. The ideal monetization method for chat, like an "ad unit," is still being figured out.
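A back-of-the-envelope comparison makes the gap visible. Every number below is an assumption chosen for illustration, not a cost figure from either company.

```python
# All figures are illustrative assumptions, not real unit economics.

search_cost_per_query = 0.0005   # amortized index lookup: fraction of a penny
gpu_second_cost       = 0.002    # assumed cost of one GPU-second of inference
gpu_seconds_per_chat  = 3.0      # assumed GPU time to answer one chat turn

chat_cost = gpu_second_cost * gpu_seconds_per_chat
print(f"chat query:   ${chat_cost:.4f}")
print(f"search query: ${search_cost_per_query:.4f}")
print(f"ratio:        {chat_cost / search_cost_per_query:.0f}x")  # 12x here

# Search's fixed index cost amortizes toward zero per query; chat pays for
# GPU cycles on every turn, which pushes consumer chat toward subscriptions.
```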
While chat agents could deliver immense value to humanity, the path to monetization is clearer in enterprise than it is for consumers. The consumer space is more challenging because people have a finite amount of time, and monetization often relies on direct human interaction. Enterprise, however, is not a winner-take-all market and is more suited for agentic interactions. Satya notes that in the enterprise world, "agents are the new seats," suggesting new monetization models are emerging.
AI is expected to significantly bend the productivity curve, ushering in what Satya calls a "golden age of margin expansion." These tools give individuals more agency to complete tasks, changing workflows across industries. At Microsoft, employees are being equipped with unlimited access to tools like Microsoft 365 and GitHub Copilot to maximize their productivity. However, realizing these gains requires learning a new way to work with AI agents.
The rate of change of the business process that matches the capability of AI, that's where the productivity efficiencies come. And so organizations that can master that are going to be the biggest beneficiaries.
This shift is comparable to how tools like Excel transformed business processes, such as forecasting, in the 1980s. Previously, forecasts were done with interoffice memos. Then, spreadsheets and email created a new, more efficient workflow. Similarly, today's work is starting with AI for research, ideation, and collaboration, creating new artifacts and processes. The organizations that adapt and master these new AI-driven workflows will see the greatest benefits.
AI creates productivity gains through workforce leverage
Satya Nadella highlights how AI is changing work at Microsoft through real-world examples. He shared a story about the person leading Microsoft's network operations, who manages massive amounts of fiber optic cable for data centers and deals with 400 different fiber operators worldwide.
She basically said to me, 'There's no way I'll ever get the headcount to go do all this. Even if I approve the budget, I can't hire all these folks.' So she did the next best thing. She just built herself a whole bunch of agents to automate the DevOps pipeline of how to deal with the maintenance.
This illustrates how a team can gain more productivity using AI tools. Satya believes Microsoft's headcount will still grow, but that new headcount will have significantly more leverage than before AI. The key adjustment is not just what people do, but how they do it. This process of unlearning old workflows and learning new ones is expected to take the next year or so.
This shift points toward incredible economic productivity growth. Companies will be able to grow headcount more slowly than revenue, which is the essence of productivity gains. The resulting consumer surplus can then be invested in creating new products and services that didn't exist before. Satya agrees, pointing to software development as another example: AI agents can help tackle organizations' huge IT backlogs, potentially making the dream of evergreen software a reality. The levels of abstraction at which knowledge work happens are changing, and the entire system will adjust.
Tech investment is driving the reindustrialization of America
A massive wave of capital expenditure is driving the reindustrialization of America. US tech companies are investing approximately $4 trillion over the next four to five years, an amount roughly ten times the size of the Manhattan Project on an inflation-adjusted basis. This effort is supported by government policy aimed at reshaping trade deals and attracting foreign capital, such as a recent $350 billion investment commitment from South Korea.
Satya Nadella expressed optimism about this trend, using Microsoft's Wisconsin data center as an example. He explained that the economic impact extends far beyond the automated facility itself.
The economy around our Wisconsin data center is fascinating... what went into the construction of that data center and the local supply chain of the data center, that is in some sense the reindustrialization of the United States as well. Even before you get to what is happening in Arizona with the TSMC plants or what was happening with Micron and their investments in memory or Intel and their fabs.
This reindustrialization focuses on building new capacity for the modern economy, from power grids to advanced manufacturing. At the same time, US tech giants, or hyperscalers, are also the world's largest investors in compute and token factories globally. This creates a powerful dynamic: America is attracting foreign capital for domestic industrialization while simultaneously deploying its own capital and technology to help other regions innovate. This dual strategy is seen as highly beneficial for the long-term prospects of the United States. There is a palpable sense of hope fueled by what appears to be a new level of coordination between Washington D.C. and Silicon Valley.
Resources
- CNBC (TV Channel)
- Dota 2 (Video Game)
