
Dwarkesh Podcast

Satya Nadella — How Microsoft is preparing for AGI

Nov 12, 2025 · 23 min read

Microsoft CEO Satya Nadella discusses how the company is preparing for the age of AGI.

He explains the fundamental shift from selling software tools to providing infrastructure for autonomous agents, and details the massive datacenter investments required to power this new economy.

Key takeaways

  • AI's primary utility can be understood through the metaphor of a 'guardian angel' or a 'cognitive amplifier', framing it as a powerful tool to augment human capability.
  • The rapid advancement of technology like AI could compress 200 years of Industrial Revolution-level change into just 20 years.
  • The high computational cost of AI is disrupting the traditional software-as-a-service (SaaS) model, which was built on low incremental costs per user. Future business models will need to blend subscriptions with consumption-based tiers.
  • The current AI transition mirrors the shift to the cloud. What was initially feared to be a margin-shrinking move instead resulted in massive market expansion.
  • A smaller share of a dramatically expanding market can be far more valuable than a dominant share of a smaller one.
  • The future of AI value is a tug-of-war. One view is that powerful models will become autonomous coworkers and capture all the margin. The other is that the applications controlling user data will commoditize the underlying models.
  • Pure model companies may face a "winner's curse." Their innovations are one copy away from being commoditized, while the companies with data and user context can leverage open-source alternatives to build value.
  • The business model for software is shifting from selling tools to end-users to providing the underlying infrastructure for autonomous AI agents.
  • The market will expand from a 'per user' model to a 'per user and per agent' model, where every autonomous agent is provisioned with its own virtual computer, identity, and security.
  • Building AI infrastructure optimized for only one model is risky. A competitor's breakthrough in model architecture could render the entire investment obsolete, making it crucial to support multiple model families.
  • The AI market is unlikely to be a 'winner-takes-all' scenario. Similar to the database market, it will likely fragment, with various specialized models excelling in different domains, geographies, and use cases.
  • The long-term value of a hyperscale AI business lies in serving a diverse, long tail of customers and workloads, not just providing bare-metal computing for a few large players.
  • What distinguishes a hyperscaler from a traditional data hoster is the sophisticated software layer that optimizes workloads and drives capital efficiency, leading to massive performance gains on the same hardware.
  • The push for sovereign AI ensures that multiple AI models, including open-source alternatives, will coexist. This prevents concentration risk and gives countries the freedom to switch providers without being locked into a single ecosystem.
  • Nation-states are now prioritizing supply chain resilience and self-sufficiency, forcing multinational companies to treat sovereignty as a primary business requirement.
  • In the global competition with China, trust may be American tech's most important feature. The world's ability to trust a company and its country's institutions could be the ultimate competitive advantage.


Satya Nadella on scaling AI and its economic diffusion

00:00 - 07:31

Microsoft's new Fairwater 2 data center is an example of the massive scaling required for the future of AI. The goal is to increase training capacity by a factor of 10 every 18 to 24 months. The sheer scale is immense; the network optics in this single building are nearly equivalent to all of Azure's data centers from just a few years ago. This infrastructure is designed to aggregate computing power, or flops, across different sites and even regions, linking facilities in Iowa with others in Wisconsin to run a single, massive training job.

Satya Nadella explains that building for this future is complex. It involves designing a physical plant that is optimized for current model architectures but is also flexible enough for future chips, which will have very different power and cooling requirements. The strategy is to scale over time rather than build to a single specification and risk being stuck with outdated infrastructure.

While the speed and scale of the AI transition are unprecedented, Satya offers a grounded perspective. He agrees that this may be the biggest technological shift since the Industrial Revolution, but he emphasizes that it is still in its early stages. He is optimistic that the scaling laws driving AI progress will continue to work, but he views AI primarily as a tool for human utility. He frequently uses a metaphor from Turing Award winner Raj Reddy, describing AI as either a "guardian angel or a cognitive amplifier."

Regarding the economic impact, Satya draws parallels to the Industrial Revolution. Even with rapid technological diffusion, it took about 70 years for widespread economic growth to materialize. True growth only occurs when the technology becomes so integrated that it fundamentally changes the nature of work and workflows. He argues that the change management required for corporations to truly adapt and benefit from AI is a significant hurdle that should not be underestimated.

How AI is changing the software as a service business model

07:32 - 11:04

Technology provides humans with immense leverage. Just as a podcast can achieve a scale that was previously impossible, future advancements will continue to amplify individual impact. This acceleration could compress what took the Industrial Revolution 200 years into a 20- or 25-year period.

This rapid change presents new challenges for established business models, particularly for software-as-a-service (SaaS) companies like Microsoft. Historically, Microsoft transitioned from selling software licenses to a subscription model like Office 365. This model thrived on low incremental costs per user. However, the rise of AI disrupts this. Dylan Patel points out that the high cost of goods sold (COGS) for AI services breaks the traditional SaaS business model, which explains why many SaaS companies have underperformed in public markets.

Satya Nadella addresses this shift by explaining that the fundamental business model levers will likely remain the same. These include advertising, transactions, device gross margins, subscriptions, and consumption-based pricing. He views subscriptions as entitlements to a certain amount of consumption that help users budget. The key is in the pricing and packaging.

So how much consumption you are entitled to is... If you look at all the coding subscriptions, that's kind of what they are, right? And they kind of have the pro tier, the standard tier and what have you. And so I think that's how the pricing and the margin structures will get tiered.

Satya suggests that future software pricing will involve tiered subscriptions that grant different levels of AI consumption. He notes that Microsoft is well-positioned for this transition because it already operates across all these different business models, from consumption to subscriptions and consumer services. Ultimately, time will reveal which models are most effective for different categories.
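The "subscription as an entitlement to consumption" idea above can be sketched in a few lines. This is a hypothetical illustration: the tier names, prices, and token allowances are invented, not Microsoft's actual pricing.

```python
# Hypothetical sketch of tiered "subscription as consumption entitlement"
# pricing: a flat fee covers an included token budget, with metered
# overage beyond it. All numbers are illustrative.

TIERS = {
    "standard": {"monthly_fee": 10.0, "included_tokens": 5_000_000, "overage_per_million": 3.0},
    "pro":      {"monthly_fee": 40.0, "included_tokens": 30_000_000, "overage_per_million": 2.0},
}

def monthly_bill(tier: str, tokens_used: int) -> float:
    """Subscription fee plus metered overage beyond the entitlement."""
    t = TIERS[tier]
    overage_tokens = max(0, tokens_used - t["included_tokens"])
    overage_cost = (overage_tokens / 1_000_000) * t["overage_per_million"]
    return t["monthly_fee"] + overage_cost

# A user within the entitlement pays only the flat fee; heavy consumption
# shifts the bill toward usage-based pricing, blending the two models.
print(monthly_bill("standard", 4_000_000))  # 10.0
print(monthly_bill("standard", 8_000_000))  # 10.0 + 3 * 3.0 = 19.0
print(monthly_bill("pro", 8_000_000))       # 40.0
```

The design point is that the same mechanism covers both ends of the spectrum Satya describes: a pure subscription is a tier whose entitlement is rarely exceeded, while pure consumption pricing is a tier with a zero fee and zero included tokens.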

GitHub's platform strategy in the expanding AI coding market

11:05 - 18:01

Satya Nadella reflects on the transition from on-premise servers to the cloud, drawing a parallel to the current AI shift. Initially, Microsoft worried that moving Office users to the cloud would shrink margins. However, the opposite happened.

The move to the cloud expanded the market like crazy. We sold a few servers in India, didn't sell much, whereas in the cloud, suddenly everybody in India also could afford fractionally buying servers.

The cloud lowered IT costs and expanded the market massively. Satya believes the same dynamic is happening with AI. For example, the AI coding assistant market, which GitHub Copilot pioneered, grew from nothing to a huge category in just one year. Dylan Patel notes that while GitHub Copilot's revenue was around $500 million early in the year, the total market including competitors like Claude Code and Cursor is now at a run rate of five to six billion dollars.

Satya views this new competition as a positive sign. He is happy to see new companies emerging rather than competing with old rivals.

When you say, 'Claude's going to kill you, Cursor is going to kill you,' it's not Borland, right? So thank God, that means we are in the right direction.

He notes that even with competition, GitHub Copilot's subscriber base is growing, and more importantly, GitHub itself is seeing all-time highs in activity. Regardless of which coding assistant developers use, the code often ends up in GitHub repos. Looking forward, Microsoft's strategy is to position GitHub as a central hub for all AI agents. The plan is to build an "Agent HQ" or "Mission Control" within GitHub.

I can fire off, sometimes I describe it as the cable TV of all these AI agents, because I'll have essentially packaged into one subscription: Codex, Claude, Cognition stuff, anyone's agents, Grok, all of them will be there.

This platform would allow developers to issue tasks to multiple agents, monitor their work, and manage the output, solidifying GitHub's role as the essential control plane for AI-driven software development.

Microsoft's strategy is a smaller share of a much larger market

18:02 - 20:01

The total addressable market for software agents could be the $2 trillion currently paid in wages, or potentially even larger. In this emerging market, Microsoft's share has dropped from over 50% to under 25% in just one year. Satya acknowledges this shift, stating there is no "birthright" to market dominance and the key is to innovate.

The optimistic view is that this new category will be significantly larger than any market where Microsoft previously had a high share, such as with VS Code or GitHub. A smaller share of a much more expansive market can be more valuable. Satya draws a parallel to a previous technological shift.

We had a high share in client server computing. We have much lower share than that in hyperscale. But is it a much bigger business? By orders of magnitude. So at least there's existence proof that Microsoft has been okay even if our share position has not been as strong as it was as long as the markets we are competing in are creating more value and there are multiple winners.

Microsoft's strategy is to remain competitive and create a platform with GitHub's Agent HQ where multiple agents can succeed, rather than assuming a single winner-take-all scenario.

Where AI value will accrue: models vs. scaffolding

20:02 - 28:04

A key question in the AI industry is where the value will ultimately migrate: to the model companies or to the applications and platforms that provide the "scaffolding." One vision suggests that as AI models become more capable, they will function as autonomous coworkers, performing days' worth of work. In this future, the model companies would capture most of the profit, charging thousands of dollars for access.

Satya Nadella presents a different perspective. He argues that the incentive structure favors those who control the scaffolding. Using GitHub Copilot as an example, he explains how these applications can optimize tasks by arbitraging tokens across various models. If you win the scaffolding, you control the data liquidity. This allows you to vertically integrate by taking an open-source model checkpoint and training it with your proprietary data. Satya believes this creates a potential "winner's curse" for pure model companies.

They may have done all the hard work, done unbelievable innovation, except it's kind of like one copy away from that being commoditized and then the person who has the data for grounding and context engineering and the liquidity of data can then go take that checkpoint and train it.

Dylan Patel notes that despite this risk, model companies like Anthropic have seen their margins expand significantly, even with increasing competition. Satya explains that Microsoft's approach is not just to wrap a UI around a model. Instead, they are deeply integrating AI into the core of their applications. For instance, an "Excel Agent" is being built into the middle tier of the Office system. This agent is taught to natively understand all of Excel's tools and artifacts, effectively bundling an analyst into the software. This deep integration gives them leverage; if a model provider prices its services too high, Microsoft can substitute it with a competitor or an in-house model.
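The "arbitraging tokens across various models" lever that Satya attributes to the scaffolding layer can be sketched as a routing policy: send each task to the cheapest model that clears its quality bar. The model names, prices, and quality scores below are invented for illustration; this is not how any particular product routes requests.

```python
# Illustrative sketch of token arbitrage in the scaffolding layer: the
# application, not the model provider, decides which model serves each
# task, so no single provider has pricing power. All values are invented.

MODELS = [
    {"name": "frontier-model",    "cost_per_million": 15.0, "quality": 0.95},
    {"name": "mid-tier-model",    "cost_per_million": 3.0,  "quality": 0.85},
    {"name": "open-source-model", "cost_per_million": 0.5,  "quality": 0.75},
]

def route(min_quality: float) -> str:
    """Pick the cheapest model clearing the task's quality threshold."""
    candidates = [m for m in MODELS if m["quality"] >= min_quality]
    if not candidates:
        raise ValueError("no model meets the quality bar")
    return min(candidates, key=lambda m: m["cost_per_million"])["name"]

# Hard tasks go to the frontier model; routine tasks get arbitraged down
# to cheaper substitutes, which is the leverage Satya describes.
print(route(0.9))   # frontier-model
print(route(0.8))   # mid-tier-model
print(route(0.7))   # open-source-model
```

If a provider raises prices, only its `cost_per_million` entry changes and traffic silently shifts to substitutes, which is why the scaffolding owner, with the data and the routing decision, holds the negotiating leverage.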

From a per-user to a per-agent business model

28:04 - 32:54

Future AI models should not be thought of as they exist today, but as agents that can use a computer as proficiently as a human knowledge worker. A human can look into Excel formulas, use alternative software, and migrate data. If an AI can do the same, its deep integration with any single application like Excel becomes less critical. The computer itself is the real tool.

Satya Nadella outlines two potential futures for how we will work. The first is an evolution of the present, where a human uses tools like copilots and agents but remains in control, steering everything. The second future involves companies provisioning computing resources for fully autonomous AI agents that work on their own.

These autonomous agents will still need tools to be efficient. It is more token-efficient for an AI to use specialized tools than to operate on a raw computer for every task. This insight leads to a fundamental shift in the business model. Satya explains that what is currently an end-user tools business, like Microsoft 365, will become an infrastructure business that supports AI agents.

The business, which today is an end user tools business, will become essentially an infrastructure business in support of agents doing work.

The underlying infrastructure built for services like M365—including storage, archival, discovery, and management—will be crucial for these agents. An AI agent needs a place to store data, an identity, and security, just like a human user. This means offering lower-level, more efficient access to this infrastructure. Companies building autonomous agents are already seeking to provision Windows 365 for them, effectively giving each agent its own computer.

This transforms the business model from being 'per user' to 'per user and per agent'. The number of agents could grow much faster than the number of human users, creating a vast new market for provisioning a full stack of computing resources for each one.
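The 'per user and per agent' shift can be made concrete with a small data model: every principal, human or agent, is provisioned with the same stack of identity, virtual computer, and storage. The field names and values here are illustrative assumptions, not any real Azure or Windows 365 API.

```python
# A minimal sketch of "per user and per agent" provisioning: each
# autonomous agent gets the same stack a human user would. Field names
# and the provisioning logic are hypothetical illustrations.

from dataclasses import dataclass, field
import uuid

@dataclass
class Principal:
    """A billable seat: either a human user or an autonomous agent."""
    name: str
    kind: str  # "user" or "agent"
    identity: str = field(default_factory=lambda: str(uuid.uuid4()))
    virtual_machine: str = ""
    storage_gb: int = 0

def provision(name: str, kind: str) -> Principal:
    p = Principal(name=name, kind=kind)
    p.virtual_machine = f"vm-{p.identity[:8]}"  # each principal gets its own computer
    p.storage_gb = 256                          # plus its own storage allocation
    return p

# One human user spawning three agents quadruples the provisioned seats,
# which is why the agent count can outgrow the user count.
seats = [provision("alice", "user")] + [provision(f"alice-agent-{i}", "agent") for i in range(3)]
print(len(seats))                                  # 4 billable principals
print(sum(1 for s in seats if s.kind == "agent"))  # 3 of them are agents
```

The point of the sketch is the multiplier: the infrastructure primitives (identity, compute, storage) stay the same, but the number of principals consuming them is no longer bounded by headcount.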

Microsoft's strategy for AI models and infrastructure

32:54 - 39:45

AI models are being trained not only to use tools like Excel or book flights, but also to perform valuable data migration tasks. This includes converting mainframe systems to the cloud or turning Excel sheets into proper SQL databases. Dylan Patel raises the question of how this impacts Microsoft. If AI agents can migrate systems away from the Office ecosystem, its use could decline significantly.

Satya Nadella responds that we will live in a hybrid world for a long time, where humans and AI agents interact with tools and each other. He argues that even when agents work with other agents, they will still rely on fundamental infrastructure primitives. These primitives are the core underlying rails for systems today, and they will be essential in the future as well.

Do you need a storage system? Does that storage system need to have e-discovery? Do you need to have observability? Do you need to have an identity system that is going to use multiple models with all having one identity system? So these are all the core underlying rails we have today for what are Office systems or what have you. And that's what I think we will have in the future as well.

Satya explains that better structuring of data, like giving all of Excel a proper database backend, will actually grow the infrastructure business. The consumption of this infrastructure will just be driven by agents.

The discussion then turns to Microsoft's own AI model development. While Microsoft heavily utilizes OpenAI models, it is also building its own. Satya clarifies their strategy is to avoid redundant work. He doesn't want to use Microsoft's computing resources (flops) in a way that is just duplicative of what OpenAI's GPT family can do. Instead, Microsoft's AI team (MAI) focuses on specific areas. For example, they launched an image model optimized for cost and an audio model with personality for Copilot. Their first text model was intentionally small, designed to prove their core capabilities. The next step for MAI is to build an omni-model combining their work in text, image, and audio. The overall goal is to build a world-class superintelligence team while fully exploiting the advantage of having access to OpenAI's models.

Microsoft's strategy for a multi-model AI future

39:45 - 47:46

In the competitive landscape of AI, talent is a major focus. Companies like Meta and Anthropic are spending billions to acquire top reasoning and research teams from rivals like Google. This raises the question of how Microsoft ensures it can maintain a leading AI lab, especially if its access to OpenAI models were to change in the future.

Satya Nadella states that Microsoft is actively assembling a world-class AI team, bringing in talent like Mustafa Suleyman, Karen Simonyan, and others who have led significant projects at Gemini and DeepMind. Microsoft's strategy is multi-faceted. First, as a hyperscaler, it aims to build the most robust infrastructure capable of supporting a wide range of models, including open-source projects and those from partners like OpenAI. Second, while continuing to integrate OpenAI's models into its products, Microsoft is also developing its own proprietary models and using other frontier models, such as Anthropic's in GitHub Copilot. Ultimately, the performance of the product for a specific task will determine the necessary level of vertical integration.

A key consideration for the future is the concept of continuous learning, where models, like humans, learn on the job. If a single model becomes widely deployed and continuously learns from all its interactions, it could create an exponential feedback loop, or an "intelligence explosion," effectively winning the market. Satya acknowledges this possibility, stating that if one model sees all the data and learns continuously, it's "game, set, match."

However, he believes this scenario is unlikely. The reality, even today, is that no single model dominates every use case. He draws an analogy to the database market, where despite the theoretical appeal of one universal database, multiple specialized databases thrive for different purposes. Similarly, he predicts the AI landscape will remain fragmented across different domains, geographies, and segments, creating opportunities for many players. This outlook shapes Microsoft's infrastructure strategy.

You can't build an infrastructure that's optimized for one model. If you do that, what if you go fall behind? In fact, all the infrastructure you build will be a waste. You kind of need to build an infrastructure that's capable of supporting multiple sort of families and lineages of models. Otherwise the capital you put in which is optimized for one model architecture, that means you're one tweak away from some breakthrough that happens for somebody else and your entire network topology goes out of the window.

The industry structure, Satya argues, will force specialization. Microsoft intends to compete on its merits at each layer—infrastructure, models, and product scaffolding—rather than pursuing a winner-take-all vertical integration strategy. Success as a platform company requires a thriving ecosystem, which prevents any single entity from owning every category.

Microsoft's strategy for a flexible AI infrastructure

47:48 - 52:40

In the latter half of last year, Microsoft made a significant strategic shift by pausing its aggressive data center expansion. Despite being on a path to surpass Amazon as the largest infrastructure provider, Microsoft let go of multiple leasing sites, which were then picked up by competitors like Google, Meta, and Amazon. Satya Nadella explained this was a deliberate course correction rooted in a long-term vision for Azure.

The primary goal was to ensure the 'fungibility' of Microsoft's computing fleet. Rather than building massive capacity tied to a specific generation of hardware, the focus shifted to creating a flexible infrastructure capable of handling all stages of AI, from training to inference, for a diverse range of models and customers. This prevents getting stuck with technology that will soon be outdated, as new chips like the GB200 and beyond will have vastly different power and cooling requirements.

Furthermore, the strategy was to build a balanced and sustainable hyperscale business. This means not just focusing on providing massive training capacity for one partner, but also on serving those models globally to a wide customer base, which is how the business is monetized. Nadella emphasized that Microsoft is not in the business of being a dedicated host for a single company.

We didn't want to just be a hosting shop for one company and have just a massive book of business with one customer. That's not a business. That is sort of, you should be vertically integrated with that company.

Ultimately, Microsoft's vision for Azure is to be a long-tail business for AI workloads. While being competitive in high-end training is crucial, it cannot crowd out the rest of the business. Much of the margin comes from the auxiliary services surrounding the core AI accelerators. The adjustment in strategy reflects a commitment to building a durable, diversified cloud business for the entire AI ecosystem.

Microsoft's long-term strategy for its hyperscale business

52:40 - 56:40

The location of AI infrastructure is not entirely fungible, even as AI tasks become longer and more asynchronous. The topology of data center regions and the networking between them is crucial. This is shaped by evolving AI model capabilities and usage patterns, which can be either synchronous or asynchronous. Data residency laws also play a significant role. For example, the entire EU has an EU data boundary, which means a call cannot simply be round-tripped to any location, necessitating regional infrastructure.

Other factors like latency and proximity to other services are also critical. If an autonomous agent needs to access a Cosmos DB for session data, that storage must be located nearby. These considerations ultimately shape the strategy of a hyperscale business.

When questioned about Oracle's rapid growth in the cloud space and whether Microsoft is missing out, Satya Nadella explained Microsoft's long-term strategic thinking. He clarified that Microsoft's goal is not to chase every opportunity, especially those that don't align with their core strategy.

It didn't make sense for us to go be a hoster for one model company with limited time horizon RPO. The thing that you have to think through is not what you do in the next five years, but what do you do for the next 50.

Satya emphasized that while he tracks competitors, Microsoft's decisions are not driven by chasing their metrics or short-term gross margins. The focus is on identifying and capturing the business that Microsoft is uniquely positioned to win. This long-term industrial logic guides their investments and partnerships.

I have to chase them for not just the gross margin that they may represent in a period of time. What is this book of business that Microsoft uniquely can go clear, which makes sense for us to clear and that's what we'll do.

Satya Nadella explains Azure's AI platform strategy

56:40 - 1:05:31

Satya Nadella explains that Azure's business strategy is not just about providing bare metal infrastructure to a few large AI labs like OpenAI. The focus is on serving the "long tail" of enterprise customers by providing a comprehensive platform. A real-world application workload involves more than a simple API call to a model; it requires integration with databases, storage, and other compute resources. Azure aims to be the platform where customers can access various models, including those from OpenAI and open-source options, and build complete applications around them.

A real workload is not just I did an API call to a model. A real workload needs all of these things to go build an app or instantiate an application.

Capacity planning for this infrastructure is complex. It involves balancing multiple factors beyond just total power, such as geographical location to meet data sovereignty and regulatory needs. Another critical factor is timing. Microsoft aims to align its infrastructure builds with the rapid pace of new hardware releases from partners like Nvidia to avoid being locked into depreciating, older-generation technology. This approach, which Jensen Huang calls "speed of light execution," allows them to scale with each new generation of technology efficiently.

To manage capacity constraints, Microsoft also rents capacity from other providers, or "neoclouds." Satya views this as a flexible solution and welcomes these providers into the Azure marketplace. This creates a scenario where a customer can use the neocloud's specialized hardware through Azure, which in turn provides all the surrounding database and storage services.

Dylan Patel questions Microsoft's progress on developing its own custom AI accelerators compared to Google and Amazon. Satya explains that their strategy for their own silicon, like the Maia chip, is to create a "closed loop" between Microsoft's own AI models and the chip's microarchitecture. He reveals that Microsoft has full access to OpenAI's intellectual property, except for consumer hardware, which is a significant advantage for their internal chip development program.

Microsoft's strategy as a key partner for Nvidia and OpenAI

1:05:31 - 1:06:31

Microsoft has a significant partnership with Nvidia, having built supercomputers together. As Nvidia innovates at the system level, Microsoft gains access to that innovation. Microsoft first implements what Nvidia builds and then extends it further. Satya describes Microsoft's role as a "fantastic, speed of light execution partner for Nvidia," emphasizing the critical importance of their hardware fleet. While acknowledging Nvidia CEO Jensen Huang's high margins, Satya's focus is on the Total Cost of Ownership (TCO), which has many dimensions. Microsoft aims to be exceptional at managing TCO. This strategy is complemented by their work on system design with partners like OpenAI, leveraging their mutual intellectual property rights.

Satya Nadella clarifies the exclusivity of OpenAI's API on Azure

1:06:32 - 1:09:35

Satya Nadella clarified the specifics of Microsoft's agreement with OpenAI, particularly concerning exclusivity. He explained that a strategic decision was made to differentiate between OpenAI's PaaS (Platform as a Service) and SaaS (Software as a Service) businesses. OpenAI's PaaS business is its API, which is exclusive to Azure. Their SaaS business is ChatGPT, which they can run anywhere and partner with anyone to build other SaaS products.

If any partner wants to use OpenAI's stateless API, they must use Azure. This exclusivity extends to custom agreements as well. For example, if a company like Salesforce wants to integrate OpenAI by training a model together, they would generally have to run it on Azure. There are a few exceptions, such as for the US government, but the core API business remains tied to Microsoft's platform.

How software drives capital efficiency for hyperscalers

1:09:35 - 1:12:30

Microsoft is undergoing a massive structural change, transforming from purely a software business into one that is both capital-intensive and knowledge-intensive. This shift is driven by the huge capital expenditures required for AI infrastructure, which has seen capex triple over the last two years. As other hyperscalers take on significant debt to fund this expansion, Microsoft's strategy is to use its knowledge and software expertise to increase the return on its capital spending.

Satya Nadella explains that this is what separates a modern hyperscaler from a classic hosting company: the software layer. The ability to optimize by workload and across the entire fleet is crucial. For example, software improvements can drastically increase the efficiency of AI models, a concept Satya highlights with a specific example.

For a given GPT family, the software improvement of throughput in terms of tokens per dollar per watt that we are able to get quarter over quarter, year over year is massive. It's 5x, 10x, maybe 40x in some of these cases just because of how you can optimize. That's knowledge intensity coming to bring out capital efficiency.

This knowledge intensity, demonstrated through sophisticated workload scheduling and management, is what Microsoft must master. While the business is changing, Satya believes Microsoft will fundamentally remain a software company, leveraging its strong cash flow to succeed in this new, capital-intensive environment.
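The fleet-efficiency metric Satya cites, tokens per dollar per watt, is simple enough to show as arithmetic. All numbers below are invented back-of-the-envelope figures to demonstrate the calculation, not real Azure or GPU data.

```python
# Back-of-the-envelope sketch of the capital-efficiency metric from the
# quote: tokens per dollar per watt on fixed hardware. All figures are
# invented for illustration only.

def tokens_per_dollar_per_watt(tokens_per_sec: float, cost_per_hour: float, watts: float) -> float:
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / cost_per_hour / watts

# Same accelerator, same power draw, same hourly cost: only the serving
# software changes between the two measurements.
baseline  = tokens_per_dollar_per_watt(tokens_per_sec=1_000, cost_per_hour=2.0, watts=700)
optimized = tokens_per_dollar_per_watt(tokens_per_sec=5_000, cost_per_hour=2.0, watts=700)

print(round(optimized / baseline, 1))  # 5.0 -- a "5x" gain from software alone
```

Because the hardware terms cancel when nothing but throughput changes, every multiple of serving throughput is a direct multiple of return on the same capital, which is the sense in which software optimization "brings out capital efficiency."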

Research compute, sovereign AI, and building global trust

1:12:30 - 1:21:41

When considering massive AI investments, a hyperscaler must allocate a budget for "research compute" as an R&D expense. Satya suggests this portion needs to scale ahead of demand, similar to how talent for AI is at a premium. The rest of the investment can be demand-driven, even while independent labs project huge revenues to secure funding. Satya notes that these labs have shown real traction, and Microsoft has significant business with them.

Dylan Patel raises a critical point about the changing geopolitical landscape. In the past, American tech stacks like Windows dominated globally. Today, however, governments worldwide care deeply about AI, leading to the rise of "sovereign AI" efforts in Europe, India, and other regions, alongside the US-China bipolarity. This presents a new challenge for companies like Microsoft, which can no longer assume the right to win everywhere.

Satya responds that the top priority for the US tech sector is to build trust around the world for its technology. He highlights the unique position of the United States.

It's 4% of the world's population, 25% of the GDP and 50% of the market cap. And I think you should think about those ratios and really reflect on it. That 50% happens because quite frankly the trust the world has in the United States, whether it's its capital markets or whether it's its technology.

To maintain this trust, American companies should invest directly in other countries, building AI factories all over the world. Microsoft is already doing this by making specific commitments to Europe, building sovereign clouds in France and Germany, and using technologies like confidential computing to address data residency and privacy concerns. This combination of technical and policy measures is designed to give countries sovereignty over their data.

Ultimately, every country's main goal is to use AI to create economic value. However, they will also want to ensure continuity and avoid concentration risk. This is why there will always be multiple AI models, including open-source options. It provides countries with the flexibility to move their data and workloads to different models, preventing dependency on a single provider.

Sovereignty, resilience, and trust in a bipolar tech world

1:21:41 - 1:27:45

The market structure for technology is increasingly driven by concentration, risk, and sovereignty. Dylan Patel notes that true sovereignty in critical areas like semiconductors is still elusive. For example, even with new fabs, TSMC Arizona is not replacing a significant fraction of production, making the idea of full national sovereignty somewhat of a 'scam'.

Satya Nadella counters that nation-states are now prioritizing resilience and self-sufficiency in their critical supply chains. This shift is a new reality that multinational companies must treat as a first-class requirement. While globalization cannot be instantly reversed, companies must respect and plan for the long-term policy interests of the countries they operate in. This includes requests like allocating more wafer starts to fabs in the U.S. or establishing specific data boundaries for the EU.

Globalization was fantastic. It helped the supply chains be globalized and be super efficient. But there's such a thing called resilience. And we want resilience. And so therefore that feature will get built.

As the world becomes more bipolar with US-China competition, the conversation turns to how American tech can compete, especially as China has a comparative advantage in large-scale infrastructure build-outs. Satya suggests that trust may become the most important competitive feature. The ability for the world to trust an American company and its country's institutions as a long-term supplier could be the ultimate differentiator.
