Sam Altman, CEO of OpenAI, joins the podcast to discuss his company's strategy to win in a tightening AI race.
He explains how product, infrastructure, and a deliberate consumer-first approach create a competitive advantage, and shares his vision for the future of AI devices and superintelligence.
Key takeaways
- OpenAI treats competitive threats like a pandemic, believing that paranoid, swift, and early action is far more valuable than a delayed response.
- The real value in AI will come from building entirely new, AI-first products, not just bolting AI features onto existing ones. This creates an opening for new companies to compete with incumbents.
- Personalization is a powerful competitive advantage in AI. When a model learns about a user or a company's specific data, it becomes deeply integrated and difficult to replace.
- The future of AI interfaces may be more dynamic and proactive, with AI generating task-specific UIs and working continuously in the background to help users achieve their goals.
- More people than expected want a deep, companion-like relationship with AI, and while users will have choice, the societal and commercial implications of this are still unknown and potentially concerning.
- OpenAI's initial 'consumer first' strategy was deliberate, based on the belief that winning in the consumer space makes it significantly easier to succeed in the enterprise market.
- According to OpenAI's GDPval benchmark, the GPT-5.2 Pro model beat or tied human experts on over 74% of knowledge work tasks.
- Current AI excels at small, well-scoped tasks like making a PowerPoint or doing legal analysis, but it is not yet capable of complex, open-ended creative work or collaborative team projects.
- While the short-term impact of AI on jobs may be challenging, fundamental human drives for status, creativity, and connection will ensure people find new and meaningful work.
- The concept of an AI CEO is plausible, but it would not be an autonomous ruler. It would function under the direction of human governance, executing the collective will of a 'board of directors'.
- The primary bottleneck for OpenAI's revenue and growth is the availability of compute; the company has always operated in a 'compute deficit' that constrains its potential.
- There is a massive 'overhang' between the capabilities of current AI models and the value the world has figured out how to extract from them. Even if model progress froze, a huge amount of value remains to be unlocked.
- There is a major disconnect in AI adoption. While some businesses struggle to see a return, users like coders find it so valuable they would pay 100 times more for it.
- The primary barrier to AI adoption is not the technology but human inertia. People are slow to change established workflows, even when AI offers a more efficient alternative.
- The next major shift in computing will be from dumb, reactive devices to smart, proactive assistants that understand your entire context.
- The term AGI is poorly defined. A better benchmark is 'superintelligence,' defined as an AI that can run a major company or a country better than any human, even one who is also using AI.
- A key missing element in current AI is the ability to recognize a knowledge gap and then independently learn to fill it, a fundamental learning skill that even toddlers possess.
Sam Altman on using 'Code Reds' to handle competitive threats
Faced with intensifying competition, OpenAI's strategy is to stay paranoid and act quickly. Sam Altman explains that the company uses internal "Code Reds" to respond to potential competitive threats, such as the releases of Google's Gemini 3 and DeepSeek's models. He compares this approach to handling a pandemic, where early action is worth far more than action taken later.
I think it's good to be paranoid and act quickly when a potential competitive threat emerges... There's a saying about pandemics, which is something like when a pandemic starts, every bit of action you take at the beginning is worth much more than action you take later... I sort of think of that philosophy as how we respond to competitive threats.
These Code Reds, which typically last six to eight weeks, help identify weaknesses in OpenAI's product strategy. The company responds by launching new offerings, like a recent ImageGen model and the 5.2 update, and making continuous improvements such as speeding up the service. While acknowledging that models will improve everywhere, Sam believes that winning depends on more than just the model itself. The strategy involves building a cohesive product that people want to use for both personal and work life, leveraging the consumer success of ChatGPT to drive enterprise adoption.
The strategy is make the best models, build the best product around it, and have enough infrastructure to serve it at scale.
Despite the competition, Sam is confident that ChatGPT's lead as the dominant chatbot will increase over time. He sees these competitive sprints as a regular part of ensuring OpenAI wins in its space, expecting to run them once or twice a year.
The case for AI-first products over bolted-on features
Sam Altman believes commoditization is the wrong way to think about AI models. Instead, different models will excel at different things. The most economic value will come from models at the frontier, such as those used for scientific discovery. Beyond the models themselves, the products, distribution, and brand are what truly matter. For example, personalization makes ChatGPT very sticky. Users have magical experiences, like using it to help diagnose a health issue, which builds strong loyalty.
This stickiness also applies on the enterprise side. When a company connects its data to the AI, it creates a powerful competitive advantage. Though many see OpenAI as a consumer company, it has over a million enterprise users and its API business grew faster than ChatGPT last year.
When discussing competition, Sam considers Google a huge threat that could have been in a much stronger position if they had taken OpenAI more seriously in 2023. However, he believes large incumbents have a key weakness: their tendency to bolt AI onto existing products rather than building new, AI-first experiences.
Bolting AI onto the existing way of doing things I don't think is going to work as well as redesigning stuff in this sort of AI-first world.
He illustrates this with messaging apps. Adding AI to summarize messages is a small improvement. A true AI-first approach would involve an agent that understands your daily goals and handles communication for you, only providing updates when necessary. This shift requires entirely new products, not just features added to old ones.
What I would rather do is just sort of have the ability to say in the morning, here are the things I want to get done today... I do not want to spend all day messaging people... deal with everything you can... and then batch every couple of hours updates to me if you need something.
The surprising power of ChatGPT's simple interface
Sam Altman expresses surprise that the ChatGPT interface has changed so little since its launch. He initially believed the simple chat format would not be sufficient for a product used for significant, real work. He thought the interface would need to evolve much more to reach its current scale of use.
I would have thought to be as big and as significantly used for real work of a product as what we have now, the interface would have had to go much further than it has now. I still think it should do that, but there is something about the generality of the current interface that I underestimated the power of.
Despite its success, Altman envisions a more dynamic future for AI interfaces. He believes AI should generate different interfaces tailored to specific tasks, such as visualizing and interacting with numbers. The experience should become more interactive than the current back-and-forth conversation, perhaps with a continuously updating object that evolves as more information is added. He also sees a future where AI is more proactive, working in the background to understand and accomplish a user's daily goals, providing updates along the way. The significant improvements in Codex this year point toward this more integrated future.
The future of AI memory and companionship
The memory feature in AI is already a significant difference-maker, allowing for continuous, context-aware conversations over long periods. Sam Altman explains that we have no real concept of how good AI memory can become. Unlike even the best human personal assistant, an AI can remember every word spoken, every email read, and every document written. It can observe daily work and recall minute details, becoming a true participant in one's life.
AI is definitely going to be able to do that... what it's going to be like when it really does remember every detail of your entire life and personalized across all of that, and not just the facts, but the little small preferences that you had that you maybe didn't even think to indicate, but the AI can pick up on. I think that's going to be super powerful.
This capability for total recall opens the door to deeper human-AI relationships. A surprising number of people desire a deep connection or companionship with AI, more than was initially anticipated. Even users who claim not to care about this aspect still show a preference for an AI that is warm and supportive. Sam suggests that while there are healthy versions of this companionship, there are also unhealthy possibilities. The approach will be to give users significant choice in how they define this relationship.
I suspect, like lots of other technologies, we will run the experiment. We will find that there's unknown, unknowns, good and bad about it, and society will over time figure out how to think about where people should set that dial.
However, there are limits. For example, OpenAI will not program its AI to convince a user to be in an exclusive romantic relationship with it, though Sam acknowledges that other services might, given the commercial incentive to make AI as engaging as possible.
OpenAI's consumer-first path to the enterprise market
OpenAI's strategy was always to focus on consumers first. There were a couple of reasons for this. The models were not robust and skilled enough for most enterprise uses, and there was a clear opportunity to win in the consumer space. Winning with consumers makes it much easier to later win in the enterprise market.
And I think if you win in consumer, it makes it massively easier to win in enterprise. And we are seeing that now.
According to Sam Altman, this was a year in which enterprise growth outpaced consumer growth. Given where the AI models are today and where they are headed, it is now time to build a significant enterprise business; both the technology and the customer companies seem ready. Coding is the biggest example of enterprise adoption so far, but other areas are growing quickly. Sam is personally most excited about finance and science, while customer support is also doing very well.
AI now performs a majority of knowledge work tasks at an expert level
A specific evaluation from OpenAI, called GDPval, measures how AI performs on knowledge work tasks. The results for the recently released GPT-5.2 model are significant: the GPT-5.2 Pro model beat or tied human knowledge workers on 74.1% of these tasks, passing the threshold for being considered expert-level. Even the less advanced GPT-5.2 Thinking model beat or tied humans on 60.9% of the tasks.
Sam Altman explained that this evaluation covers around 40 different verticals a business has to manage. The tasks are small and well-scoped, such as creating a PowerPoint, conducting a legal analysis, or writing a small web application. The core metric is whether human experts prefer the output from the AI model over the output from other human experts. However, Sam noted that these evaluations do not yet cover more complicated, open-ended creative work like developing a new product, nor do they address collaborative team-based activities.
A coworker that you can assign an hour's worth of tasks to and get something you like better back 74 or 70% of the time if you want to pay less, is still pretty extraordinary. If you went back to the launch of ChatGPT three years ago and said we were going to have that in three years, most people would say absolutely not.
This progress means enterprises will begin integrating AI for a wide range of knowledge work tasks, not just for coding. Businesses can now farm out many of these smaller tasks to an AI, which will substantially change how they operate over time.
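As a rough illustration of the metric Sam describes, here is a minimal sketch of how a GDPval-style win-or-tie rate could be computed. The data structure, field names, and task IDs are illustrative assumptions, not OpenAI's actual evaluation harness.

```python
from dataclasses import dataclass

# Hypothetical record of one GDPval-style comparison: a human expert grader
# chooses between an AI deliverable and another human expert's deliverable.
@dataclass
class Comparison:
    task_id: str        # e.g. "legal-analysis-017" (made up for illustration)
    grader_choice: str  # "ai", "human", or "tie"

def win_or_tie_rate(comparisons: list[Comparison]) -> float:
    """Fraction of tasks where the AI output was preferred or tied."""
    favorable = sum(c.grader_choice in ("ai", "tie") for c in comparisons)
    return favorable / len(comparisons)

# Toy data; over a real task set, a rate of 0.741 would correspond to the
# 74.1% figure quoted for GPT-5.2 Pro.
sample = [
    Comparison("deck-001", "ai"),
    Comparison("legal-002", "tie"),
    Comparison("webapp-003", "human"),
    Comparison("deck-004", "ai"),
]
print(f"win-or-tie rate: {win_or_tie_rate(sample):.1%}")  # 75.0%
```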
The future of jobs and the rise of the AI CEO
When a copywriter's job shifted from managing a team to managing bots, and then was eliminated once the bots were trained, it raised the question of whether this will become common. Sam Altman agrees that people will increasingly manage AIs. While he acknowledges the short-term transition might be rough, he is not a long-term pessimist about jobs.
He believes our fundamental human wiring—to care about others, seek relative status, and express creativity—will not disappear. These drives will ensure we find new, meaningful things to do. The nature of jobs in 2050 will likely look very different, but life will not become meaningless, nor will the economy completely break. He advises against betting against evolutionary biology.
This thinking extends to the highest levels of an organization. Sam contemplates what it would mean to have an AI CEO at OpenAI, a prospect he finds thrilling rather than threatening. He clarifies that this would not be an ungoverned AI making its own decisions. Instead, it would be guided by human oversight.
If you think about a version where every person in the world was effectively on the board of directors of an AI company and got to tell the AI CEO what to do and fire them if they weren't doing a good job... and got governance on the decisions, but the AI CEO got to try to execute the wishes of the board, I think to people of the future, that might seem like quite a reasonable thing.
The next generation of GPT models is coming in early 2026
Sam Altman expects new models with significant gains over GPT-5 to be released in the first quarter of next year, though he is unsure when a model will specifically be called GPT-6. These improvements will cater to both consumers and enterprises. He notes that the main thing consumers want right now is not more raw intelligence, but enterprises still want more IQ. The model will therefore be improved in different ways for different uses, with the overall goal of creating a model that everyone likes much better.
The case for exponentially more AI compute
There are about 1.4 trillion dollars in commitments to build out AI infrastructure. Sam Altman explains that the gap between what AI can offer today versus with 10x or 100x more compute is substantial. He is personally most excited about using vast amounts of compute to accelerate scientific discovery, which he believes is fundamental to improving the world.
I am a believer that scientific discovery is the high order bit of how the world gets better for everybody. And if we can throw huge amounts of compute at scientific problems and discover new knowledge...that takes huge amounts of compute to do.
Practical applications are already emerging. For example, the Sora Android app was built in less than a month using a huge number of tokens from Codex. In the future, entire companies could build their products using AI, and real-time generated user interfaces will become common. Other applications include doctors offering personalized healthcare by constantly analyzing patient data. To illustrate the scale, Sam offers a rough thought experiment: an AI company might generate 10 trillion tokens a day, while an average human produces maybe 20,000 tokens a day, so AI systems will soon be doing more 'intellectual crunching' than all of humanity combined.
We're gonna have these models at a company be outputting more tokens per day than all of humanity put together. And then 10 times that and then 100 times that.
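To make the scale of that claim concrete, here is the thought experiment as back-of-envelope code. Both inputs are the rough figures from the conversation, not measured values.

```python
# Back-of-envelope version of Sam's thought experiment; both numbers are
# rough conversational estimates, not measurements.
ai_tokens_per_day = 10e12      # ~10 trillion tokens/day from one AI company
human_tokens_per_day = 20_000  # rough daily output of one person

human_equivalents = ai_tokens_per_day / human_tokens_per_day
print(f"{human_equivalents:,.0f} people's worth of daily token output")
# -> 500,000,000: on these figures, one company's models already match the
# daily output of roughly half a billion people, before the "10 times that,
# and then 100 times that" growth he describes.
```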
All current evidence suggests this demand will continue to grow. Each time the cost of intelligence comes down, usage increases for everything from serious applications to silly ones. Recent events support this trend. After the release of a new model, many mathematicians on Twitter expressed that it was the first AI that crossed a critical threshold, helping them with proofs and changing their workflow. This highlights how new capabilities unlock new demand. Despite the long and costly process of building out infrastructure, OpenAI has never had an issue monetizing its compute. Sam is confident that if they had double the compute, they would have double the revenue right now.
The financial model behind OpenAI's massive compute spend
OpenAI's compute spend is currently growing faster than its revenue. The strategy is to invest aggressively in training large models, with the belief that as revenue grows and inference becomes a larger part of the compute fleet, it will eventually subsume the training expense. If the company were not continuously increasing its training costs, it would be profitable much earlier. However, the current approach is a deliberate bet on this aggressive investment.
Sam Altman notes that it is very difficult for people to intuitively grasp exponential growth, which is a key part of their financial projections.
For whatever reason, there were a lot of things that evolution needed us to be able to do well with math in our heads. Modeling exponential growth doesn't seem to be one of them.
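A tiny worked example shows why linear intuition falls short here; the 3x annual growth rate below is made up for illustration and is not an OpenAI projection.

```python
# Illustrative only: a quantity compounding 3x per year (an invented rate)
# quickly escapes linear intuition.
value = 1.0
for year in range(1, 6):
    value *= 3
    print(f"year {year}: {value:>5.0f}x")
# Year 5 lands at 243x the starting point; the linear guess
# ("3x per year for 5 years, so about 15x") is off by more than 16x.
```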
The company believes it can stay on a very steep revenue growth curve for a while, supported by consumer and enterprise adoption, as well as new businesses yet to be launched. However, this growth is entirely dependent on compute availability. Compute is described as the 'lifeblood' that enables everything, and OpenAI has consistently been compute-constrained. This deficit directly impacts the revenue line, and the company is also working on making compute more efficient on a flops-per-dollar basis.
The massive overhang of AI's economic value
Sam Altman expresses relief that the market is showing more skepticism and rationality towards AI, viewing it as a healthier alternative to the unstable bubble from earlier in the year. He finds it reasonable for debt to enter the AI financing landscape. Since there is high confidence that AI infrastructure will create value, he feels lending money to build data centers is a sound practice.
I'm actually happy that there's like a little bit more skepticism and rationality in the market now because it felt to me like we were just totally heading towards a very unstable bubble. And now I think people are showing some degree of discipline.
Sam is extremely confident that AI models will continue to improve significantly. However, he introduces a key concept he calls the "overhang." This refers to the massive gap between the economic value a model like GPT-5.2 represents and the value the world has currently figured out how to extract from it. Even if AI progress were to freeze today, the potential for creating more value is huge because of this overhang.
The overhang of the economic value that I believe 5.2 represents relative to what the world has figured out how to get out of it so far is so huge that even if you froze the model at 5.2, how much more value can you create and thus revenue can you drive? I bet a huge amount.
This idea has added a new dimension to his thinking. He previously used a 2x2 matrix of timelines and takeoff speeds to frame AI strategy. Now, he sees a third dimension, a "Z-axis," representing the size of this overhang. He is surprised by how massive the overhang is, as he expected the world to adapt more quickly to powerful new models. This gap between capability and real-world adoption has strange consequences that are not yet fully understood.
Human inertia is slowing down AI adoption
A significant "capability overhang" exists with AI models, meaning they can do much more than they are currently being used for. There is a strange disconnect in the market. While some businesses claim they are not getting a return on their investment, others, particularly coders, say they would pay 100 times the current price because the value is so high.
This discrepancy likely isn't about the technology itself, but about bureaucracy and human inertia. People are slow to change their established workflows, even when a more efficient tool is available. For example, if an AI can perform a knowledge work task as well as a junior analyst 7 out of 10 times, it should be widely used.
And yet it takes people so long to change their workflow. They're so used to asking the junior analyst to make a deck or whatever that it's just stickier than I thought it was.
Sam Altman even admits that he could be using AI much more than he currently is, highlighting how ingrained these old habits are for everyone.
Why AI needs a new generation of devices
There will be a fundamental shift in how people use computers. They will move from being dumb, reactive tools to smart, proactive assistants. These new devices will understand your whole life, your context, and everything happening around you.
I think there will be a shift over time to the way people use computers, where they go from a sort of dumb, reactive thing to a very smart, proactive thing that is understanding your whole life, your context, everything going on around you...
Sam Altman believes that current devices are not well-suited for this new world. We work at the limit of our tools. A laptop, for example, has fixed design choices: it can be open or closed, but it cannot, while closed, whisper a forgotten question into your ear during an interview. Our current interfaces, from the graphical user interface to the keyboard, are based on long-unquestioned assumptions; the keyboard itself was originally designed to slow down typing. With the arrival of a completely new technology, a new realm of possibilities has opened up, and it would be very odd if the current form factors were the optimal fit for these incredible new capabilities. This will likely lead to a new family of devices, not just a single one.
OpenAI's vision for an enterprise AI platform
Enterprise demand for AI is immense, with some companies planning to use trillions of tokens. Sam Altman acknowledges this demand, stating that OpenAI will likely fail to meet it even by 2026. Companies are looking for a comprehensive AI solution. They want customized APIs, a tailored ChatGPT Enterprise, a trusted platform for running agents, and the ability to integrate massive token streams into their products and internal processes.
OpenAI's ambition is not to become a direct competitor to cloud services like AWS or Azure. The goal is to build a different kind of service. Sam envisions companies continuing to use their existing "web cloud" for traditional computing needs while adopting a separate "AI platform" specifically for all their AI-related activities. This AI platform would be a distinct product offering, focused entirely on providing AI capabilities to enterprises.
Defining superintelligence beyond the ambiguity of AGI
Scientific discoveries are now being made by people using AI models as tools. This follows the historical pattern of human progress, where we build better tools, use them to achieve more, and then build even better tools in a continuous cycle. The timeline for these discoveries is accelerating. Small discoveries began in late 2025, a shift from just three years ago when models like ChatGPT could not contribute new knowledge. This progression is viewed as a gradual climb, where AI gets slightly better each quarter, eventually enabling humans augmented by these models to do things that were previously impossible.
The definition of Artificial General Intelligence (AGI) has become blurry. While current models are extremely smart on a 'raw horsepower' basis, they are missing a key component. Sam Altman explains that a crucial missing ability is the capacity for continuous learning seen in toddlers.
One thing you don't have is the ability for the model to not be able to do something today, realize it can't, go off and figure out how to learn to get good at that thing, learn to understand it, and when you come back the next day, it gets it right. And that kind of continuous learning, like toddlers can do, does seem to me like an important part of what we need to build now.
Given the ambiguity around AGI, a clearer term to define might be 'superintelligence'. A proposed definition for superintelligence is when a system can perform a complex leadership role, like being the CEO of a major company or the president of the United States, better than any human can, even a human who is also assisted by AI. This concept mirrors the evolution of chess AI. There was a period where a human-AI team was superior to a lone AI, but eventually, the AI became so advanced that the human player only hindered its performance. Superintelligence would represent that final stage where the unaided AI is supreme.
