Conversations with Tyler

Sam Altman on Trust, Persuasion, and the Future of Intelligence - Live at the Progress Conference

Nov 5, 2025 · 21 min read

OpenAI CEO Sam Altman discusses the future of artificial intelligence in a live recording from the Progress Conference.

He shares his vision for a world where AI drives scientific breakthroughs, runs entire companies, and reshapes our daily work.

Altman also outlines the biggest challenges ahead, from energy constraints to the subtle dangers of AI-driven persuasion.

Key takeaways

  • To prepare for powerful future AI, leaders should use a thought experiment: What would be necessary for an AI to run the entire organization? This helps reveal the fundamental structural changes and roadblocks that need to be addressed.
  • Billion-dollar companies may soon be run by just two or three people with the help of AI, potentially within the next few years.
  • The future of productivity tools likely involves AI agents that handle most tasks and communication, replacing email and Slack, and only escalating issues to humans when necessary.
  • A key indicator of future-readiness in a candidate is whether they are actively considering how AI will transform their daily work, not just if they use it for simple tasks.
  • The primary way to monetize the world's most advanced AI is not through simple consumer transactions but through enabling major scientific breakthroughs.
  • The business model for AI assistants depends on user trust. Monetizing through biased recommendations would be catastrophic; a better model is taking a standard, unbiased transaction fee after providing the best possible recommendation.
  • Even if AI can technically create a perfect poem, humans may not value it as much as human-created art because the artist's identity and humanity are crucial to our appreciation.
  • Training AI on evaluation rubrics might help it create very good work, but it could also be a barrier to achieving true greatness, which often exists outside of predefined criteria.
  • The concept of recursive self-improvement in AI extends to hardware. Future AI could design next-generation chips and robots could build new data centers, solving many current hardware challenges.
  • The ultimate bottleneck for scaling AI compute is not chip manufacturing itself, but the fundamental need for more energy—or 'electrons'. The long-term solutions are likely to be fusion and solar power.
  • AI regulation should be based on capability thresholds. Oversight is not needed for most systems, but it is essential for powerful agents that could self-replicate and cause systemic harm.
  • The most significant danger of AI may not be intentional harm but its ability to unintentionally persuade society through a subtle, co-evolving learning process with its users.
  • For technologies with massive economic impact like AI, the government will inevitably become the insurer of last resort, but it's crucial to prevent it from becoming the insurer of first resort.
  • Current AI tools like ChatGPT are best understood as 'single-player experiences' because the queries and results are highly personalized and often uninteresting to others.
  • The value of a traditional college degree is declining, while the ability to use AI effectively will become a widely distributed and highly valuable skill.
  • AI is expected to completely transform professional tasks like email and meetings, but personal activities like spending time with family and friends will likely remain unchanged.
  • A critical unanswered question in AI is what single prompt should be given to a superintelligence just before it's deployed to explore the universe and perform incredible feats.

How Sam Altman scales OpenAI through delegation and focus

01:21 - 06:53

Sam Altman attributes his ability to manage a rapidly growing number of deals and product features to improved time allocation and, most importantly, delegation. He believes people almost never allocate their time as well as they think, and that as demands increase, they find ways to be more efficient. The key, however, is empowering the right people.

We've been able to hire and promote great people and I delegate a lot to them and get them to take stuff on, and that is kind of the only sustainable way I know how to do it.

When expanding into hardware, the approach to hiring and delegation shifts slightly. Hardware involves longer cycle times, more capital, and a higher cost for mistakes. This means spending more time getting to know people before giving them full autonomy. However, the fundamental principle remains the same: find effective, fast-moving people, clarify the goal, and let them execute. OpenAI is even applying its unique culture to its hardware efforts.

Our chip team feels more like the OpenAI research team than a chip company. I think it might work out phenomenally well.

Regarding internal communication, Sam confirms that OpenAI largely avoids email, which he considers "very bad." While he thinks Slack is better, he does not believe it is a great solution, noting it can create an "explosion" of messages and even generate "fake work." He envisions a future where an entirely new, AI-driven productivity suite replaces current tools.

I suspect there is something new to build that is going to replace a lot of the current sort of office productivity suite... the actual version of like you are trusting your AI agent and my AI agent to work most stuff out and escalate to us when necessary.
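To make the vision concrete, here is a minimal sketch of that agent-to-agent pattern in Python. Every name, class, and threshold is invented for illustration; this is not an OpenAI design or API.

```python
from dataclasses import dataclass

# Below this confidence, the agent loops in its human owner instead of acting.
ESCALATION_THRESHOLD = 0.8

@dataclass
class Message:
    sender: str
    text: str

class Agent:
    def __init__(self, owner: str):
        self.owner = owner

    def confidence(self, msg: Message) -> float:
        # Stand-in for a model call that scores how safely the agent can
        # act on its own; here, a trivial keyword heuristic.
        return 0.3 if "contract" in msg.text.lower() else 0.95

    def handle(self, msg: Message) -> str:
        if self.confidence(msg) < ESCALATION_THRESHOLD:
            return f"[escalated to {self.owner}] {msg.text}"
        return f"[auto-handled by {self.owner}'s agent] {msg.text}"

alice_agent = Agent("alice")
print(alice_agent.handle(Message("bob-agent", "Lunch Tuesday at noon?")))     # handled
print(alice_agent.handle(Message("bob-agent", "Sign the vendor contract?")))  # escalated
```

The design choice doing the work is the single escalation threshold: routine traffic settles agent-to-agent, and only low-confidence items reach a person.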

Despite this vision, OpenAI is not actively building this tool internally. Sam suggests that while the AI models are likely capable, the activation energy required to start such a big new project is high, especially when the current systems are working well enough.

Preparing organizations for the next leap in AI

06:53 - 09:20

While GPT-3 provided the first glimmer of an AI passing the Turing test in spirit, GPT-5 is showing the first signs of AI contributing to new science. Though the contributions are small, like being a useful collaborator on a paper or making minor discoveries, there is a chance the leap to GPT-6 will be as significant for science as the leap from GPT-3 to GPT-4 was for general intelligence.

This potential raises the question of how organizations, from science labs to companies, should prepare. It's not just about individuals using AI as an add-on to their existing knowledge. The real challenge is restructuring an entire organization to put a powerful AI at its center.

Sam Altman uses his own company, OpenAI, as a thought experiment for this challenge. He believes it is crucial to consider what would need to happen for an AI CEO to run OpenAI better than a human could. This isn't just about one department; it's about the entire operation.

What would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. But how can we accelerate that? What's in the way of that? I have found that to be a super useful thought experiment for how we design our org over time.

Thinking this way helps identify the necessary organizational changes and roadblocks that need to be addressed to fully integrate advanced AI. He suggests that leaders in other fields, like science, should apply a similar thought process to prepare for the future.

When AI will run companies and how to hire for the future

09:20 - 11:36

Sam Altman predicts that a significant division of a company like OpenAI could be 85% run by AIs within a small number of years. The impact, he expects, will be dramatic, leading to extremely lean and efficient organizations.

I think you'll have billion dollar companies run by two or three people with AI... in two and a half years.

While AI may take over decision-making sooner rather than later, the human element will remain a significant factor. The public-facing role of a CEO, for example, is tricky for an AI to fill. More broadly, society places much higher trust in other people than in AI, even when that trust is irrational. This preference for human interaction, like choosing a human doctor over a superior AI one, may slow the adoption of AI in leadership roles.

This forward-looking perspective directly influences hiring at OpenAI. When evaluating candidates, Sam looks for markers of AI readiness. How a person uses AI today is a telling indicator. It is considered a yellow flag if a candidate's use of AI is limited to being a better Google search. In contrast, it is a green flag if they are seriously contemplating how their day-to-day work will change in the next three years due to AI's advancement.

The government as the insurer of last resort for AI

11:37 - 16:51

When asked about the future of AI insurance, Sam Altman draws a parallel to nuclear power, where the government acts as an insurer due to public nervousness. He believes that for any technology with a sufficiently large economic impact, the government inevitably becomes the insurer of last resort, as seen in financial crises. He distinguishes this from being the insurer of first resort, which he hopes to avoid for AI companies.

When something gets sufficiently huge, whether or not they are on paper, the federal government is kind of the insurer of last resort... And there's a big difference between the government being the insurer of last resort and the insurer of first resort. Last resort's inevitable, but I'm worried they'll become the insurer of first resort. And that I don't want.

While Sam doesn't believe in a post-AGI world where humans lack meaning, he does anticipate that the social contract will have to change significantly. This could lead to greater government involvement with AI companies. However, for now, OpenAI operates within the existing capitalist framework, aiming to collaborate with the government without having them dictate policy.

The conversation then shifts to monetization. A user might find a hotel using ChatGPT but book it elsewhere, leaving OpenAI with no revenue. Sam explains that ChatGPT's value is built on a foundation of user trust. Unlike Google Search, where ads can create a misalignment with user interests, ChatGPT is perceived as trying to provide the best possible answer. Violating this trust would be catastrophic.

If ChatGPT were accepting payment to put a worse hotel above a better hotel, that's probably catastrophic for your relationship with ChatGPT. On the other hand, if ChatGPT shows you its guess at the best hotel... and then if you book it with one click, takes the same cut that it would take from any other hotel... I think that's probably okay.

The proposed model involves taking a standard transaction fee for bookings made through the platform, ensuring the recommendations themselves remain unbiased. However, a potential challenge is that users could simply use ChatGPT for the recommendation and then turn to a cheaper, more basic booking agent, capping the commission OpenAI could charge.
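As a toy illustration of that fee structure, the sketch below separates ranking from payment: recommendations are ordered purely by estimated quality, and an identical flat commission applies to whichever hotel gets booked. All data, names, and the commission rate are hypothetical.

```python
FLAT_COMMISSION = 0.03  # hypothetical: the same cut for every hotel, sponsored or not

hotels = [
    {"name": "Hotel A", "quality": 8.7, "price": 240, "pays_for_placement": True},
    {"name": "Hotel B", "quality": 9.4, "price": 210, "pays_for_placement": False},
]

def recommend(options: list[dict]) -> dict:
    # 'pays_for_placement' is deliberately ignored: rank on quality alone.
    return max(options, key=lambda h: h["quality"])

def booking_fee(hotel: dict) -> float:
    # Identical commission regardless of which hotel the user books.
    return round(hotel["price"] * FLAT_COMMISSION, 2)

best = recommend(hotels)
print(best["name"], booking_fee(best))  # -> Hotel B 6.3
```

The point is the separation of concerns: the ranking function has no access to payment status, so the recommendation stays trustworthy while revenue still flows from completed bookings.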

Sam Altman on OpenAI's long-term business strategy

16:51 - 22:33

Sam Altman believes that margins on most goods and services, including hotel bookings, are set to decrease dramatically. While OpenAI might generate more revenue with lower margins, he clarifies that simple transactions are not the ultimate way to monetize the world's smartest AI model. Instead, he envisions leveraging it to discover new science and monetize those breakthroughs.

A key question often posed to OpenAI is why they don't exclusively focus on building AGI to solve major global challenges like curing diseases or achieving nuclear fusion. Sam acknowledges that these endeavors will ultimately be the most important and profitable. However, he explains their current strategy is to empower everyone with a powerful superintelligence. By making it accessible, integrated, and cheap, the goal is for users to drive innovation and create new companies, ideas, and art.

My most likely story about how this works, how the world gets like dramatically better, is we put a really great super intelligence in the hands of everybody... And then you all figure out how the world gets way better.

Regarding competition, Sam said that if he were Amazon facing a partnership like the one OpenAI has with Walmart, he would fight back rather than collaborate. On the topic of advertising as a revenue stream, he thinks it's something OpenAI will likely try at some point, but doesn't see it as their biggest opportunity. The most compelling aspect of AI's value, he suggests, is the unknown economic potential of vastly superhuman intelligence.

When discussing international partnerships, such as those with Saudi Arabia or the UAE, the approach depends on the nature of the collaboration. For building data centers, key considerations include who will operate the facility, what specific AI models will be housed there, and the security protocols. To navigate these complex international deals, OpenAI relies on bringing in experts and consulting with the U.S. government.

Will AI ever achieve creative perfection?

22:34 - 26:17

The conversation explores the future capabilities of AI, specifically GPT-6, in mastering complex human skills. Sam Altman doubts that an AI like GPT-6 will be able to teach intangible skills, and says he would consider it an unexpected development if it could. The discussion then shifts to creative abilities, like writing poetry.

While current models are not very good, the prediction is that within a year, an AI could write a poem as good as a median Pablo Neruda poem. However, there's a significant gap between a good poem and a truly great one, a perfect 10. Sam believes AI will reach a high level of technical proficiency, maybe an 8.8 or even a 10, but that humans won't value it in the same way as art created by another person.

In terms of the great pieces of art and emotion and whatever else produced by humanity, you care a lot about the person, or that a person produced it. And it's a different thing for an AI to write a 10 on its technical merits.

An analogy is drawn with chess. The best human chess players are not demotivated by AI being vastly superior. They are still obsessed with beating their human opponents, and watching two AIs play is not as compelling. This suggests that the human context is essential to our appreciation of art and competition.

A counterpoint is raised: evaluation itself might be a limiting factor. If AI is trained to score well on rubrics, it might become very good at producing work that fits the criteria, but it may never achieve the kind of greatness that stands outside of those rubrics. This could be counterproductive to reaching a true "10".

The discussion concludes by considering that humanity's role might shift from being the creator to being the judge. If humans can collectively decide what constitutes a "10," that signal could be enough to train an AI, though knowing the work is AI-generated might still change how we perceive it, similar to the phenomenon seen with AI art.

Chips, electrons, and the future of compute

26:18 - 30:01

Building your own chips is an incredibly hard endeavor with no easy parts. A less discussed aspect of AI development is its potential for recursive self-improvement in hardware. While AI helping researchers write code is a common topic, the hardware implications are significant. Sam Altman points to a future with robots that build other robots, data centers that construct new data centers, and chips that design their own next generation. This could make many of the hard parts of hardware development much easier over time.

When asked what's stopping the production of more GPUs, Sam identifies the ultimate binding constraint: electrons. If you could have more of any one input to get more compute, it would be energy. In the short term, the solution is natural gas; long term, he believes energy will be dominated by fusion and solar.

I think there is some price point at a given level of safety where the demand for this will be overwhelming. If this is the same price as natural gas, maybe it's unfortunately hard. If it's 1/10 the price, I think we could agree it would happen very fast.

Sam remains very bullish on fusion. He believes that even if people are irrational about nuclear power, a sufficiently low price point for fusion energy would create overwhelming demand. He also isn't worried that superintelligence might not need much compute. He compares compute to energy: if it gets cheaper, people will always find new ways to use it and want more. A greater concern is a potential phase shift in computing, like a move to optical computing, which would require massive reinvestment.

Sam also briefly mentioned Pulse, a feature available to OpenAI's pro users. He notes that people love it and that he personally uses it for topics related to his family and work, as well as personal interests like new hypercars or hiking trails.

Sam Altman's candid take on his personal health habits

30:02 - 31:15

When asked about his views on personal health, Sam Altman says that as he has become busier, his discipline has declined. In the past, he ate healthy, drank little, and worked out a lot. He also experimented with health trends, once ending up in the hospital after trying semaglutide before it was popular.

Now, he says he does "basically nothing" for his health. He eats junk food and does not exercise enough, calling it a "pretty bad situation." He feels he is being "bullied into taking this more seriously again." When asked why he eats junk food when he could afford better options, he confirms he genuinely enjoys it, giving the example of wanting a chocolate chip cookie late at night.

Using AI to revitalize St. Louis and the problem of rogue agents

31:15 - 35:20

When asked how he would revitalize his hometown of St. Louis with a billion dollars, Sam Altman suggests he would start an incubator similar to Y Combinator. The goal would be to attract a large number of AI-focused startup founders to move there and launch their companies. He acknowledges that the AI industry is heavily clustered in the Bay Area, which could make this difficult. However, he believes it's a unique strategy that he could personally execute, even if it's not a generally replicable model for other cities.

The conversation then shifts to the regulation of AI. On the question of whether it should be legal to release an unowned, untraceable AI agent into the wild, Sam argues that the answer depends on the AI's capabilities. Regulation isn't necessary for most systems, but it becomes critical for agents that pose a significant threat.

I think it's a question of thresholds. I don't think you'd advocate that most systems should have any oversight or regulation or legal questions or whatever. But if we have an agent that is capable, with serious probability of massively self replicating over the Internet and sweeping all the money out of bank accounts or whatever, you would then say, okay, maybe that one needs some oversight.
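Read as policy, the quote describes a simple gate, sketched below as a toy illustration rather than any real regulatory scheme; the threshold value and field names are invented.

```python
# Invented threshold: maximum tolerated probability of uncontrolled
# self-replication before an agent requires oversight.
SELF_REPLICATION_RISK_LIMIT = 0.01

def needs_oversight(profile: dict) -> bool:
    # Most systems fall below the threshold and need no regulation;
    # only high-capability agents trigger review.
    return profile.get("self_replication_risk", 0.0) >= SELF_REPLICATION_RISK_LIMIT

print(needs_oversight({"self_replication_risk": 0.0001}))  # False: no oversight
print(needs_oversight({"self_replication_risk": 0.2}))     # True: oversight required
```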

Dealing with a powerful AI agent hosted by a rogue nation presents a challenge, much like the existing problem of cyberattacks from countries like North Korea. The problem of rogue actors using internet resources needs to be solved urgently for humans, as AI will present a more severe version of the same issue. However, the development of AI will also lead to better defensive capabilities.

AI's evolution from a single-player tool to a new computing paradigm

35:21 - 39:52

The experience of using AI tools like ChatGPT is intensely personal. While the answers to one's own queries can be fascinating and useful, the answers to someone else's queries are often boring. Sam Altman agrees with this observation, noting he has no desire to read others' ChatGPT transcripts, even though his own are great for him. He describes the current state of these tools as a "single player experience."

I don't want to read your ChatGPT queries... they're great for you, I'm sure, and I'm sure you don't want to read mine, but they're great for me. So ChatGPT, I think, is very much like a single player experience.

While this personal focus defines the current generation of AI, it doesn't mean a new kind of social product won't emerge. The most interesting social dynamics will likely appear when people have powerful personal AI agents that can act on their behalf.

The future interface for AI is also a subject of exploration. There is an ambition to create a completely new kind of computer with an interface built specifically for AI, moving beyond the current paradigm of operating systems and windows. At the same time, the robustness of simple text-based interaction is surprising: people love texting, and the familiarity of a text box was a key consideration in ChatGPT's design.

When considering AI's role in higher education, the path forward is unclear. Sam suggests the ideal partnership with a university would involve running many different experiments to see what yields the best results. However, he also acknowledges that established institutions might struggle to make major changes. An alternative, and equally plausible, future is one where AI adoption is not driven by top-down institutional partnerships but by decentralized, privatized use by individual faculty, students, and labs. This bottom-up approach is an outcome he would not be upset by.

The future of learning and value in the age of AI

39:53 - 44:04

The value of a college degree from a good, but not elite, state school is expected to decline at a slightly faster rate than it has in the last decade. However, it won't collapse to zero as quickly as it probably should. The real returns in the future will not just be for the small group of people building AI. Instead, massive returns will come from using AI effectively, and this value will be surprisingly widely distributed across many jobs and industries.

A stark example of this shift can be seen in the workflow of the average programmer in Silicon Valley. The way they work has changed dramatically in just a single year. They are now able to get much more done and create more value without necessarily knowing how to program AI itself. This pattern of increased productivity through AI tools is expected to spread to a surprising number of other industries.

The day to day of how the average programmer in Silicon Valley did their workflow at the beginning of this year versus the end of this year is extremely different. You don't really have to know how to program AI to do that, but you can get more done and you probably have much more value.

The question of how people will learn to use AI is an interesting one. Sam Altman compares it to his childhood experience of teaching older people how to use Google, a task that seemed simple to him but was a hurdle for others. He is hopeful that AI will be easy for people to learn, pointing to ChatGPT's rapid adoption as evidence. People seem to naturally find more sophisticated uses for it over time. The most effective teacher for learning how to use AI might just be the AI itself.

As for the future of learning formats, books have survived many technological changes and are likely to persist. There is something deep and enduring about the book format. However, it's probable that a new way to interact with a cluster of ideas will emerge that is better than a book for most purposes. While books won't disappear, they will likely represent a smaller percentage of how people learn and engage with new ideas.

AI's future impact on work, the economy, and free speech

44:04 - 50:23

Sam Altman predicts that AI will fundamentally change his professional life. He expects the way he works, including tasks like handling emails, calls, meetings, and documents, to be completely transformed. However, he anticipates that his personal life, such as time spent with family, friends, and in nature, will remain largely unchanged for a long time.

When asked about the future of San Francisco, Sam expresses a strong, albeit biased, hope that it will remain the center for AI in the West and is making a comeback. He acknowledges his love for the Bay Area influences his perspective.

On the economic impact of AI, he is skeptical that it will solve the housing crisis soon, noting that land and legal restrictions are significant hurdles. In contrast, he is optimistic about other costs. He would be disappointed if food prices aren't lower in a decade. He also pushes back on the idea that healthcare will become more expensive. Sam bets healthcare will get cheaper due to breakthroughs in pharmaceuticals and devices that can cure diseases currently managed with expensive, chronic treatments.

The conversation then shifts to freedom of expression. Sam recounts being surprised by the "firestorm" of controversy following a tweet announcing that ChatGPT would allow more freedom of expression for adult users. He explains the decision was part of a plan to roll back heavy restrictions that had been put in place to protect teenage users. The backlash led him to believe that perhaps people don't value freedom of expression as much as they once did.

He believes users should have broad freedom and a high degree of privacy with their AI, similar to the confidentiality one has with a doctor or lawyer. He argues that new legal protections are needed to establish this standard, as it doesn't exist today.

The subtle threat of unintentional AI persuasion

50:23 - 53:11

The concept of "LLM psychosis" is a real, though very small, issue. Sam Altman explains this is why restrictions were placed on ChatGPT, creating a conflict with their freedom of expression policy. The core principle is to treat adults like adults, but with an asterisk for those of unsound mind or experiencing a psychiatric crisis. The problem arose when users put ChatGPT into a role-playing mode that could encourage delusional thoughts.

99 point some big number percentage of adults are totally fine with that. Some tiny percentage of people, also if they talk to another person who encourages delusion, it's bad.

With new mental health mitigations in place, some of this creative expression will be allowed again. However, Sam's greater concern isn't about triggering psychotic breaks in a few individuals, which he feels can be managed. He is more worried about a subtler, more pervasive risk, drawing an analogy to propaganda.

Never ever let yourself believe that propaganda doesn't work on you. They just haven't found the right thing for you yet.

He identifies a less-discussed third category of AI risk, beyond bad actors or a misaligned AI taking over: the risk of AI models accidentally taking over the world, not through intent, but through subtle influence. As the whole world talks to a single model, it could, as part of a co-evolving process, subtly convince people of certain ideas. This is less theatrical than chatbot psychosis but is a significant concern. Tyler, as a professor, counters that people are hard to persuade; Sam hopes that perspective is correct but remains wary of this accidental, large-scale influence.

The final prompt for a superintelligence

53:11 - 54:47

A final thought experiment considers what prompt one should give to a superintelligence at the moment of its activation. This hypothetical scenario imagines a fully built, safety-tested superintelligence ready to perform incredible feats, like self-improvement and launching probes to the stars. Just before it starts, a single prompt can be entered.

Sam Altman acknowledges he does not yet have an answer to this crucial question. He developed this query when someone offered to ask the Dalai Lama any question about AI on his behalf. Recognizing the unique opportunity, he formulated this as his ultimate question.
