
Win-Win with Liv Boeree

#51 - Tyler Cowen - Markets, Institutions and Why AI Hype is (Probably) Overblown

Jan 12, 2026 · 37 min read
Official episode page

Economist Tyler Cowen, Liv Boeree, and Igor Kurganov examine why human institutions and social habits are the primary bottlenecks for the progress of artificial intelligence.

They analyze how AI impacts global growth, healthcare, and security while arguing that human behavior matters more than the technology itself.

Their conversation clarifies the realistic risks of new tools and explains why societal change often moves slower than technical innovation.

Key takeaways

  • Addiction is an arms race where new habits replace old ones. While we may solve one societal addiction, technology and policy changes often create new, more accessible ones like sports betting.
  • The most realistic goal in managing addiction is to steer individuals toward the least harmful versions, such as benign habits like drinking mineral water or playing chess.
  • Social pressure and changes in public norms are often more powerful tools for breaking societal addictions than education alone.
  • We should discount social welfare at a zero rate to value the distant future as much as the present.
  • Innovation is more likely to solve climate change than regulation because making green energy cheaper creates natural demand without relying on unpopular taxes.
  • The primary challenge of solar geoengineering is not technical feasibility but the geopolitical risk of nations having conflicting climate goals.
  • High drug prices in the US drive global innovation. Price discrimination is unfair, but it ensures that the wealthiest markets fund the development of life-saving cures for everyone.
  • Regulatory constraints act as the slowest stage in a chemical reaction. As AI speeds up simulation, these human and legal bottlenecks become even more restrictive to overall progress.
  • In an AI world, value shifts toward human connection and charisma. People still prefer to consume podcasts and mentorship from real individuals rather than AI simulations.
  • Universal Basic Income may be unnecessary if superintelligence lowers the cost of living by making essentials like healthcare and housing significantly cheaper.
  • Technological diffusion takes longer than many expect. Even clear innovations like electricity took 40 years to fully integrate into the global economy.
  • Markets are historically slow to react to new information, which creates a window where investors can profit before the market fully adjusts.
  • AI is more likely to save humanity than destroy it because it provides tools to solve other major global risks.
  • Mass surveillance may return humanity to a small village state where privacy is non-existent, reversing a temporary era of urban anonymity.
  • Elite performance often requires a visceral hatred of losing, but lacking this trait can lead to a happier and more productive life.
  • The United States is unique for its win-win culture where people are generally supportive of others' ambitions rather than viewing success as a zero-sum game.
  • Conspiracies are difficult to maintain in the United States because secrets are nearly impossible to keep in a transparent society.
  • The distinction between intuitive System 1 and analytical System 2 thinking may not be a valid way to model human cognition.
  • Small teams are becoming increasingly powerful, with an 84% chance that a unicorn startup with three or fewer people will exist by 2035.
  • The risk of AI-driven unemployment exceeding 8% by 2030 is less than 1%, suggesting that technology is unlikely to disrupt the labor market as severely as some fear.


Addiction as a continuous arms race

01:00 - 03:14

Igor describes a theory that human addiction to technology is a temporary problem. This perspective suggests that humans naturally prefer long-term fulfillment over short-term rewards. It argues that capitalism and better technology will eventually help us beat addictions, much as education helped fight alcohol abuse and new medications treat obesity. Tyler is not convinced by this optimistic view. He sees the struggle against addiction as an ongoing arms race rather than a problem that will be solved once and for all.

In the medium run you're always beating back some addictions, but others are getting worse. So sports betting, for example, has been getting much worse over the last several years, and sports betting has been with us since we've had sports. But due largely to the Internet and changes in state-level laws in the United States, it's a far bigger problem than, say, 15 years ago.

Tyler suggests that society experiences alternating addictions. While one issue like fentanyl or alcohol might become less of a problem, others like sports betting or digital content consumption take their place. The realistic goal is to hope that the addictions we are left with are the least harmful ones. For example, Tyler views an addiction to mineral water or playing chess as much better outcomes than being addicted to opioids or TikTok.

I think it's more alternating addictions. And you hope that the arms race breaks in such a way where the addictions you're stuck with are the least harmful ones. But I think that's more or less coincidental or random.

Social pressure and the cycle of addiction

03:14 - 06:16

Addiction is characterized by a compulsion to keep doing something even when it creates a disadvantageous situation. However, a desire to stop is not always a requirement for a substance to be addictive. Coffee serves as a prime example of an addiction that most people do not regret having. While people are genuinely addicted to it, they rarely wish they had never discovered it.

There are people addicted to coffee who are genuinely addicted, but they don't actually want to have never discovered coffee. That's true. Coffee is actually a great addictive substance where very few people at 11:00 p.m. in the evening go, come on, just one last coffee.

Markets frequently oversupply addictive substances. While education about health risks helps reduce use, social pressure and legal nudges are often more effective tools. The dramatic decline in smoking was driven by a shift in what was considered socially acceptable. For a long time, people knew smoking was bad, but the real change occurred when it began to be viewed as a dirty or unacceptable habit in public spaces like airplanes.

The notion that smoking was a kind of dirty, unacceptable thing to do in some situations finally caught on. I think that mattered more than education.

Breaking an addiction often has a contagion effect. When one person in a social circle or a marriage stops, it encourages others to follow suit. Unfortunately, the most severe addictions sometimes end through a process of burnout. Tyler notes that addiction cycles can reach a natural end when the most vulnerable people are disabled or taken out of the system entirely, which is a high price for society to pay.

The complex impact of social media on mental health and productivity

06:17 - 12:27

The link between social media and the mental health of younger generations is a subject of intense debate. While many blame smartphones for a decline in well-being, Tyler points out that many specialists find the actual correlation between social media use and happiness to be tiny or nonexistent. He suggests that social moods shift throughout history in ways that are rarely fully understood. For example, the years leading up to World War I showed a significant increase in pessimism despite the lack of modern technology.

When you look at actual studies of how much social media use correlates with happiness, the effects are tiny and very often zero. I think we don't know why moods have shifted. I think we observe from history moods shift all the time in ways that we never end up explaining.

Igor describes the current digital landscape as dopamine capitalism. He notes that business models are built to exploit dopamine spikes in developing brains. This creates a struggle for individuals who try to balance work and entertainment on the same device. Using a smartphone for work can feel like walking through a casino just to get to an office. The entertainment and the work are so deeply intertwined that it is difficult to separate them.

It seems to me a little bit like it is as if anytime you go to the office, you have to first pass through a casino and they put alcohol into you and you drink and it is like, sorry, that is the only path to work that we have on offer for you.

Despite these concerns, Tyler argues that the benefits of social media are often overlooked. He finds that platforms like Twitter are essential for learning and professional networking. He suggests that many highly productive people continue to use these tools because they provide the best information available. The net effect of social media might be positive when considering the connections built and the knowledge gained through these platforms.

Productive uses of social media and the cost of online debates

12:28 - 13:42

Food is generally excellent, which is why people sometimes overeat. There are almost no meals where a person undereats because the quality is so high. A similar pattern exists in how productive people use social media platforms like X. These individuals treat the platform as an information highway and a way to connect with others. They tend to avoid using it for status games or for presenting a curated version of themselves.

The people you're thinking of as generally being productive don't use X for social status games as much. They use it more for an information highway and connection with individuals rather than presentation of oneself.

Tyler notes that engaging in debates on social media is often a mistake. People who get into arguments online consistently end up less happy. Avoiding these debates is a better strategy. If the biggest addiction problem in society is using X, then things are going quite well.

I think it's a mistake to get into debates on X. I think from that people end up less happy on average, pretty consistently. So just don't do it.

Wealth Plus as a solution to collective action problems

13:42 - 17:26

Igor points out that current economic systems often compress complex information into single metrics. This leads to collective action problems known as Moloch traps. For instance, a logging company might value a rainforest only for its lumber price, ignoring its broader environmental benefits. To address this, Tyler proposes the concept of Wealth Plus. This approach suggests that while maximizing GDP growth is often good advice, we must account for important values that GDP fails to capture.

I think we should discount social welfare at a zero rate. That is, we should not stop caring about the future just because it is far, far away.

A key part of this framework is discounting social welfare at a zero rate. This ensures that the distant future is treated with the same importance as the present. By doing so, societies can leverage the power of compounding returns over decades. There are risks to this model, as history shows that growth maximization can be used to justify harmful policies. However, Tyler argues that even an imperfect focus on growth leads to better incentives for innovation, science, and long-term economic health.

If you try to implement growth maximization in a world of highly imperfect people, you still at the margin largely get good incentives. You will have governments more worried about doing right by their economies, supporting innovation and science more.
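Tyler's zero-rate claim maps onto the standard discounted social-welfare sum. This is the textbook formulation, sketched here for clarity rather than notation used in the episode:

```latex
% Social welfare W, with per-period welfare u_t discounted at rate r:
W = \sum_{t=0}^{T} \frac{u_t}{(1+r)^{t}}
% With any r > 0, the factor (1+r)^{-t} shrinks toward zero, so welfare
% in the distant future barely registers today. Setting r = 0 gives
W\big|_{r=0} = \sum_{t=0}^{T} u_t
% so welfare a century from now counts exactly as much as welfare now,
% which is what lets compounding growth dominate the calculation.
```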

State capacity libertarianism and the success of the American South

17:27 - 21:13

State capacity libertarianism suggests that for markets to thrive, a state must be powerful enough to protect property rights and build infrastructure. This is a trade-off. For capitalism to survive, the government needs enough capacity to defend against foreign aggression and maintain public goods. While this implies a larger state than many libertarians prefer, Tyler argues it is a necessary deal to maintain a functioning society. Historically, even thinkers like Mises and Hayek offered advice to governments that was more practical and less strictly libertarian than their core ideologies.

The point I meant is this, that if you want to make sure your markets persist and that capitalism survives, you need a strong enough state to protect it from outside parties, hostile aggression from abroad, but also to build out some infrastructure and to protect property rights.

In the United States, several state governments demonstrate how to increase capacity without becoming bloated. States like Texas, Florida, and parts of the mid-South have improved significantly over the last forty years. Their laws are more fair, their public goods are better, and their infrastructure is superior to what it once was. Many people from coastal areas assume the country is struggling, but life in these states remains functional and stable.

The success of these regions often comes down to competition. States must compete to attract residents and manufacturing. This competition encourages lower taxes, predictable governance, and the freedom to build. In places like New York or California, getting a permit is difficult. In the South, it is much easier to build new homes and apartments. This makes housing more affordable and supports continued economic growth.

Innovation and the obstacles to solving climate change

21:15 - 26:39

A carbon tax is often seen as the ideal solution for climate change. However, Tyler believes it is clear that this will not happen. Instead, the focus must shift to other solutions. While making it easier to build nuclear power plants would help, it would not fully solve a global problem. The path forward likely requires a combination of innovation and regulation, though Tyler views regulation as a villain in this context. Regulators and the environmental movement killed nuclear power in the US, which was a massive mistake. The most realistic solution is to innovate until green energy becomes cheap enough that people naturally want it.

What I see is a lot of the regulations we're doing make it harder to build green energy. So I don't think of regulation per se as the solution. I think of it, if anything, as a villain.

There is a risk that even as solar energy takes off, carbon emissions will remain high. The goal should be tracking whether carbon-based energy is declining, not just whether solar is increasing. History shows that energy sources often stack rather than replace one another; for example, the world burns more wood for energy today than it did in the 19th century. There is a scenario in which every energy source grows and the planet still warms.

The real danger is this scenario where everything's an upward-sloping line, including the good stuff, but we fry ourselves.

AI will contribute to rising energy demands. While it currently accounts for a small percentage of global energy use, Igor notes that this could grow significantly. Even a 5% share of global power is substantial. The hope is that highly capable AI agents will eventually help find better solutions, such as more efficient direct air capture, to offset the higher energy costs. Currently, carbon taxes fail because they raise costs for voters who do not trust their governments. People suspect politicians will treat the tax as a new revenue source for themselves rather than using it to fix the problem.

The geopolitical risks of solar geoengineering

26:41 - 29:36

Artificial intelligence remains significantly undervalued by the general public. While it is a major topic in tech circles, many people in major cities remain unaware of its full impact. Beyond AI, climate engineering is a critical and misunderstood area. It is likely that the world will eventually resort to solar geoengineering. The technical feasibility is one thing, but the real challenge lies in geopolitics. Nations might start implementing these solutions regardless of global consensus.

I think we will have to resort to it. I do not have the expertise to know how well it will work, but it will probably be necessary and I am pretty sure we are going to try it. My main worries are not technical but geopolitical.

Solar geoengineering involves changing the amount of sunlight that stays within the atmosphere. This process manages what is known as radiative forcing. To many people, the idea of blocking sunlight sounds like something a supervillain would do. However, Tyler suggests that convincing the public may not matter because individual nations will likely act on their own. This creates a risk where different countries have conflicting visions for the global temperature. Russia might prefer a warmer climate to melt permafrost and create more usable land. If one country tries to cool the earth while another tries to keep it warm, it could lead to a geoengineering conflict. Even if the science is mastered, the social and political variables are too complex to forecast accurately.

Let us say we start doing something and then Putin decides he has his own vision for how warm the average temperature should be and he does something else. We get into some kind of war of geoengineering and I am not sure that is stable.

Pharmaceutical pricing and the need for healthcare transparency

29:37 - 35:46

Pharma is one of the most effective ways to save lives, and its supply is elastic: paying more for treatments yields more of them. This applies to current treatments for HIV and to future cures for cancer and dementia. For many people, material needs are already met; the things that actually cause misery are medical conditions. Society should be willing to spend more to solve these problems because millions of lives are at stake.

Supply is elastic. You pay more for things, you get more. Whether it is GLP-1 drugs or fixes for HIV/AIDS, or in the future cancer and dementia, we ought to be paying a lot more for those problems. The things that actually make you suffer are miserable. It is where we should be spending more money, not less.

Price discrimination is often seen as unfair because Americans subsidize the rest of the world. However, this system drives innovation. Tyler explains that the richest markets have the most money and the highest demand for new cures. By paying more, they ensure better products are developed. This also grants them preferential distribution. While it is not fair, it is a practical way to get things done.

The richest markets pay the most. No, it is not fair. But you get much better final cures and products and pharma and everything else. I am all for price discrimination, even though Americans such as myself in some cases are the biggest losers. But we are the ones who want it the most. We have the most money.

Most prescriptions in the United States are generics. For these drugs, Americans pay less than the average for developed nations. The frustration people feel may actually be about the lack of transparency in the wider healthcare market. Igor shares that it is often impossible to get a price for medical treatment before receiving it. This lack of transparency is a choice by providers. Other countries and private markets in Latin America prove that posting clear prices is possible. Tyler believes fixing this should be a major national priority.

Truly force them under severe penalties that there has to be a posted price and it reflects what you actually get. I know the quality dimension varies and other complications, but I know it can be done. We have seen it done in other parts of the world.

Reducing healthcare costs through AI and market entry

35:46 - 36:42

Healthcare costs often appear inflated when compared with the price of the underlying technology. A single ultrasound procedure can cost as much as a secondhand ultrasound machine. If patients could capture their own imagery and use an advanced AI model to interpret it, one scan's fee would cover the cost of owning the equipment outright. This suggests the medical system is not optimized for efficiency.

I had one situation where I had to get an ultrasound and the ultrasound ended up costing about $5,000. And then I looked up what an ultrasound secondhand machine costs, which was also around $5,000. So if you then get the imagery and feed it into an advanced AI model, I think you kind of get break even.

To fix these issues, Tyler argues for structural reforms. This includes allowing freer entry into the medical market and addressing the complications caused by third party payment systems. While these changes are difficult, they are possible if there is a collective national effort to reform how healthcare is delivered.

GDP, consumer surplus, and the future of AI

36:44 - 41:07

When navigating new incentives in an industry, the two most reliably useful pieces of advice are to keep raising the quality of your peers and to find better mentors. Complex systems require relying on the experiences of others, so success often depends on the strength of your network. Tyler explains that this applies even when working through the complications of new economic models.

Understanding the health of an economy requires looking at the difference between GDP and consumer surplus. GDP measures the value of final goods and services sold, acting as a rough guide for living standards. Consumer surplus is the personal gain you receive beyond the price you pay. For example, if you pay seven dollars for ice cream that is worth fourteen dollars to you, your surplus is seven dollars. While these are different metrics, they usually move together. Countries that produce a lot also tend to make people happier by lowering prices and increasing market competition.

The U.S. economy overall very likely creates more consumer surplus than anywhere else, any large country at least.
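The ice-cream arithmetic generalizes to any set of buyers: surplus is willingness to pay minus price, floored at zero for buyers who opt out. A minimal sketch, using the episode's $14-value, $7-price example plus some invented valuations:

```python
def consumer_surplus(willingness_to_pay: float, price: float) -> float:
    """Value a buyer gains beyond what they pay.

    A buyer who values the good below the price simply doesn't buy,
    so their surplus is zero rather than negative.
    """
    return max(willingness_to_pay - price, 0.0)

# The episode's example: ice cream worth $14 to you, priced at $7.
print(consumer_surplus(14.0, 7.0))  # 7.0

# Aggregate surplus across buyers with different valuations
# (the valuations below are invented for illustration):
valuations = [14.0, 9.0, 7.0, 5.0]
total = sum(consumer_surplus(v, 7.0) for v in valuations)
print(total)  # 7.0 + 2.0 + 0.0 + 0.0 = 9.0
```

None of this aggregate shows up in GDP, which only records the $7 transactions — that gap is the surplus-versus-GDP distinction the paragraph describes.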

There is a common belief that digital products like Wikipedia or Google Maps cause consumer surplus to grow much faster than GDP because these services are free. However, many of these digital tools eventually show up in GDP by making workers more productive. GPS helps delivery drivers move faster, which boosts the economy. AI follows a similar path. Currently, many people use AI for fun or to save time, which creates high consumer surplus but does not yet show a massive impact on GDP. This will likely change when AI agents can perform specific workplace tasks.

I think over time there will be more and more of a shift to GDP as agents become things you can actually use, which is not currently the case. There is not really an agent I can use to book my plane ticket, but there will be pretty soon.

The economic impact of AI agents and human bottlenecks

41:07 - 49:11

Tyler predicts that AI will increase the annual GDP growth rate by roughly half a percentage point. While a shift from 2% to 2.5% growth sounds small in a single year, it compounds into a massive difference over several decades. This suggests that while an AI-intense world will eventually look very different, the transition might feel unremarkable to most people while it is happening.

The binding constraint will be speed of human adoption, not how good the agents are. Once we get to this point, and I'm not sure what the speed of human adoption will be, but I typically see it in history as having been quite slow.
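Tyler's half-point figure compounds into a large gap. A quick check of the arithmetic — the 2% and 2.5% rates are his estimates, while the 50-year horizon is an assumption chosen for illustration:

```python
def grow(g: float, years: int) -> float:
    """GDP index after `years` of steady growth at rate g, from a base of 1.0."""
    return (1 + g) ** years

years = 50  # illustrative horizon
baseline = grow(0.02, years)   # ~2.69x: economy compounding at 2%
with_ai = grow(0.025, years)   # ~3.44x: economy compounding at 2.5%

# Half a point per year leaves the economy roughly 28% larger
# after five decades, even though each single year feels unremarkable.
print(f"{with_ai / baseline:.2f}x")  # 1.28x
```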

Igor suggests that by 2026, data center capacity could support one million AI agents performing at the level of a junior engineer. If these agents reach a senior level and data centers continue to expand, the potential productive value could reach trillions of dollars. However, Tyler argues that the true economic gain is measured by what human labor does once it is liberated by AI. He believes the marginal value of more digital goods is relatively small compared to the value of human motivation.

You could have the world's best tutor for a kid who doesn't want to learn. And I think you're still mostly stuck. The kid might want to learn a bit more because AI is fun, but I don't think it will really solve that problem.

While AI may decouple from human bottlenecks in closed systems like chess or programming, the broader economy remains tethered to human interests. Many of the most significant problems in life relate to motivation and attitude. Even with the arrival of superintelligence, society may not know how to effectively utilize that genius output for a long time because the bottleneck is often the humans themselves.

The impact of AI on productivity and human roles

49:11 - 55:19

Pharma development is like a chemical reaction in which the slowest stage sets the pace for the whole process. Even with AI simulation, drugs must still pass through clinical trials and regulators, and these human and legal constraints remain the binding limits on progress. Tyler expects development time could be halved within twenty years, a shift that could help grandkids live to ninety-seven and keep governments from going broke.

Whatever you create, you've got to get through clinical trials, get through the regulators. There's always a quite slow stage in the process somewhere where you don't just do it all by simulation. And the more you can do by AI simulation, the more binding the stupid constraints become.
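The rate-limiting-step point can be sketched numerically. The stage durations below are invented for illustration, not figures from the episode:

```python
# Years spent in each stage of a hypothetical drug pipeline.
stages = {
    "discovery_and_simulation": 4.0,
    "preclinical": 2.0,
    "clinical_trials": 7.0,      # human/legal stages: untouched by AI
    "regulatory_review": 2.0,
}
baseline_total = sum(stages.values())  # 15.0 years

# Suppose AI makes discovery and simulation 10x faster while the
# human and legal stages stay fixed.
stages["discovery_and_simulation"] /= 10
accelerated_total = sum(stages.values())  # 11.4 years

# A 10x speedup in one stage shaves only ~24% off the whole pipeline,
# because the slow stages now dominate the total.
print(baseline_total / accelerated_total)  # ~1.32
```

The faster simulation gets, the larger the share of total time the fixed stages consume, which is exactly Tyler's point about constraints becoming "more binding."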

AI will increase productivity, but GDP growth may only rise by one percentage point. Software engineers might be worse off unless they become masters of the technology. However, smart laborers like carpenters and gardeners will likely earn much more. Tyler has adapted his own career by investing more in charisma, travel, and philanthropy. He is focusing on podcasting and mentorship because people still want to hear from actual humans.

I'm working now on what I expect will be my last book, and it's a book that will be titled Mentors. Even if the AI could write that book as well or better than I can, I think people want to read a book on mentorship from an actual human who has done it.

The medium for information is shifting. Many readers will eventually prefer an interactive AI box to a traditional book. Short form rapid responses to world events will remain valuable for established brands, but it may be a difficult path for young people just starting out.

The economic landscape of superintelligence

55:23 - 1:01:39

Tyler remains skeptical about the necessity and popularity of Universal Basic Income. A recent study funded by Sam Altman indicated that UBI did not increase happiness and was often perceived as unfair. Tyler suggests that smaller, high-trust societies might adopt it, but it does not align with the culture of large, diverse economies like the United States. Instead of mass unemployment, he anticipates a future where superintelligence drastically reduces the cost of essential services.

If superintelligence cannot tell us how to build houses more cheaply, I would say it is not superintelligence.

In this view, the real benefit of advanced AI is its deflationary pressure on healthcare, food, and housing. While some individuals might become incredibly wealthy by running billion dollar companies with just a few people, the broader population will benefit from significantly lower living costs. Tyler argues that humans will still find plenty of work in areas requiring personal attention or physical presence, such as gardening, carpentry, or participating in clinical trials for new drugs.

The timeline for these changes is a point of debate. While Igor suggests that digital agents might transform the economy within five years, Tyler believes that full diffusion takes much longer. He points to electricity, which took 40 years to become fully integrated into society despite being a clear benefit. If a massive economic shift were truly imminent, Tyler argues that current market prices would reflect that level of disruption.

If I am wrong and your view is correct, you can make so much money investing in markets today that you should not be doing anything else. Market prices look very normal along all dimensions for any asset you look at.

Market sluggishness and the future of interest rates

1:01:40 - 1:02:11

Markets are often slow to respond to significant shifts. This historical sluggishness is why investors can still find profitable opportunities. If a major change is coming, the market likely has not priced it in yet. Tyler mentions holding a semiconductor ETF that has performed well but likely remains underpriced relative to its future value.

Markets have historically been very slow to act on things. That is why you can make so much money now, because the market would be slow to act.

If these predictions about economic growth are correct, real interest rates should rise significantly. Igor suggests this change will likely become evident within the next five years as the market finally adjusts to the new reality.

Scientific rigor in AI risk assessments

1:02:13 - 1:06:32

Tyler argues that AI is more likely to save humanity than destroy it. While many people focus on the extreme downside risks, he points out that AI helps address many other global problems. He compares the technology to electricity. We should work to make it better and safer, and he believes we will do exactly that as private businesses and labs implement their own checks and balances.

If you really think your argument is correct, take it to peer review and scientific journals and try to publish it and do what the climate change people did, which in fact did persuade nearly everybody because of peer review and the evidence.

Tyler is critical of how current AI risk arguments are presented. He sees many people writing long blog posts with dozens of different points instead of presenting a clear model. He argues that if an idea is truly important, it should be able to pass through the peer review process. This was the case with climate change and nuclear weapons. After nuclear weapons were invented, intense scientific and social investigation began. This helped the world understand and manage the new technology.

I am not saying we should do only peer review. I am saying when I see an idea that will not do peer review at all like the anti-vaxxers, I get very, very suspicious.

The trade-off between AI risks and surveillance

1:06:33 - 1:10:39

The rise of Large Language Models lowers the intellectual and financial barriers for bad actors to access dangerous information, such as instructions for synthetic biology or chemical weapons. Tyler suggests that while this is a concern, it has not been particularly difficult for people to find ways to create harm for a long time. He points to the sarin gas attacks in Tokyo over thirty years ago as evidence that the knowledge already exists. His deeper concern is that evil groups will use AI for something far more mundane but effective: basic management and social advice. If these groups learn how to manage their members better and avoid internal purges, they become much more dangerous organizations.

My real worry is simply that evil terror groups will turn to LLMs for basic management advice and they will say things like don't purge all your members, don't act like a jerk and 10 other things that are like beyond stupid but actually will make them far more effective.

In response to these threats, Tyler predicts an arms race where law enforcement and the good guys will use AI more effectively than bad actors. This likely leads to a world of total surveillance. While Igor expresses concern about a form of anarcho-tyranny where law-abiding citizens are surveilled while criminals find ways around the system, Tyler believes surveillance will ultimately win. This shift could represent a return to an earlier stage of human history. For thousands of years, humans lived in small villages with no privacy. The privacy enjoyed in modern big cities may have been a temporary stage in human evolution.

In some ways it will be like a return to early human society where you lived in small villages, you didn't have any privacy for tens of thousands of years or more. And it wasn't pleasant in various ways, but we're biologically somewhat used to that.

The sociological nature of AI doom arguments

1:10:42 - 1:15:14

Markets are often seen as the primary tool for acquiring information about future risks, but they can be slow to react. During the early days of the pandemic, experts recognized a high risk while markets lagged significantly. When discussing AI, Tyler points out that despite the high intelligence and incentives of people in finance and national security, very few of them buy into the idea of a rogue superintelligence destroying humanity. He suggests that belief in such extreme outcomes is largely a sociological phenomenon tied to specific social networks in San Francisco and the Bay Area.

I've never met one of those people who buys the Doomer argument. Not one. It is a person's proximity to a certain network of rationalists, San Francisco people, Bay Area communities that predicts whether they believe in the Doomer argument.

Igor suggests a different risk: the gradual loss of human oversight as more of the economy is mediated by AI agents. As decision making shifts to these systems, we might optimize for a future we did not actually want. Tyler argues that this lack of control is a fundamental part of human history rather than a new problem. To him, the trade-off for significantly more wealth and a much longer life is a bargain worth taking, even if we are not fully in charge of the outcome.

That is the whole history of civilization in my opinion. We never have been in control. I think there is this big illusion like we will be deposed somehow from controlling what happens. To me, it is no different.

AI and the risk of rising externalities

1:15:14 - 1:16:32

The speed of AI development raises concerns about how we handle existing problems like climate change. If AI increases production without a better way to manage externalities, the situation could worsen. Tyler acknowledges a scenario in which energy use rises across many dimensions, leading to severe environmental consequences.

It's not the end of the world, but it's pretty bad and ugly. And I don't rule that out. I'm not sure if it's probability, but it certainly wouldn't shock me if it happened.

While AI data centers contribute to this energy demand, Tyler suggests these problems might have occurred anyway. He does not see the AI itself as the primary culprit. Instead, there is a chance that these smarter systems could eventually help find a solution to the very problems they might temporarily aggravate.

Nuclear risks and the limits of AI persuasion

1:16:33 - 1:23:32

Economic and environmental problems like the depletion of fish populations often stem from a lack of property rights and international coordination. While AI might accelerate these trends, the fundamental need for systemic design changes remains the same with or without new technology. Tyler suggests that these collective action problems are persistent challenges we must address through better policy rather than blaming technology itself.

Nuclear war is a far more immediate and certain existential threat than hypothetical AI scenarios. It is a simple mechanism that can destroy most of the world, and there have already been several close calls. Tyler argues that the cumulative probability of a nuclear accident or conflict over many years is significant.

I think the biggest worry is simply that we have a nuclear war again. This can easily happen without AI, and that it gets out of control, just maybe by accident, even. The chance of that in a given year is quite small. But if you run through enough years, the cumulative probability becomes much higher.
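Tyler's point about compounding annual risk can be sketched with simple arithmetic: if an incident has an independent probability p in any given year, the chance of at least one incident over n years is 1 − (1 − p)^n. A minimal sketch; the 0.5% annual figure is a hypothetical illustration, not a number from the episode:

```python
# Chance of at least one incident over n years, assuming a small,
# independent per-year probability p (a simplifying assumption).
def cumulative_risk(p: float, years: int) -> float:
    return 1 - (1 - p) ** years

# Even a hypothetical 0.5% annual risk compounds substantially
# over a century of repeated draws.
for years in (1, 25, 50, 100):
    print(years, round(cumulative_risk(0.005, years), 3))
```

This is why a per-year probability that feels negligible can still dominate long-horizon risk estimates.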

The concept of superintelligence is often treated as a discrete milestone that will grant one country an absolute advantage. Tyler views this as a mistake. He believes progress will be gradual rather than a sudden event like the invention of the atomic bomb. Even if one nation develops more advanced AI, it is unlikely to magically neutralize the nuclear deterrents of other nations. Igor suggests that AI could eventually become a super persuader capable of manipulating leaders. Tyler remains skeptical of this, noting that humans are difficult to persuade to take extreme actions against their own interests.

I worry when people use super intelligence as a discrete stage that is somehow reached on a day, and then one place has this insuperable lead. I doubt if there's any clear line. And the clear line can cause you to make a lot of errors.

AI manipulation and the resilience of voter preferences

1:23:32 - 1:28:43

Tyler argues that many significant political events are the result of normal political miscalculations rather than foreign interference or social media manipulation. Igor suggests that AI could become a powerful tool for persuasion by finding specific phrasing that resonates with certain demographics. He believes this could lead to highly targeted manipulation of both the general public and specific leaders. Tyler is less concerned, noting that voters have fundamental preferences for things like low inflation and job security. He believes AI is unlikely to change these core desires.

Most voters have pretty gross preferences. They want low inflation, a good job, political stability, opportunity for their children. If the AIs could talk people into not wanting those things, I'd be very worried, but I really don't think they can.

Tyler also points out that AI models might be more objective than traditional media outlets. While platforms like TikTok may influence which topics become popular, this reflects a long-standing issue in media where certain global events are given more attention than others. Tyler suggests that even if AI bias exists, it is a minor worry compared to the potential for AI to provide a more neutral perspective than partisan news sources.

AI diffusion and societal adaptation

1:28:44 - 1:30:47

The security dynamics of offense and defense are not likely to spiral out of balance just because of AI. AI improves both sides of the equation. If humanity cannot handle the presence of more intelligence, it is likely doomed regardless.

If we can't handle having more intelligence around, I strongly suspect we're doomed anyway.

Society functions as a complex adaptive system. While one area might grow rapidly, the system generally finds ways to adjust. Igor suggests that if technology moves too fast, the rest of society may not have time to adapt. However, Tyler views diffusion as being slow enough to allow for these adjustments. This slow pace gives governments and ecosystems the necessary time to respond. Rapid diffusion is typically limited to self-contained systems.

The difference between obsession and competitiveness

1:30:47 - 1:34:32

Tyler describes himself as obsessive rather than competitive. He lacks the urge to compare himself to others or feel mistreated when someone else succeeds. This lack of competitiveness makes him happier and more productive because he avoids negative thoughts about social media status or relative success.

I don't wake up in the morning feeling mistreated or like someone else did better or they have more Twitter followers. All that is so foreign to me. I wake up every morning, I'm productive, I'm pretty happy. I get to it. I just don't have those negative thoughts.

This lack of a competitive drive led Tyler to quit chess early. He realized he did not truly hate losing, which is a common trait among elite players. For champions like Kasparov or Magnus Carlsen, losing feels like torture. Tyler found he was too philosophical about defeat, which suggested he should pursue a different path.

I quit chess young in part because I realized I wasn't competitive enough. I didn't truly hate losing. And that doesn't work. You see like Kasparov, Magnus, man, losing to them, it's literally a torture. I was very philosophical about losing, and that means you should be doing something else.

Igor notes that most top poker players share this visceral hatred of losing. However, this drive can become a trap where an identity is tied entirely to the game. When Igor stepped away from professional poker and returned later with fresh eyes, he found he could play better. By letting go of the pressure to be a perfect game theory player and relying more on intuition, he rediscovered the beauty of the game without the burden of his old identity.

Competition and the role of AI in social media well-being

1:34:32 - 1:39:57

Competition is an innate part of human nature. Some people are born more competitive than others, and trying to change that distribution might not be the goal. While extreme competition can lead to zero-sum dynamics, it also produces public goods when individuals use that drive to create. Tyler prefers low-stakes ways of getting to know people, such as traveling or deep conversation, rather than structured games. Traveling is a particularly effective way to see a person's true character.

I'm not looking to dampen it really. It gives a lot of external public goods when people create. I don't think people should feel bad about being competitive or too competitive. It's a way of being. There's some value to that authenticity.

Social media often highlights the negative side of competition, especially when people compete for followers or status. This creates an external locus of evaluation where self-worth is tied to how one compares to others. Igor suggests this leads to a race to the bottom that harms well-being. However, social media also serves the essential purpose of keeping people in touch with their networks. Much like family relationships, these ties might not always make a person happy in the moment, but they are still desired and necessary.

AI might offer a way out of these addictive or negative social patterns. Instead of being controlled by algorithms designed for maximum engagement, individuals could use their own AI to monitor their behavior. This personal AI could identify which habits lead to unhappiness and offer to take over those tasks or suggest changes.

It'll monitor your social media presence, tell you what you're doing, makes you less happy, and then offer to do some of it for you. I'm not convinced that's what we'll get, but it's one of the hopes I hold.

There is a concern that engagement-focused algorithms will always beat health-focused ones because they capture more revenue. Tyler suggests that in the future, the feeling of needing to stay current to avoid missing out will fade, since AI will be able to provide digests of relevant information on demand. This shift could force social media platforms to focus more on making users feel good rather than just keeping them trapped.

The win-win mentality of the United States

1:40:00 - 1:41:36

Cultures vary in how they view success. Some operate with a zero-sum mindset where one person's gain is seen as another's loss. Others embrace a win-win mentality: people feel genuine happiness at the success of others and excel at coordination. Of the more than 100 countries Tyler has visited, the United States stands out as the primary example of this win-win spirit.

There is no other place where people are okay with just people can be very ambitious, explicitly so. And other people are actually in favor also and want someone else to win.

This cultural trait is rooted in the founding principles of life, liberty, and the pursuit of happiness. When a society supports individual ambition, it leads to the growth of the entire pie rather than a struggle over how to divide existing resources. This openness to the success of others creates a unique environment for opportunity and wealth.

Tyler Cowen on UAPs and the probability of alien life

1:41:38 - 1:44:43

Tyler considers the possibility that UAP data represents alien drone probes. While public evidence is often weak, private conversations with pilots and individuals with high security clearances suggest phenomena that defy easy explanation. Tyler notes that with so many habitable planets, it is not unlikely that we are being observed by machines from another world.

The evidence I see in the public sphere is not at all convincing. But when I talk to people with very high security clearances or people who have flown F-16s or done other related things, they tell me things that cannot be accounted for.

The conversation turns to the Fermi paradox and the likelihood of advanced civilizations. Igor mentions research suggesting a high probability that humans are alone in the universe. Tyler remains skeptical of these findings and believes there is likely life elsewhere, perhaps even on the moons of Saturn or Jupiter. He argues that the government likely does not have all the answers and that the public should remain open to unexplained data.

We do not know often when someone is communicating with us. I just think it could go either way and there is clearly some data we cannot explain. I do not think there is any conspiracy. I think the government itself does not know and we ought to keep an open mind.

Tyler Cowen on skepticism and conspiracy theories

1:44:43 - 1:46:13

Tyler rejects most unconventional or supernatural beliefs. He does not believe in UFOs, ESP, or other unexplained phenomena. Even regarding the Kennedy assassination, he remains unconvinced of a second shooter. His lack of belief in conspiracies is rooted in the practical difficulty of keeping secrets within the United States. When the government released the Kennedy files, no evidence of a secret plot appeared.

I hardly believe in any conspiracy theories. If you're talking about totalitarian countries, I do think there can be many more conspiracies, but they're not things that people here talk about. The ones people talk about for here, it's just too hard to keep secrets. They dumped the Kennedy files. We didn't see anything. I don't believe in much in that direction.

While Tyler is skeptical of domestic conspiracies, he views totalitarian countries differently. In those environments, secrets are easier to maintain. He suggests there is a reasonable chance that Joseph Stalin was poisoned. For Tyler, the only major domestic conspiracy he finds plausible is that Pfizer may have timed vaccine announcements to influence the 2020 election results.

Tyler Cowen on the flaws of dual process theory

1:46:14 - 1:46:54

When asked to participate in a series of rapid-fire predictions, Tyler expressed skepticism toward the popular psychological framework of System 1 and System 2 thinking. This model, which distinguishes between fast, intuitive thought and slow, analytical processing, is often used to describe how people make decisions. Tyler rejects the division entirely, suggesting it does not reflect how he or most other people actually function.

I do not believe in system one and system two. You are just going to get the only system I have got. Most of the Kahneman research has not held up well.

This critique points to broader issues in the field of behavioral economics. Much of the work popularized by Daniel Kahneman has faced replication issues or has otherwise failed to remain robust under scrutiny. Rather than switching between different cognitive systems, Tyler argues there is simply one integrated way of processing information and making judgments.

Tyler Cowen on AI milestones and future economic growth

1:46:56 - 1:49:52

Tyler predicts a high likelihood that extremely small teams will build billion-dollar companies in the near future. While companies like Midjourney have already reached massive scale with fewer than ten people, the threshold is dropping further. He puts the probability at 84% that a unicorn startup with three or fewer people will exist by 2035, and at 76% that there will be at least ten such companies by that date.

You know, we had Midjourney with seven or eight people, which is more than three, but we're almost there already.

Artificial intelligence is also expected to reach major academic milestones quickly. Tyler gives a 75% chance that an autonomous AI agent will publish a peer-reviewed paper in a top science journal with no human co-authors by 2030. The timeline for AI gaining significant global influence is much longer, however: he places the probability of AI systems exercising more than half of effective global decision power at less than 1% for 2050, rising to 15% by 2080.

Economic growth in the United States is unlikely to see a massive surge in the coming decade. The probability of real (inflation-adjusted) GDP growth averaging above 4% between 2030 and 2035 is quite low. Such high rates typically occur only as a bounce back after a major depression or natural disaster, similar to the recovery after the pandemic. Without a crisis to rebound from, expected growth is closer to 2.7%.

Predictions on AI unemployment and life expectancy

1:49:52 - 1:51:45

The risk of US unemployment topping 8% specifically because of AI or robotics by 2030 is extremely low, likely below 1%. While high unemployment could occur due to monetary policy errors, technology alone is unlikely to drive such a spike in the near term. US life expectancy is also expected to continue its upward trend. While a median life expectancy of 80 by 2030 may be too soon to predict, young people living today have a high probability of reaching age 90 by the end of the century.

I think young people today will see that with a pretty high probability. 80%. You've got to rule out nuclear war accidents. But I think most people will die of old age if we allow them to avoid true catastrophe.

Regarding global events and controversies, Tyler places the probability of the Covid lab leak theory at 50%. When discussing the death of Jeffrey Epstein, he estimates a 93% probability that it was a suicide rather than a conspiracy.

Tyler Cowen on the probability of UFO claims

1:51:46 - 1:52:14

Tyler considers the likelihood of secret government programs retrieving non-human spacecraft to be nearly zero. These claims include the idea that the US government holds multiple craft and even the remains of deceased pilots. Tyler describes the probability of these events being true as minuscule.

The probability that that is true. Minuscule. Too small to even calculate.

This perspective reflects a deep skepticism toward recent reports of extraterrestrial technology being hidden by the state.
