
EconTalk

AI, Employment, and Education (with Tyler Cowen)

Mar 30, 2026 · 23 min read

Tyler Cowen and Russ Roberts discuss why artificial intelligence is a vital tool for the future of work and higher education.

Cowen explains how AI can act as a personal mentor and why students should spend more time learning these tools to stay relevant.

This transition allows people to focus on their unique human strengths and find new ways to thrive in a changing economy.

Key takeaways

  • AI is already increasing leisure time by allowing workers to finish tasks more quickly, often without their employers realizing the increase in efficiency.
  • Open regulatory processes face the risk of being overwhelmed by high-quality but pointless AI-generated spam.
  • The primary risk of AI is its unpredictable impact on governance rather than direct economic collapse.
  • Automation in industries like trucking may happen more slowly than expected because human jobs involve complex tasks beyond the primary function, such as managing cargo and logistics.
  • The greatest political risk from AI may be the displacement of the upper middle class, as influential professionals face significant salary reductions and career shifts.
  • The true value of AI in education is using it to discover the right questions to ask for different contexts.
  • A radical new model for grading involves having one AI evaluate a semester-long chat transcript between a student and a different AI tutor to measure learning progress.
  • The traditional fifteen-week semester is an artificial constraint that AI-driven learning can break by allowing students to move at their own pace.
  • Writing specifically for AI allows individuals to build a digital model of their own thinking that can be used by others in the future.
  • To manage AI cheating, schools can use occasional proctored sessions to establish a performance baseline for each student.
  • AI is likely already better than the average human at conducting interviews, though it has yet to surpass the very best human evaluators.
  • Many people underestimate AI capabilities because they only use free versions, which are significantly less powerful than high-end models.
  • Focus on messy jobs that require face-to-face interaction and non-routine problem solving to stay valuable in an AI-driven world.
  • Just as people at the start of the Industrial Revolution could not have predicted the job of a podcaster, we cannot yet see the unique roles that will emerge from the AI transition.
  • Writing instruction should include assignments that require AI to push for higher quality and assignments that ban AI to develop independent thinking.
  • Colleges can use AI to offer niche subjects that lack dedicated faculty, allowing students to explore specific interests at zero marginal cost.
  • AI will likely improve education by optimizing existing tutoring methods through data analysis rather than through specialized ed-tech software.
  • Professionals can adapt to AI by shifting their focus to activities that require a human presence, such as live events and podcasts.
  • Colleges should devote a third of their curriculum to AI because nearly every future job will require AI literacy.
  • AI will likely increase the number of billionaires, but the formation of new companies will create enough projects to prevent mass unemployment.


The impact of AI on job markets and leisure

02:31 - 04:53

While many worry that AI will lead to mass unemployment, Tyler argues that it will instead shift where jobs exist and what individuals do. The energy sector will likely expand because AI requires immense amounts of power. Similarly, the biomedical field will see growth. While AI can design new drugs and medical devices, humans are still required to test them and navigate the regulatory process. We are currently in a position similar to the start of the Industrial Revolution. It is difficult to predict future roles because the industries of the future do not exist yet.

If you go back to the early days of the Industrial Revolution and you tell people all these agricultural jobs are going away, would you have two people sitting around the campfire saying, 'Oh, yes, a lot more of us will become podcasters?' They would have no idea. So we are in that same position.

Counterintuitively, the legal profession may also grow. Governments will need to write many new laws to regulate AI. Human lawyers will likely lead this effort, even if they use AI as a drafting assistant. There is also an immediate effect on productivity that often goes unnoticed. Many workers use AI to finish their tasks more quickly, creating more leisure time that they do not report to their employers.

AI's role in governance and regulation

04:54 - 06:22

Russ questions whether AI can handle the trade-offs inherent in governance. Governance is often seen as an optimization problem. However, it fundamentally involves human values and compromises. Russ notes that Sam Altman has suggested governance must improve as AI becomes more central to the job market. Russ remains skeptical about whether AI can manage these human concerns.

I don't think AI is going to be good at that kind of thing. Trade-offs, the kind of things we're going to care about as human beings... You don't optimize governance. Governance is almost inherently about trade-offs.

Tyler argues that current AI models like Claude or GPT might provide better governance answers than many current leaders. He expects AI to act as an aid rather than a replacement. A significant challenge is whether humans will actually follow the advice provided by these systems. Tyler believes the short-term impact on governance might even be negative.

There are immediate negative consequences to consider. The process for regulatory comments is already being overloaded by AI. These systems can generate high quality but ultimately pointless comments that overwhelm open feedback processes. This leads to an increase in digital spam within the regulatory apparatus.

I think in the short run some governance will be worse. Just imagine the process for regulatory comments being overloaded by high quality but pointless AI generated comments. I think we're already seeing this. So there'll be a lot more spam.

The impact of AI on governance and the economy

06:24 - 07:57

The primary concern regarding AI is how it might change governance in ways that are hard to predict. Tyler notes that political models are generally less reliable than economic models. If governance suffers because of AI integration, the economy will suffer as well. This remains a significant worry because the outcome of political shifts is difficult to map out.

My biggest worry is that AI will change governance in ways that are hard to predict. We have worse political models than economic models in general. It is possible governance becomes worse and if governance becomes worse, that is bad for the economy.

Regarding standard economic mechanisms, the outlook is more optimistic. AI is expected to create more wealth rather than destroy jobs. People at the lower end of the economic spectrum may gain access to services for free or at very low costs. While there will likely be more billionaires due to the ease of building large companies with small teams, these new ventures will create projects and jobs. Mass unemployment is not a likely outcome.

The challenge of governance and the future of work

07:58 - 11:21

Tyler identifies governance as his primary concern, regardless of whether AI is part of the equation. If political systems deteriorate, economies naturally follow. While economic models like the price system or comparative advantage offer a framework for understanding markets, politics remains far more unpredictable. There is no clear model for how voters will react to AI or what they will demand from their leaders.

If politics gets worse, economies become worse. There are plenty of negative mechanisms operating today. Most of them do not have to do with AI. But if you add AI into that mix and just see it as a big change where the people in charge may not regulate it well, there are just so many scenarios where things politically get worse.

Russ considers the possibility of a world with significantly more leisure time. He points to Keynes's famous 1930 essay, "Economic Possibilities for our Grandchildren," which correctly predicted a massive increase in wealth but failed to foresee that people would continue working just as hard. Tyler suggests that even as automation advances, many jobs involve more than a single mechanical task. A truck driver, for instance, does more than just steer a vehicle. They must also load cargo and manage various points of contact along a route.

The transition to technologies like driverless cars will likely take closer to a decade rather than a few months. Human drivers might remain competitive because automation systems often carry higher marginal costs than expected when forced to handle every possible edge case. For those who want to remain active, the ability to manage complex projects will increase significantly.

If you want to work, your chance to control, manipulate and manage projects will be far, far higher than it ever has been in the past. And that will keep us busy whether it is for earning monetary income or not.

The economic and political impacts of AI disruption

11:22 - 16:05

Driverless cars represent a significant shift in transportation that could make many services cheaper and more efficient. While current models like Waymo are subsidized, the eventual accumulation of data by networks like Tesla could lower marginal costs even further. If driving jobs disappear, the overall economy may benefit from the resulting lower prices for goods and services. The transition to an AI-driven world will likely be positive as long as there is room for economic growth and human self-improvement.

It is relatively hard to get to a conclusion where most people are worse off. The goods and services are sold, if need be, their prices fall. The production, marketing, distribution of the goods and services generates income of its own.

Tyler argues that Say's Law is likely to hold in an AI-enriched world. This means the production of new and fascinating things will generate the income necessary for people to purchase them. However, Russ notes that work provides meaning beyond just income. A major concern is how the labor market will shift for highly paid professionals. Upper-middle-class workers, such as consulting partners, might see their incomes drop significantly as AI automates their tasks. This group is politically influential, and their displacement could create significant social tension.

Politically they will hate that. And rather than being say a consulting partner for 1.4 million dollars a year, they will be sent to Houston and they will earn 300,000 dollars a year. And politically they are a very influential group. So that I don't know how we survive that.

The consulting industry may experience a short-term boom as companies seek help integrating AI. In the medium term, the demand for these services will likely fall. While clients value talking to real humans about their organizations, firms may eventually provide those services with only a third of their current staff.

Integrating AI into the college curriculum

16:06 - 19:32

Higher education often moves slowly and resists change. Tyler suggests that a third of college course time should focus on using AI. This is not meant to replace traditional subjects. Instead, students should use AI to enhance their understanding of existing topics like Homer's Odyssey or biology.

When you're teaching people how to use AI well, it is not at the expense of teaching them other things. You can teach them how to use AI to better read and understand Homer's Odyssey. You are teaching them Homer's Odyssey at the same time, but you are teaching them the combination of Homer's Odyssey and AI.

The goal is to prepare students for a future where almost every job requires AI skills. Currently, these skills are scarce and rarely taught formally. A major obstacle is that many professors and administrators do not yet know how to use these tools themselves. Instruction should go beyond simple prompting. In a biology class, this might mean learning how to integrate AI systems into a laboratory setting. Often, the biggest hurdle is simply reminding people to use the tool in the first place.

Using AI to master the art of questioning

19:32 - 21:46

AI is a practical tool for managing the complexity of classic literature like the Odyssey. It can quickly generate character lists, track appearances in specific translations, and summarize traits. This helps readers maintain their focus on the narrative rather than getting lost in the details of a sprawling cast.

For me, the biggest value of it in reading the Odyssey is saying, give me a list of the characters, tell me what page they first appear on in Fagles's translation and tell me what their main characteristics are so I can keep them straight.

The deeper benefit of these tools is their ability to help us ask better questions. While an expert might automatically wonder about the oral tradition or the economic role of oxen in ancient Greece, most people need guidance to identify those lines of inquiry. Tyler points out that we can actually use AI to teach us how to question more effectively. By providing structure, we can ask the AI what questions are most relevant for a specific field, like archaeology, and learn how to follow up on those ideas.

You can learn it using the AI. I would ask them, what are the good questions? It is kind of easy, but you need to put more structure on it, like, what are the good questions? For which purposes? How do I follow up?

Managing AI and writing in education

21:48 - 26:17

The challenge of AI cheating in schools is not that it is impossible to detect, but that institutions are often unwilling to take the necessary steps to stop it. Tyler proposes a simple solution: require students to complete a small fraction of their work in a controlled, proctored setting. If a student's performance at home is significantly better than their performance in the room, it serves as a clear signal to investigate further.

But just take students and for 2 or 3% of their output over the course of their college career, lock them in a room and test them. And if what they're handing in and how they do on the test diverge dramatically, just call them in for a chat. I'm not saying send them to jail, but look into the matter.

This approach creates a strong incentive against cheating without needing to expel students or ruin their future careers. It also avoids the difficult task of trying to distinguish between a student's own voice and an AI's style, which will only get harder over time. Instead of viewing AI purely as a threat, Tyler suggests splitting the curriculum. Some assignments should require the use of AI, with the expectation that the final product will be held to a much higher standard. Other tasks must be done entirely without digital assistance to focus on the development of the mind.

Most people may not need to know how to write on their own for its own sake, but they will need to know how to think. And writing is a great path to thinking.

Writing and thinking are deeply linked. For many, ideas do not simply appear while showering or during idle moments. Instead, the act of writing or talking through a problem is what generates the insight. Russ notes that his ability to think comes directly from his ability to write. Using AI to skip the writing process might inadvertently skip the thinking process as well.

Using AI to expand the college curriculum

26:18 - 33:01

Education is often most effective when students struggle together to understand difficult texts like the Iliad or Aristotle's ethics. This shared struggle in small groups allows for a deep internalization of wisdom that a standard lecture might miss. While this traditional model is powerful, there is a case for introducing more competition and variety into the curriculum by using AI. Just as Adam Smith argued for competition between professors, colleges could allow students to take one class driven entirely by AI to see how it compares to the human-led experience.

Each year, let a student take one class with AI. No more than one. Just one. Or even one every two years. Just once and see what they think. If they don't like it, they'll just stop doing it. This is Adam Smith's point. Let people in your institution just try it once.

In an AI-driven course, the technology can handle everything from generating a syllabus and reading list to creating and grading quizzes. Tyler suggests a model where a human coach helps structure the class without needing to be a subject matter expert. A more radical approach involves a student simply chatting with an AI for fifteen weeks about a specific topic. At the end of the term, a second AI could review the entire chat transcript to determine what the student learned and assign a grade based on the depth of the conversation.

This method offers a way to provide niche subjects that a small college might not normally have the faculty to teach, such as Tudor England. AI can also adapt its teaching style to the student's level, acting as a personal tutor that can explain concepts as if to a novice or a college freshman. By allowing students to experiment with these tools, institutions can use a market-style discovery process to see where AI adds the most value and where human warmth remains essential.

The role of AI in redesigning higher education

33:01 - 37:50

AI offers a unique advantage because it never tires and is always ready for interaction. While using it for an entire semester might seem challenging, it excels at transferring information. For example, understanding how a cell works requires specific information that an AI can help provide. However, reading a text like the Iliad often benefits from human connection, especially for students who have experienced war. Learning alongside others who share similar experiences adds a layer of depth that a machine might miss.

The coach might be cheaper than somebody with a PhD in Classics. They won't insist on all kinds of other treatment, and they won't insist on passing on their own pet theory of the Iliad.

The current fifteen-week semester structure is highly artificial. It forces teachers to fill time even when a topic might not require it. Tyler suggests that a model using AI allows for much more flexibility. A group of students could read a text in six weeks and move on to the next subject immediately. This model does not require students to learn in isolation. Groups can still meet to discuss their readings, and the AI can record these sessions to provide feedback or check for factual errors.

This technology also allows for personalized experiences that were previously impossible. Because so much data exists for certain individuals, an AI can create an excellent model of their thinking. Students could essentially take a class with a specific person, asking the AI what that person would say about a particular topic. This opens the door for every student to have a tailored educational experience with a virtual version of a leading expert or professor.

Adapting to AI in education and professional life

37:51 - 39:10

Tyler uses AI as a pedagogical tool in his History of Economics class. He recently assigned his students to learn the Ricardian model using AI before he lectured on the same topic. The goal was for students to compare the experience of learning from a machine versus a human. This highlights the unique value of face-to-face communication that AI cannot yet replicate.

I gave them an assignment last week. I said, use AI to teach yourself the Ricardian model. And then I said, this week, I'm going to go in and I'm going to teach you the Ricardian model. I want you to mentally compare how it did and how I did. That is a big part of the lesson.

Russ suggests that Tyler might still have the edge when it comes to humor, but Tyler acknowledges that many of his current advantages may fade over time. To stay ahead of AI competition, Tyler has already shifted how he spends his time. He focuses more on podcasting and personal appearances because AI competes more effectively with his writing than with these personal mediums. He has already adapted about half of his professional schedule to account for this shift.

I do much more podcasting because of competition from AI, which competes with my writing more than my podcasting. And I do more personal appearances, which the AI can't do at all. I have adapted at least half of my time usage already.

The shift from studying a subject to becoming something

39:11 - 45:02

Russ and Tyler discuss the future of AI in the classroom. Tyler believes specialized education apps will not be the standard. Instead, people will likely stick to basic foundation models. He teaches his students to use the base models directly because that is what they are already comfortable using.

I don't think the bells and whistles on top will be the equilibrium. So when I teach using AI, I just stress, not here is some company with a neat little thing that will walk you through, talk you through. Just here is the base model. Here is how to use it. That is what I think we will be doing.

Russ envisions a radical university model where students design their own curriculum. There are no fixed semesters or four-year limits. AI handles the tutoring while human coaches provide guidance and social interaction. This shifts the focus from learning facts to a deeper personal transformation. Russ points out that modern education often tells people what experts think rather than teaching them how to think for themselves.

Tyler suggests that AI can take existing tutoring systems and make them much more effective. By feeding data into AI, educators can finally see exactly what works. This data-driven approach allows for constant improvement that was not possible before. The goal is to move beyond the simple acquisition of knowledge toward a deeper mastery of how the world works.

The acquisition of wisdom is not the focus of most education. The phrase I used before is worth thinking a lot about: to become something rather than to study something.

AI and the future of educational certification

45:03 - 47:40

The goal of education at Shalem College is to help students figure out what they want to become. This goes beyond just teaching skills. Many students arrive with an inchoate ambition to improve their country. The aim is to give them the education they need to make that difference. This is a unique goal compared to most American institutions.

It is a fabulous goal. It is what everybody would want if they believed it would work and if they believe they could still get a job. Our students do, and they do very well.

In the real world, learning does not happen in fifteen-week batches. People pick up new things as they go. They stop and start based on their own interests. Experts learn this way, yet they insist that students follow a rigid schedule. This happens because schools are in the business of certification. They must prove to employers that a student is competent. This need to certify often contaminates the learning process.

Tyler suggests that AI will eventually handle certification better than any school. A person could spend a day with an AI that tests their knowledge and character. This would provide a better look at their strengths and weaknesses than traditional letter grades. While schools may want to keep education and certification bundled together, Tyler believes that AI will eventually force them apart.

The AI can outcompete us in certification easily. We are not doing that yet, but it is the future equilibrium. Just have a person spend a day with the AI. You will get great certification. Strengths, weaknesses, temperament, what they know, what they don't know. It is way better than these A's and B's.

Selecting for leadership and the role of AI in interviewing

47:42 - 49:59

Russ explores how to select for leadership by focusing on two main traits: intellectual aptitude and the ambition to improve the country. While a former Israeli military official suggested testing candidates by putting them in the woods for three days, Russ looks for more practical ways to evaluate talent. This leads to a discussion with Tyler about the role of technology in scouting for his philanthropy project, Emergent Ventures. Tyler observes that candidates are already using AI to predict his interview questions, which requires him to constantly come up with new ways to challenge them.

I think AI soon will be better than most human interviewers. It may well be already. I'm not sure it will soon be better than the best human interviewers. But again, if it beats most, we've gotten somewhere.

A potential issue with using AI for interviews is its tendency to be overly polite or obsequious. Russ points out that an interviewer needs to be objective rather than just agreeable. Tyler explains that this is a simple matter of configuration. Users can change the mood or persona of an AI to ensure it provides the necessary friction. Teaching people how to adjust these AI moods will likely become an essential part of modern education as the diversity of available models grows.

The gap between personal joy and social consequences in AI

50:02 - 51:17

The initial excitement of using new technology often masks potential long-term risks. Social media followed this pattern, where the personal pleasure of scrolling eventually gave way to concerns about how it might corrupt institutions. AI could face a similar trajectory. While the current experience of using it is extraordinary, it is important to consider if the social consequences will be as attractive as the personal benefits.

I get the kind of pleasure from using AI that I used to get when I first started using social media. And then I found out that social media is ruining the country and corrupting our institutions.

Most people currently have a limited view of what these tools can actually do. While there are many users, the vast majority rely on free versions. There are very few people using the high-end models, yet the capabilities between the two are very different. Understanding the true power of this technology requires moving beyond the basic models most people see.

Growth, progress, and the cost of resisting change

51:19 - 55:02

Technological change is often met with resistance because people generally dislike disruption. However, historical shifts like the Industrial Revolution prove that while change is unsettling, it is also the primary driver of human progress. AI will likely follow this pattern. It will force a change in expectations about careers and the future, but the cost of stopping its advancement is simply too high for most societies to manage.

I said once in some other interview, the more people are upset, the better we will know that things are going. That was tongue in cheek, but there is some truth to that. It will just change expectations about what jobs will be like or what future your kids will have in a way that the people who are clued in will find quite unsettling.

Growth involves more than just the accumulation of material objects. It is the engine behind creativity, opportunity, and the liberation of women. There is a 0.95 correlation between GDP per capita and non-material gains like human rights and dignity. Poor societies are often less tolerant and offer fewer rights, making wealth a prerequisite for a better quality of life in every sense.

Very poor societies typically do not have a lot of tolerance, do not grant rights to women very readily. They are worse places to live, not just because they don't have the flat screen televisions, they are worse on human rights and dignity and most of the other things we care about.

For certain nations, technology is a requirement for security. Tyler argues that a country like Israel must remain on the cutting edge of AI to ensure its survival. While most small countries cannot maintain such high levels of innovation, those that do will see their geopolitical standing shift radically as they integrate these new tools. If a nation in a volatile region fails to adopt AI, its future is in jeopardy.

Advice for a career in the age of AI

55:02 - 58:36

Russ reflects on the changing value of a college degree. He notes that advice once considered essential, such as learning to code, is now outdated. Tyler suggests that young people should instead focus on AI and pursue messy jobs. These are roles that are hard to automate or turn into a formula. They typically involve a high degree of face-to-face interaction and the resolution of complex human issues. Human skills like empathy and listening will become even more precious as AI handles routine tasks.

In the AI world the premium will be on messy jobs where you do many different things that cannot be routinized or turned into a formula, and that involve a lot of face-to-face contact in solving difficult problems with or caused by other human beings.

People still crave creativity and the ability to grow in their careers. This mindset applies to everyone, not just those starting out. Tyler emphasizes that we should be willing to reallocate our own time and energy to stay relevant. He has done this by prioritizing travel and in-person meetings. Tyler believes that before advising others, we should attempt to live by that same advice.

Try living your own advice and then maybe give some more.

Tyler Cowen on writing for AI and digital models of thought

58:37 - 1:01:39

Tyler explains that he now writes specifically for AI models. He considers them his most patient and thorough readers because they understand the background of his work. By providing this information, he is building a digital model of his own thoughts. This model helps the AI understand what he wants and how he thinks. Tyler believes this will be useful for him now and for others in the future.

I write for the AIs. I think, what do the AIs need to learn and what do they need to learn about me? They are my best readers. They are very patient. They always understand the background to what I am saying.

Despite his trust in these systems, Tyler maintains strict boundaries regarding privacy. He does not share his emails or personal life details with AI. He worries about potential risks like data hacks or companies going bankrupt. He limits his interactions to scientific and intellectual topics. For example, he uses AI to help him read Homer's Odyssey rather than for sensitive job information.

I would not put detailed information about your personal life into the AIs. I have a pretty high degree of trust in those systems. But things can change. A company goes bankrupt or gets hacked. For the time being, I just wait.

Tyler often shares his AI interaction logs with others. When people ask him questions, he sometimes sends them a response generated by a model. He feels this is a modest approach because the AI often provides a better answer than he could. Russ suggests that Tyler should publish these logs publicly because people would find his daily interactions with AI fascinating.