Economist and former EU parliament member Luis Garicano explains the massive macroeconomic shifts AI will bring, from the future of work to global interest rates.
He shares his deep concern that the European Union's regulatory approach will prevent the continent from reaping these benefits when it needs them most.
Key takeaways
- The most significant productivity gains from AI will occur not from assisting humans, but when AI becomes fully autonomous in a task, creating a discrete, sudden jump rather than a gradual improvement.
- A massive AI productivity shock could paradoxically lead to a short-term recession. While welfare might increase due to free services, the sudden obsolescence of jobs and capital could cause GDP to fall until resources are reallocated.
- Traditionally, apprentices pay for on-the-job training by performing menial tasks. AI devalues this 'currency' by automating those tasks, making firms reluctant to hire and train new workers.
- The future of on-the-job training depends on a critical ratio: if AI enhances expert productivity more than it replaces apprentice labor, the incentive to train future experts will remain.
- To effectively correct an AI's work, you must be more skilled than the AI. This 'supervision threshold' poses a challenge for learners who may not be able to spot AI's errors, thus hindering their development.
- AI can be an 'enemy' to education when it allows students to bypass foundational concepts, but it becomes an 'obligatory' and powerful tool for advanced, project-based work.
- A potential solution for integrating AI is a hybrid model: enforce traditional, device-free learning for basic skills, while mandating AI tools for complex analyses that were previously only possible at a PhD level.
- The idea that we can simply direct AI development is complicated by the strategic competition between nations like the U.S. and China, which creates a 'runaway train' effect.
- AI risk isn't just about extreme scenarios; it's also about the gradual, subtle misalignment of many AI systems that could slowly push society off course.
- Europe's best strategy in AI is to be a 'smart second mover', focusing on the implementation layer while free-riding on the massive hardware and foundational model investments made by the US and China.
- To prevent US tech giants from capturing all the value, Europe must use regulation to enforce data portability and interoperability, avoiding the same lock-in that created monopolies in social media.
- Europe has a poor error-correction mechanism for legislation, as the same institutions that pass laws are tasked with undoing them, making it politically difficult to reverse course.
- AI is expected to increase both economic growth and interest rates. A major risk for Europe is that it will face higher global interest rates without capturing corresponding growth if it resists AI adoption.
- For 40 years, high demand for assets and slow growth led to falling interest rates. AI could reverse this by boosting productivity and investment, which would increase asset supply and drive interest rates up.
- Plummeting fertility rates, even in developing nations, create a future labor shortage, especially in the care economy. AI can help fill these gaps and sustain economic growth.
- The biggest impact of AI may not be as capital, but as a tool to accelerate the production of ideas and drive R&D, which is crucial for growth in a world with a shrinking workforce.
AI's potential for short-term growth spurts and macroeconomic disruption
While long-term exponential growth from AI might be slowed by various bottlenecks, a shorter-term growth spurt of 5-10 years is plausible. Luis Garicano distinguishes between non-autonomous AI, which simply assists humans, and autonomous AI, which can operate independently. As long as AI requires human supervision, the human remains the bottleneck, yielding only incremental improvements. However, the moment AI becomes autonomous in a field, it creates a discrete, significant jump in productivity.
This could lead to a scenario where progress seems slow for a long time, and then a sudden, major productivity boom occurs in a specific profession. For example, customer service chatbots are moving from assisting junior agents to handling many tasks autonomously. This allows for a layered system where AI manages routine tasks, while human experts handle exceptions, creating massive leverage and a potential growth spurt.
The short-term macroeconomic effects of such a shock are complex and potentially counterintuitive. Consider a thought experiment where services in one economic sector, like medicine or law, become free due to autonomous AI. This would create a huge increase in consumer surplus and overall welfare. However, it would also have disruptive effects. The labor and capital in that sector would become obsolete, requiring a difficult and slow reallocation to other parts of the economy. A neurosurgeon's specialized skills, for instance, don't easily transfer to a different field.
In the short run, this could lead to a recession, even amidst rising welfare. The part of GDP from the automated sector would drop to zero because its price is zero. Widespread job losses and uncertainty could cause a demand-side shock, as people reduce their spending. This creates a paradox where society benefits from free, high-quality services, but the economy experiences a downturn until resources are successfully reallocated.
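The measurement paradox here can be made concrete with a toy two-sector economy of my own construction (the numbers are illustrative, not from the episode): because GDP is valued at market prices, a sector whose price falls to zero vanishes from GDP even though households keep consuming its output.

```python
# Toy illustration: measured GDP vs. welfare when autonomous AI makes one
# sector's services free. Prices and quantities are made-up assumptions.

def gdp(sectors):
    """GDP at market prices: sum of price * quantity across sectors."""
    return sum(price * qty for price, qty in sectors.values())

before = {"goods": (1.0, 100), "services": (1.0, 50)}  # (price, quantity)
after  = {"goods": (1.0, 100), "services": (0.0, 50)}  # services now free

print(gdp(before))  # 150.0
print(gdp(after))   # 100.0 -> measured GDP falls by a third,
# even though the same services are still consumed. Welfare rises (the whole
# area under the demand curve becomes consumer surplus), but the statistics
# register a contraction until resources are reallocated.
```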
AI may replace junior workers while creating superstars
A key question surrounding AI is whether it will be good or bad for less skilled workers within skilled professions. There appears to be conflicting evidence on this topic. Luis Garicano highlights this tension by joking about a "Brynjolfsson versus Brynjolfsson" scenario, referencing two papers by the Stanford economist.
One perspective suggests AI helps junior workers. A study on AI chatbot assistance for customer service agents found that it significantly increased the productivity of the most junior employees. They learned faster, provided higher quality service, and their skills improved even after the tool was removed. Similarly, a field experiment with software developers showed that AI tools led to large productivity gains, particularly for junior staff. These findings suggest AI acts as a complementary tool that enhances skills.
However, aggregate data presents a more concerning picture. One recent paper on "seniority-based technological change," which analyzed 62 million workers, found that AI-exposed occupations saw a significant drop in junior employment, primarily due to a reduction in hiring. Luis notes that tasks once performed by junior researchers, like deep research or creating PowerPoint slides, can now be automated. Another paper, nicknamed "canaries in the coal mine," found similar drops in employment for workers aged 22-25 in fields exposed to AI.
Luis suggests two ways to reconcile these contradictory findings. The first is the distinction between non-autonomous and autonomous AI. Initially, AI acts as a non-autonomous tool that complements human skills. But as it becomes more powerful and autonomous, it can replace the very workers it once assisted.
I'm a better customer support agent. Oops, I don't have a job because the AI has been helping me become better until the moment AI is sufficiently better that I am not needed anymore.
The second reconciling idea is that for top-tier professionals, AI creates a "superstar effect." A highly skilled AI developer's abilities are amplified, allowing them to reach a massive market and create enormous value, similar to how a star athlete like Messi can be watched by billions. This leads to a bifurcation in the labor market: AI substitutes for jobs at the lower end while complementing and amplifying the value of those at the very top. As AI's capabilities grow, the segment of the workforce benefiting from these superstar gains may become smaller.
AI devalues the currency of entry-level training
Entry-level workers face a dilemma. They traditionally learned valuable skills by performing routine tasks, but AI is increasingly taking over that work. This raises the question of how companies can justify hiring and training them. Luis Garicano frames this as an "AI Becker problem," referencing economist Gary Becker's theory of human capital.
The traditional model is like a master-apprentice relationship. The apprentice pays for training not with money, but by doing menial tasks like reviewing contracts or creating spreadsheets. Luis argues that AI devalues this currency. If AI can perform these basic tasks more efficiently, the apprentice has less to offer in exchange for the expert's knowledge.
The AI devalues the currency with which you pay and as a result makes the firm reluctant or the expert reluctant to get the worker in the first place.
This creates a reluctance to hire trainees. However, the viability of on-the-job training now depends on a key ratio: how much AI complements a fully trained expert versus how much it replaces an untrained apprentice. If AI significantly boosts the productivity of experts, the future value of a trained worker becomes so high that it remains worthwhile to train them, even if their initial contribution is small. The company can extract enough value from them later on to justify the investment.
If this ratio is unfavorable, the "training ladder" could disappear. This would create a major societal problem by preventing the transfer of tacit knowledge—the kind of expertise learned through experience, not from a manual. Without a pipeline of new experts, the economy could face a critical skills gap in the future, lacking the human capital needed to supervise AI and drive growth.
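The hiring decision in this "AI Becker problem" can be sketched as a simple threshold rule (parameter names and numbers are my own assumptions, used only to illustrate the ratio Luis describes): the firm trains an apprentice only if the apprentice's remaining menial output, plus the surplus the firm later extracts from an AI-amplified expert, covers the training cost.

```python
# Stylized sketch of the training decision. All parameters are hypothetical.

def firm_trains(apprentice_output, training_cost, expert_surplus,
                ai_substitution, ai_complement):
    """
    apprentice_output: value of the apprentice's menial tasks without AI
    ai_substitution:   fraction of those tasks AI now performs (0..1)
    expert_surplus:    value the firm later extracts from a trained expert
    ai_complement:     multiplier AI adds to expert productivity (>= 1)
    """
    current_value = apprentice_output * (1 - ai_substitution)
    future_value = expert_surplus * ai_complement
    return current_value + future_value >= training_cost

# AI wipes out 90% of the apprentice's "currency" but triples expert output:
print(firm_trains(10, 30, 12, ai_substitution=0.9, ai_complement=3.0))  # True
# Same substitution, but AI barely helps the expert: the ladder disappears.
print(firm_trains(10, 30, 12, ai_substitution=0.9, ai_complement=1.0))  # False
```

The ratio of `ai_complement` to `ai_substitution` is doing all the work, which is exactly the point: training survives only if AI boosts trained experts by more than it erodes what apprentices can offer.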
AI creates a supervision threshold that changes how we learn
Predicting the future of AI is difficult. While there are general patterns and scaling laws, we are in a period of massive technological change and navigating it with limited visibility. Luis Garicano notes that if AI is as revolutionary as expected, the future could hold big surprises.
A key concept in understanding AI's interaction with human skills is Moravec's paradox: tasks that are hard for humans can be easy for AI, and vice versa. This leads to situations where humans spend time fixing simple mistakes made by AI. This dynamic introduces what Luis calls a supervision threshold.
You need to be smarter than the AI in order to be able to correct the AI. Think of a kid who is now going to school and they can use ChatGPT to make the essay much better than them. They just do the essay and they hand it in. They can't see where the mistakes are or the things are actually not perfect. So they are never going to arrive to the supervision threshold.
This presents a significant challenge for skill acquisition. Luis shares an anecdote from a friend, a high school English teacher, who struggles to get students to read classics like Hamlet because they can use AI to generate answers without engaging with the material. In professional settings like law or consulting firms, the assumption is that seniority gives you the ability to add value beyond what AI can do. You operate above that supervision threshold.
This shift will force companies to rethink training. Junior employees may no longer learn by performing routine tasks, as those will be automated. Instead, training might involve a new approach.
I think companies will have to think of training in different ways. Maybe they have to think of, okay, we're going to train the workers by, maybe we hire less of them, but the ones we have, we train them by going over the AI output and reviewing it. So that there is actually a way that you're still improving but you're not going through all these routine tasks that at the end of the day don't have any value at all anymore.
The inherent difficulty of financing human capital
When firms invest in creating human capital, equity-type arrangements are difficult to implement. Luis Garicano points out the significant moral hazard problem involved. Once a company invests in a person, that individual still decides how much effort they put in. If the company reaps the financial upside, the individual may lack the motivation to work hard.
Market solutions for this have historically been hard to find. Debt financing, like loans for MBAs, is more common but also faces challenges. It's difficult to secure a loan against human capital itself. As Luis notes, you cannot use a person as collateral because slavery is forbidden. This is why many student loan programs, in both the US and UK, rely on government guarantees.
An interesting exception to the equity problem exists in professional sports. A football team can train a player and then receive a transfer fee when that player moves to another team, which functions like an equity arrangement. However, this is a very unusual context. Overall, equity in human capital is hard to execute, while debt is more promising but remains tricky due to moral hazard and the impossibility of repossession.
Navigating the dual role of AI in education
The role of AI in education presents a complex challenge. On one hand, AI-powered learning tools can be tailored to individual students. On the other, instructors observe that students use AI to get summaries of summaries, avoiding fundamental learning tasks like reading.
Luis Garicano explains his dual approach to AI through the structure of his two university classes. For his first-year microeconomics course, he believes students must internalize basic models, facts, and tools to think effectively. In this context, AI is an 'enemy' because it allows students to complete problem sets automatically without genuine understanding, which becomes apparent during exams. To counter this, he enforces traditional methods like using blue books for problem sets and exams. However, he also encourages students to use tools like Claude as a tutor to explain concepts they don't understand.
In contrast, his second-year class focuses on applying economic principles to real-world policy problems, such as a water shortage in Tegucigalpa. For this advanced work, AI use is obligatory. Students must use AI to analyze the political economy, develop narratives, and plan implementation. This enables them to tackle complex analyses and data modeling that, just a few years ago, would have been at a PhD level.
The way that training is going to work has to radically change in using the AI tools to learn and using the AI tools to get much further. But at some basic level, we need to be able to persuade the students, that's the difficulty, that the basics they need to learn.
The key is persuading students of the value of learning the basics, as learning to write is fundamental to learning to think. A potential solution for foundational learning is to implement dedicated, device-free homework time in a supervised setting, like a library. The future of education lies in a hybrid model that both protects foundational learning and leverages AI for advanced applications.
How Europe's AI regulations risk stifling innovation
The optimistic view that we can simply choose to develop AI that complements human labor might be flawed. As AI becomes more advanced, it may inherently become a substitute for human work, leaving us with a difficult choice: accept advanced AI with substitution, or reject it altogether. Luis Garicano suggests that slowing down this technological "runaway train" is not a realistic option due to the strategic competition between the U.S. and China. If one country pauses development, the other will surge ahead.
Furthermore, the idea that "we" can direct technology is complicated. The pronoun "we" obscures the reality of competing interests among nations, firms, workers, and industries. Luis points to Europe as an example of the unintended consequences of trying to control technology too tightly. He describes his experience in the European Parliament during the creation of the EU AI Act.
This letter actually came in the middle of the elaboration of the EU AI Act... there was a moment of panic in Europe. And the person who actually wrote the law from the commission has given an interview... that it was a bad moment for that letter because really Europe decided, okay, this is too risky. Let's put all these guardrails all over the place.
The result is a regulatory framework that could prevent Europe from realizing many of AI's potential productivity gains. The AI Act classifies applications into risk categories. "High-risk" uses, which include education and health, face immense regulatory hurdles. For example, an AI tool to create customized quizzes for students would be deemed high-risk. This subjects it to requirements like using error-free, unbiased training data—an almost impossible standard given that the internet is the primary training corpus. Startups must also maintain detailed records for ten years, prove accuracy, and register with dozens of EU authorities. These compliance costs, on top of existing regulations like GDPR, risk stifling innovation and leaving Europe without its own foundational AI models, despite having talented researchers and capital.
The EU AI Act targets the most powerful AI models
One concern with artificial intelligence is not just the extreme risk of a scenario like a system turning all humans into paperclips. A more gradual but still serious risk exists. If many AI systems are deployed throughout the economy without being optimized for what humans truly care about, they could slowly shift society off the rails.
In response to such concerns, regulations like the EU AI Act have emerged. This act specifically targets what it terms "systemic risk" from General Purpose AI (GPAI) systems. The focus is on the most powerful models, defined by thresholds such as more than 10^25 training flops. This approach means the regulation primarily applies to the most capital-intensive companies, subjecting a specific group of well-funded actors to additional scrutiny.
AI models like GPT-4 already exceed the systemic risk threshold
A specific category exists for systemic risk in AI models, defined by a computational-power threshold on the order of 10^25 training flops (figures near 10^24 were also discussed during drafting). This level was calibrated to the capabilities of models like GPT-4.
Models of that generation, including GPT-4 and the largest Llama models, already exceed this threshold. Because they fall into this high-risk category, these very large systems require additional scrutiny: they must undergo adversarial testing and provide evidence of their safety to mitigate potentially severe risks.
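The compute trigger itself is a simple cutoff, which a minimal sketch makes clear (the threshold constant follows the 10^25 figure discussed above; the per-model compute estimates are hypothetical, for illustration only):

```python
# Minimal sketch of the AI Act's compute-based trigger for GPAI "systemic risk".

SYSTEMIC_RISK_FLOPS = 1e25  # threshold discussed in the text

def has_systemic_risk(training_flops: float) -> bool:
    """True if estimated training compute crosses the Act's threshold."""
    return training_flops >= SYSTEMIC_RISK_FLOPS

# Hypothetical compute estimates, not real measurements:
for name, flops in [("small open model", 3e23), ("frontier model", 2e25)]:
    print(f"{name}: additional scrutiny = {has_systemic_risk(flops)}")
```

Note that a pure compute cutoff is a proxy for capability: it catches the most capital-intensive training runs, which is precisely why the regulation lands on a small set of well-funded actors.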
The EU's over-regulation threatens necessary AI-driven growth
Luis Garicano is desperately worried about the European Union's future, particularly in the context of AI. He fears that the EU is in a situation where significant benefits from AI may not be realized due to over-regulation. These potential gains include large increases in productivity and welfare for citizens. For example, AI could make driving safer, reduce car crashes, and provide accessible legal advice for everyday matters like contracts or negotiating with a landlord.
If these innovations are not allowed, productivity and economic growth will suffer. This is especially dangerous for Europe, which already faces a demographic problem and high levels of debt. Europe desperately needs growth to pay its bills. France is cited as an example, a country that is not growing and also has substantial explicit and implicit debt from its pension liabilities. The core fear is that the European Union has over-regulated itself and will not achieve the growth it critically needs.
The UK's potential to become the Silicon Valley of AI
Luis Garicano believes the UK has a very bright future, particularly in AI. He prefaces his analysis by stating he thinks Brexit was a bad idea for both the UK and Europe. Europe lost the UK's influence, which consistently pushed for a more free-market and open-minded approach. After leaving, the UK diverged from the EU and has adopted a very pro-AI posture, even while continuing to follow EU rules in other areas like environmental policy.
Luis is optimistic about the UK's potential, which is why he teaches at the London School of Economics. He acknowledges that the government isn't moving quickly enough to capitalize on its advantages. However, the country has all the necessary components for success in AI: capital, a nucleus of talent from Oxford and Cambridge, and major labs like DeepMind. The one element that might be missing is a strong risk-taking mentality.
I think the UK could be Silicon Valley. I don't see why that could be impossible. Maybe the risk taking mentality is the one that is missing. It's not quite there.
Europe's strategy in the global AI value chain
The AI value chain can be broken down into an infrastructure layer, a lab layer, and an implementation layer. The distribution of value across these layers will determine the economic prospects for different parts of the world. Luis Garicano argues that Europe's best opportunity lies in capturing value at the implementation layer, as it is unlikely to compete on the lower layers.
The infrastructure (hardware) layer is dominated by the US and China. The learning curves are steep and barriers to entry are high, similar to Intel's decades-long dominance in PC hardware. A significant amount of value will be captured here. Similarly, in cloud computing, there are risks of high switching costs as providers add features to lock in customers. Europe is trying to mandate data portability and ensure data is encrypted on local servers, but the extraterritorial reach of the US Cloud Act remains a risk.
The foundation model layer, or the LLM layer, appears to be highly competitive. No single company maintains a long-term advantage, as users frequently switch between models like Gemini, Claude, and OpenAI based on the latest features. The rise of open-source models also fuels competition. The main threat to this competitiveness is the introduction of switching costs through memory, where a model's familiarity with a user makes it difficult to leave. Luis emphasizes the importance of data portability, drawing a parallel to social media.
In social media, there's no portability. The data of my graph and of everything about me belongs to Meta or to Twitter... Imagine a world where I send a message and everybody who likes me can follow me from any platform where it's completely interoperable. Market power would change radically.
Luis advocates for a "smart second mover" strategy for Europe. This involves free-riding on the massive investments in LLMs and data centers made elsewhere, particularly in the US. Instead of trying to imitate these efforts, Europe should use its resources to build a strong implementation layer while securing its autonomy through data encryption and local data centers. However, this strategy faces significant geopolitical hurdles. The US government strongly supports its tech giants, making it difficult for Europe to enforce a level playing field and interoperability. Europe's dependence on the US for defense, highlighted by the war in Ukraine, further complicates its ability to negotiate forcefully.
Can US pressure save Europe from its own regulatory inertia?
When considering economic security, a key issue is sovereign computing power. While Europe desires its own data centers to support AI and the economy, public investment is not the solution. The cost of building massive, gigawatt-plus data centers is enormous, with projections reaching a trillion dollars by 2030. Luis Garicano notes that private sector investment is already happening, with major data centers being built in Portugal and Spain by companies like Google and Amazon. While not truly 'sovereign' since they are US companies, their local presence provides some degree of control.
The scale of private R&D spending by individual tech giants dwarfs the budgets of entire European governments. Therefore, for Europe to build out its infrastructure, it must create a welcoming environment for the private sector. This means streamlining regulations and removing obstacles to investment. However, Europe has a tendency to throw up regulatory hurdles that hinder this progress.
This leads to a paradox where aggressive US government action might inadvertently save Europe from its own risk-averse tendencies. Luis describes Europe's pattern of reacting to geopolitical shocks with alarm, only to fall back into complacency.
We had a year of wake up calls. We said, wake up call number one, Trump gets elected. Wake up call number two, the sofa scene where Vance and Trump ambushed Zelensky. And all the time it's like, this is a wake up call for Europe. We cannot trust our old ally, the U.S. we need to act together. And then we go back to sleep. The wake up calls don't seem to be waking us up at all.
Undoing existing legislation is extremely difficult in Europe. The same institutions and political coalitions that championed rules like the AI Act are now tasked with reversing them. Luis explains that since these laws were seen as major victories, the architects are reluctant to admit they went too far. This institutional inertia makes error correction nearly impossible. For example, a measure to reduce excessive corporate paperwork was expected to pass but failed in Parliament because many were invested in the original legislation. While there's a hope that US pressure might force a change, Luis remains pessimistic about Europe's ability to save itself from its own regulatory gridlock.
Europe risks higher interest rates without AI-driven growth
A simple demand and supply framework can be used to understand AI's potential impact on interest rates. Over the last 40 years, demand for assets has significantly outpaced supply, driven by slow growth and demographic changes. This caused asset prices to rise and interest rates to fall, which was a great benefit for countries carrying a lot of debt.
AI is expected to reverse this trend. Higher productivity growth from AI will increase the supply of assets as companies raise equity for new investments. At the same time, demand for assets might fall as younger workers feel less need to save in a high-growth economy. This combination is predicted to lead to a drop in asset prices and an increase in interest rates (R).
The optimistic view is that while interest rates (R) will rise, economic growth (G) will rise even more. Since fiscal sustainability depends on the R minus G formula, this outcome would be manageable. However, Luis Garicano worries that Europe may face a different scenario. He explains that Europe could get the negative effects of higher interest rates without the positive effects of higher growth.
What I worry about Europe is that you are going to have the bad part of having to pay higher rates without having the good part of having higher growth rates.
This could happen if various sectors in Europe, like transportation or law, create bottlenecks that resist AI adoption. Without AI-driven growth, Europe would still have to pay the higher global interest rates, worsening its already significant debt sustainability problems. This serves as another wake-up call for the European Union to adopt a more aggressive stance toward progress.
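The sustainability arithmetic behind this worry is the standard debt-dynamics identity: with debt ratio d, interest rate r, growth g, and primary balance pb (all relative to GDP), next period's ratio is d' = d(1 + r)/(1 + g) − pb. A quick sketch with toy parameters of my choosing (not forecasts from the episode) shows why the R-minus-G gap is what matters:

```python
# Debt-to-GDP dynamics: d' = d * (1 + r) / (1 + g) - pb
# All parameters are illustrative, not projections.

def debt_path(d0, r, g, pb, years):
    """Evolve the debt-to-GDP ratio for a given number of years."""
    d = d0
    path = [d]
    for _ in range(years):
        d = d * (1 + r) / (1 + g) - pb
        path.append(d)
    return path

# Feared scenario: global rates rise (r = 4%) but Europe resists AI (g = 1%).
bad = debt_path(1.10, r=0.04, g=0.01, pb=0.0, years=10)
# Manageable scenario: rates rise, but AI-driven growth rises more (g = 5%).
good = debt_path(1.10, r=0.04, g=0.05, pb=0.0, years=10)

print(round(bad[-1], 2))   # ratio climbs when r > g
print(round(good[-1], 2))  # ratio drifts down when g > r
```

With r above g and no primary surplus, the debt ratio compounds upward every year; flip the sign of the gap and the same starting debt slowly erodes, which is why "higher R without higher G" is the worst draw for an indebted continent.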
However, this consensus view is not without question. A case can also be made that AI could lead to a decline in interest rates. This could be driven by precautionary saving, as people save more due to uncertainty about their future, or by rising inequality, as wealthier individuals save a larger portion of their income. While acknowledging these possibilities, the primary expectation is that accelerated growth from AI will lead to a higher return on capital and thus higher interest rates.
The distributional effects of AI on European wealth
While Europe is a continent of creditors with large net surpluses, the benefits of a rising return on capital (R) are not straightforward. On the surface, it seems Europe should get richer. As a continent of net savers, it should capture some of the gains from AI-driven productivity, even if that innovation happens elsewhere. European savings are often exported to American companies, which then employ European entrepreneurs.
However, Luis Garicano points out that the distributional impact is complex. The benefits depend heavily on how wealth is held across different countries. For example, the Netherlands has large pension funds with significant exposure to interest rates, so its citizens may benefit broadly. In contrast, countries like Spain and France rely on state-run, 'pay as you go' pension systems. In these nations, the vast majority of the population has almost no financial wealth. Their primary asset is housing, so homeowners may see gains. Ultimately, only the top 3-5% of the population with significant financial assets will see a direct benefit from the rising returns, creating complex distributional challenges.
AI as a solution to demographic decline
Declining demographics are a significant concern, not just in the developed world but globally. Luis Garicano highlights that total fertility rates are plummeting unexpectedly in developing countries like Colombia, Tunisia, and Turkey. This is a strange reversal of the usual pattern where countries get rich first and then start aging. Now, they risk growing old before they get rich.
This demographic collapse creates serious problems, particularly for the care economy. As populations age, the need for care will be enormous. AI could play a crucial role here by performing tasks that are typically done by humans, helping to fill professions that will be difficult to staff. Luis recounts a conversation with Joshua Gans, who suggested people might even prefer a robot for personal care needs over a human. This supports a more positive stance on AI, as it can provide the labor needed to sustain growth amidst demographic decline.
Of course it could be that AI leads people to want to have AI companions. And I don't know if that makes the fertility crisis worse. But okay, that's the consumption choice that, but we cannot predict how that will play out. But it does seem like people like to have AI friends.
Beyond the care economy, AI's most significant impact could be in research and development (R&D). Some economic models suggest population growth is necessary for economic growth, so falling fertility rates are problematic. AI can help by accelerating the production of ideas. According to work by economists like Chad Jones, AI's true power isn't just as another form of capital, but as a tool for R&D. If AI can speed up the creation of ideas, it can truly accelerate growth.
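The logic of this literature can be written compactly; a minimal version of the semi-endogenous idea production function used in Jones-style models (standard in that literature, not quoted from the episode) is:

```latex
\dot{A} = \delta \, S^{\lambda} A^{\phi}, \qquad \lambda > 0,\ \phi < 1,
```

where $A$ is the stock of ideas, $S$ the number of researchers, and $\delta$ research productivity. With $\phi < 1$, sustained growth in ideas requires sustained growth in $S$, which is exactly why falling fertility threatens long-run growth. AI enters by effectively augmenting $S$ (or raising $\delta$): if machine researchers can substitute for the shrinking human research workforce, idea production, and hence growth, need not stall.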
Luis is optimistic about this potential. He points to examples like mathematician Terry Tao, who uses AI to facilitate collaboration in larger teams by verifying parts of complex proofs. There are already tangible results from AI in fields like combinatorics and protein folding. Given the demographic challenges, AI's ability to boost research and innovation could be crucial.
