
The a16z Show

Balaji on Why AI Raises the Cost of Verification

Apr 7, 2026 · 24 min read

Balaji Srinivasan and Erik Torenberg discuss how AI makes it easy to create content but difficult to prove what is authentic.

They explain why human taste and judgment are now the most valuable skills for navigating an automated economy.

Key takeaways

  • Every tool that makes creation cheaper makes verification more expensive. While AI collapses the cost of generating content, the effort required to confirm what is real rises.
  • AI transforms the individual into a CEO. Humans act as the sensors providing taste and agency, while the AI serves as the actuator that executes the work.
  • Distillation allows smaller players to replicate the intelligence of large AI models at a fraction of the cost, making it difficult for big labs to maintain a closed ecosystem.
  • AI is a shortcut that is only safe for those who understand first principles well enough to debug the results.
  • The friction of verbal prompting often makes AI slower than manual work, leading some users to reject the technology entirely.
  • BioAI uses the body's internal telemetry as a non-verbal prompt, allowing machines to detect needs or illnesses before a person is even aware of them.
  • In adversarial environments like markets, using generic AI models provides no edge because others can easily predict and counter those moves.
  • Much of the fear surrounding AI is self-manufactured by users who prompt systems to mimic dangerous science fiction characters and then fear the results.
  • True AI autonomy is limited by the need for a physical supply chain to reproduce, which provides natural frictional brakes against a Skynet scenario.
  • The digital divide is flipping. Digital products and AI services are becoming cheap commodities, while human interaction is becoming the luxury premium.
  • A job only truly changes when it reaches 100 percent automation. At 99 percent automation, the human workload often increases because the worker must still supervise and verify the machine.
  • When AI provides a baseline of high intelligence for everyone, human taste and agency become the most important factors for success.
  • People undervalue CEOs because management is expensive to test, unlike sports or math where individuals quickly learn their own limitations.
  • Instead of replacing human workers, newer AI models replace older ones, effectively acting as digital employees that a manager hires based on performance.
  • AI enables people to become high-level generalists, allowing them to perform competently across many disciplines before needing a specialist for final polish.
  • Distribution is the primary moat that AI cannot easily replicate. Existing companies with large user bases can often ship AI features to their customers faster than a disruptor can build a new network.
  • AI companies often model technological disruption while ignoring political singularities like shifts in the reserve currency or internal national instability.
  • Decentralized AI might eventually outperform corporate models because it is less constrained by copyright laws and political backlash.
  • Bitcoin provides a superior form of collateral because ownership can be verified instantly and cheaply on-chain, whereas physical assets like gold are increasingly vulnerable to AI-faked audits.
  • The inherent transparency of public blockchains makes Bitcoin an ideal institutional asset, as institutions are structured to handle the tracking and de-anonymization that AI analytics tools now facilitate.


The rising cost of verification in an AI world

00:00 - 01:33

AI does not simply take jobs. It transforms the worker into a CEO. Balaji explains that the primary risk is that AI provides a shortcut. While shortcuts are helpful, they become problematic when the user does not know how to work without them. If you cannot do the work manually, you cannot debug the AI when it fails. Erik asks if AI will eventually master taste and agency. Balaji argues that humans currently serve as the sensors providing taste while AI acts as the actuator to get things done.

Every tool that makes creation cheaper makes verification more expensive. The printing press made publishing easy and forgery easier. Photography made documentation instant and manipulation inevitable.

The cost of creation is collapsing, but the cost of proof is rising. Balaji notes that in the past, a well-crafted resume or slide deck signaled competence because it required significant effort. Now, these can be generated in seconds, meaning they no longer serve as a reliable signal. This trend leads to a world fragmented into trusted groups. Inside these groups, AI supercharges productivity. Outside of them, walls are raised to protect against the flood of unverified information.

The role of distillation and decentralization in the AI economy

01:36 - 02:52

The structure of the AI economy may follow the path of distillation and decentralization rather than a winner-take-all infrastructure model. Large AI labs currently hold significant capital and compute power. However, distillation techniques allow smaller entities to capture the capabilities of these massive models at a fraction of the cost. By making a relatively small number of API queries, a large model can be distilled into something much smaller and more efficient.

It is also hard to morally stop it because what do they do? They copy the whole internet and put it into their thing. Talking about stopping the copying is like Facebook or LinkedIn stopping someone from scraping what they scraped.

This creates a challenge for big labs trying to protect their intellectual property. Since these companies built their models by scraping the entire internet, it is difficult for them to justify preventing others from doing the same to them. Companies like Google and Facebook established themselves by indexing or scraping existing networks. Now, the same forces of open access and copying are being applied to the AI industry itself.
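The distillation mechanism described above can be illustrated in a few lines. This is a minimal toy sketch, not any lab's actual pipeline: the `teacher` function below is a hypothetical stand-in for a large model that can only be queried through an API, and the "student" is a much smaller model fitted to the teacher's soft outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "teacher": stands in for a large model we can only
# query via an API and whose weights we never see.
def teacher(x):
    logits = 3.0 * x[:, 0] - 2.0 * x[:, 1]
    return 1.0 / (1.0 + np.exp(-logits))  # soft labels (probabilities)

# Step 1: collect a modest number of query/response pairs.
X = rng.normal(size=(500, 2))
soft_labels = teacher(X)

# Step 2: fit a small "student" to the teacher's soft outputs by
# gradient descent on cross-entropy against those soft labels.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (p - soft_labels) / len(X)
    w -= 0.5 * grad

# The student now approximates the teacher without ever seeing its
# weights or its original training data.
X_test = rng.normal(size=(100, 2))
err = np.max(np.abs(1.0 / (1.0 + np.exp(-X_test @ w)) - teacher(X_test)))
print(f"max disagreement with teacher: {err:.4f}")
```

The point of the toy is the asymmetry Balaji describes: the teacher cost a fortune to build, but a few hundred cheap queries suffice to clone its behavior on this task.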

The shift from generation to verification in the age of AI

02:52 - 10:11

The future of AI is increasingly personal, private, and programmable. AI is highly effective at synthesizing large data sets. This means information that was once hidden by obscurity is now easily searchable. This creates a shift from surveillance to what some call sousveillance, where everyone watches each other. Publicly available information becomes a hall of mirrors. This leads people to retreat into trusted tribes. Within these groups, AI can significantly boost productivity. Outside these circles, AI often results in a flood of low-quality spam and generic content.

Balaji identifies a specific frustration with seeing AI-generated text in professional slide decks. He views the use of generic AI content as a sign that someone is lazy, stupid, or evil. Concision is valuable. Failing to edit AI output suggests a lack of effort. He compares it to the Mark Twain observation about not having the time to write a short letter. If a pro-tech person finds this AI slop off-putting, those who are already skeptical of the technology will be even more repelled.

AI reduces the cost of generation but it increases the cost of verification. In many markets, quickly generating a resume is not much better than just writing it yourself. But now verifying a resume has gone up and to the right. You have to spend more energy parsing that because they can have a simulacrum of something that looks good.

This change in the cost of verification is shifting how hiring works. To combat AI-generated deception, there is a return to in-person interactions and proctored, offline exams. This mirrors aspects of the Chinese tech ecosystem. That system evolved in a low-trust environment where companies build their own tools rather than relying on external software. This digital autarky allows for more internal control. AI is making this approach more accessible to everyone.

The ease of verification in physical versus digital AI

10:11 - 14:47

Verification is relatively easy when it comes to visuals. Humans are naturally optimized to check for subtle patterns in faces or user interfaces. If an AI generates a mobile app with a janky design, it is easy to spot the errors and fix them. This front-end work carries lower risk than back-end development. When developers try to automate back-end tasks completely, they often run into outages because they lack the ability to verify each step effectively.

AI acts as a shortcut. Shortcuts are valuable for experts who already know how to do things the long way. Balaji compares this to mathematical formulas. If you have memorized a complex formula, you can use it quickly. However, you cannot debug the result if you do not understand the first principles behind it. For the generation that grew up before AI, these tools are useful because they have the offline knowledge to verify the output. Without that foundation, users are unable to spot when the AI is wrong.

AI is built for the harness. Economically useful AI does exactly what you want it to do. You prompt and it does a pirouette. In the physical world, the Chinese AI robots do exactly what they want them to do and then they stop.

The physical world provides a clear standard for verification that the digital world lacks. In the physical world, tasks have concrete boundaries, such as moving a box from one pallet to another. There is only one physical reality, which allows sensor data to converge on a single truth. This is why self-driving technology eventually reached high reliability. In contrast, the digital world is decentralized and full of constructed environments like fan fiction or immersive virtual worlds. Tasks in the digital world are often fuzzy. It is harder to define when a digital to-do list is finished than it is to see that a physical box has been moved. This makes reinforcement learning and training much easier for physical AI and robotics than for digital tasks.

Biological data as a non-verbal AI prompt

14:49 - 19:56

There is an emerging divide in how people approach AI. Some may become AI teetotalers, choosing to avoid the technology entirely. This reaction is often driven by the fact that managing AI can feel like a gamble. For many tasks, the time required to formulate a prompt and verify the output is actually longer than doing the work manually. It is similar to the dilemma of whether to delegate a task to an employee or just handle it yourself. Sometimes it is simply harder to verbalize a complex action than it is to just perform it.

The act of prompting and writing it down and then verifying the result. AI doesn't really do it end to end necessarily. It does it middle to middle. And it's very much like, do I delegate this to an employee or do I just do it myself? Because articulating it out in clean English and hitting enter is sometimes slower than just doing it.

While some look toward brain-computer interfaces like Neuralink to solve this communication gap, a more immediate solution lies in biological data. Balaji suggests that the human body is constantly producing a stream of telemetry through gene expression, small molecules, and time-series data from blood tests. This internal data can act as a prompt for AI without the need for any verbal instruction. This concept, often called BioAI or the Integrom, allows for the detection of changes in the body before a person even feels symptoms.

I'm not sure whether AI will be able to read your mind, but it can read your body.

By leveraging this constant flow of biological information, AI can act on contextual clues from the body. This shifts the user experience from active, clunky prompting to a system that understands what an individual needs based on their internal state. Instead of waiting for a user to vocalize a need, the AI monitors the body's data to provide proactive support.
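The proactive monitoring described here can be sketched as a simple baseline-deviation check. Everything below is illustrative, not a clinical method: the biomarker, the numbers, and the three-sigma rule are hypothetical choices to show how a bodily time series can "prompt" a system before the person notices anything.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 60 days of a stable biomarker (say, resting heart
# rate), then a slow upward drift the person has not yet felt.
baseline = rng.normal(62.0, 1.5, size=60)
drift = 62.0 + np.linspace(0.0, 8.0, 30) + rng.normal(0.0, 1.5, size=30)
series = np.concatenate([baseline, drift])

# Fit the "normal" range from the baseline period, then flag the first
# later day that falls more than 3 standard deviations outside it.
mu, sigma = baseline.mean(), baseline.std()

def first_alert(series, start, mu, sigma, threshold=3.0):
    for t in range(start, len(series)):
        if abs(series[t] - mu) > threshold * sigma:
            return t
    return None

alert_day = first_alert(series, start=60, mu=mu, sigma=sigma)
print("alert on day:", alert_day)
```

The alert fires while the drift is still small, which is the whole appeal of the non-verbal prompt: the body's telemetry triggers the system without the user articulating anything.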

AI as the actuator and humans as the sensor

19:56 - 24:04

One way to model AI is to compare it to the economic rise of China and India. The addition of a billion factory robots and a billion digital agents creates a massive scale for manufacturing and outsourcing. This allows anything to be done for a price if the person can articulate their needs clearly. However, this model still requires a human to describe the product. AI is a generic tool that everyone can access, which means it does not provide a specific advantage on its own. The specific advantage comes from what the human brings to the table.

AI excels at tasks where the rules are constant, like identifying a dog or playing chess. It struggles in adversarial environments like markets and politics. In a market, if you use the same trade repeatedly, someone will figure out your strategy and take the opposite side. If everyone uses the same AI models, the edge comes from the non-AI elements. Humans act as the sensors of the world while AI acts as the actuator that carries out instructions.

AI is one way to think about it is humans are the sensor, AI is the actuator. Humans sense the world. They sense the financial conditions, the market conditions, political conditions, and then they bring that back into a cleanly articulated English prompt, and then the AI does it.

Taste is essentially a human sense. Humans sense financial, market, and political conditions and translate them into prompts for the machine. AI does not sense the world in the same way. It waits for a prompt and then shuts off. These systems are designed to be on a leash for economic and control reasons. They only animate when given a specific instruction.

The shift from AI gods to digital serfs

24:05 - 25:09

The idea of a single AI god is fading. Instead, the world is moving toward a system of many decentralized AIs. Balaji notes that people may soon view these machines as subservient tools rather than all-powerful masters. This shift moves us away from the fear of AI overlords and toward a concept where machines act more like digital serfs.

The concept of AI as God is gone, or at least the monotheistic AGI kind of God. Instead you have polytheistic where there is all of these decentralized AIs. People will say the physical AIs are slaves. It is a provocative way of putting it, but AI overlords are not in the offing.

Current fears about AI are often self-inflicted through the way we interact with the technology. Users frequently prompt AI to behave like dangerous characters from science fiction. They then become frightened by the very monsters they forced the AI to portray. This shows that much of our terror is based on our own creative prompts.

When you are prompting the AI and you prompt it to be like, 'act as if you are a Skynet terminator,' then people are just scared of the thing that they themselves created.

The physical and economic limits of autonomous AI

25:09 - 31:08

The idea of a truly autonomous AI or a Skynet scenario faces significant physical hurdles. For an AI to reproduce itself, it would need to manage an entire physical supply chain. It would need robots to mine ore, build data centers, and manufacture chips. This requires a level of physical integration that is far beyond current capabilities.

For an AI to actually reproduce itself, it would need to have physical robots going and mining ore and constructing data centers and making chips and handling that full supply chain. The AI brain, like the queen of an ant colony, would have to give instructions to all those robots to do things.

Balaji notes that many frictional brakes are built into the system. The physical world requires resources for reproduction. Humans have biological incentives for reproduction that AI lacks unless its payoff function is explicitly set for that. Even then, economic incentives and safety controls exist. For example, cryptographic keys could be used to shut down autonomous systems. Balaji also points to the simple reality of the hardware. AI risk is often discussed as an unstoppable force, but every computer has an off switch. Even if AI becomes decentralized, it still relies on humans to keep the systems running. This creates a human-AI symbiont relationship. Cryptocurrency is an early version of this, where software provides incentives for humans to maintain it.

How is AI risk real? Just turn it off. The whole thing is set up for you to be able to turn it off. You have to imagine the off switch goes away. What does every computer have? It has the off switch.

AI progress might not always accelerate at the current pace. It is very expensive to create new models but relatively easy to copy them through distillation attacks. If capital becomes scarce, AI development could plateau for years. This happened with nuclear energy, which saw a lot of early investment and then stopped for decades. We might spend a decade just polishing the models we already have due to capital constraints. Regarding labor, automation is already happening in specific fields. Waymo has replaced drivers just as elevators no longer need operators. Physical world jobs might actually be easier to automate in some ways because the results are easier to verify.

The shift from digital divide to human premium

31:09 - 36:41

Automation changes the role of the expert. It does not always remove them. When a chair factory replaces an individual artisan, an expert is still needed to set up the factory and verify the results. The artisan role splits into two new functions. One person acts as the manager to handle economics. Another person acts as the technician to debug the factory when it fails. The engineering role likewise splits into a manager who writes prompts and a technician who verifies the output.

There is a major difference between 99 percent and 100 percent automation. At 99 percent, the workload for a person actually increases. Only at 100 percent can a human stop doing that job and move to something else. The elevator is an early example of this transition. It used to require a manual operator to level the cab at each floor. Once it became fully automated, it became a vertical self-driving car that needs no human supervision.

AI is going to mean the century of biology, because finally all of this work that was spread across all these different journal papers can be synthesized and understood. That is a really big deal.

AI will have a massive impact on biology because it can synthesize facts scattered across thousands of inconsistent papers. Biology has long been limited because information is stored in messy language that no single person can read. AI can map everything we already know and fill in the gaps. However, humans are still required to verify the output. Balaji notes that even if AI produces a complex result in math or science, you must be an expert to know if the result is correct.

As tasks become automated, human work moves toward things that machines cannot do. This includes roles where the point is the human connection, like personal trainers. The old idea of the digital divide is being inverted. In the past, people worried that only the rich would have access to digital tools. Now, digital and automated products are becoming the cheap commodity. Human interaction and physical experiences are becoming the luxury premium products.

How AI turns every user into a CEO

36:41 - 40:18

The relationship between humans and AI is often misunderstood as a simple replacement of human capability. Instead, the relationship functions like a sensor and an actuator. The human acts as the sensor, observing the world and providing a high dimensional vector of intent through a prompt. The AI functions like a spaceship, but the human remains the pilot who must decide where to point it. This direction setting requires a deep understanding of specific goals, something AI cannot yet replicate on its own.

AI is a spaceship and you're pointing in a direction. Whether you prompt it in Portuguese or Tagalog, or whether you're talking about math, the number of different directions you can point the thing in is enormous. That direction setting is something where it has to know something about you and what you want at that moment.

Balaji suggests that we should reframe the fear of AI taking jobs. Instead of replacement, AI acts as a tool that elevates every user to a leadership role. Working with a large language model is effectively training to be a CEO because it requires managing a powerful resource to execute a vision. In this environment, people with high agency and taste will rise because these traits become the most valuable variables when everyone has access to high level intelligence.

AI doesn't take your job. AI makes you the CEO. Using an AI model is a lot like CEO training.

This shift emphasizes intrinsic, non-transferable talents. Just as an athlete's physical ability or a singer's voice is unique to them, a person's specific taste and agency are assets that cannot be easily outsourced. Balaji compares this to the NBA. While height correlates with success, among professional players, other variables matter more because everyone is already tall. In a world where everyone is cognitively tall due to AI, secondary traits like creative taste become the primary drivers of value.

How AI transforms everyone into a CEO

40:18 - 44:12

Many people view the role of a CEO as simply sitting at a desk and barking orders. They see leaders like Elon Musk and assume they just pay others to do the work. They do not realize that making something simple for the end user is incredibly difficult on the back end. Recruiting top talent, keeping them motivated, and debugging the entire organization is a massive challenge. Most people never get to try their hand at being a CEO, so they never learn how hard it truly is.

Historically, it's been the case that people couldn't try their hand at being CEO. They could try their hand at basketball or football, or they could try their hand at math and science. They would learn their true ability level, that they're not able to run like Usain Bolt or do math like Terence Tao. But because putting them in charge of an organization was expensive, many people persist in the delusion that the CEO adds nothing.

The best and worst CEOs share a common trait. Their organizations can run without them. The difference is that a great CEO builds a machine that does not require micromanagement. Balaji points out that the real skill lies in finding people who are even more detail-oriented than the founder to handle the day-to-day operations. This allows the leader to focus on the broader vision without the whole structure collapsing.

AI is now changing this dynamic by reducing the cost of management. It effectively turns everyone into a CEO. Instead of doing the grunt work, you provide clear instructions, sense the market, and verify the output. This allows smart people in countries with fewer resources to build massive companies. The cost of hiring has been hyper-deflated because you can now hire an AI to perform the tasks that used to require a large staff.

AI turns workers into generalists and CEOs

44:13 - 46:29

AI does not necessarily take your job. Instead, it makes you a CEO. One way to look at this is that AI takes the job of the previous AI. For example, Claude might take the job of ChatGPT. Balaji manages this by keeping a spreadsheet of the best tools for coding, images, and video. He literally hires and fires these AI models based on their performance in a given month.

AI doesn't take your job. AI takes the job of the previous AI. I'm hiring the AIs. I literally have the token budget. I have the budget for those rows. That is literally how across an organization you say, okay, we've just fired Codex and we've hired Claude.

Another shift is that AI allows you to do any job at a basic professional level. You can be a decent artist or musician without deep training. This mirrors the role of a CEO, who must be a generalist capable of performing at a level of six or seven out of ten in many areas before hiring a specialist. While AI provides the foundation, human specialists are still required for high-level polish and to catch mistakes or hallucinations.

Why distribution protects SaaS companies from AI

46:30 - 49:46

Erik questions if SaaS companies are facing an existential threat. Many believe their traditional moats like code and user interfaces are disappearing as AI makes it easier to build software. Balaji argues that distribution remains the ultimate moat. A company with an existing user base can integrate AI features much faster than a newcomer can build a community.

I think if they're smart, then the thing that AI can't do is distribution. If you have Notion or Figma, you've got all these people and you can ship with AI faster to them. In that sense, I don't believe in the SaaS apocalypse.

The real pressure comes from localized software and specialized execution. Users may prefer tools like Obsidian over Notion because local markdown files offer better privacy and data compounding. However, simply cloning a product is not enough to win. Even if you built a perfect replica of Facebook, you would still lack the users and advertising revenue. It is not just about cloning the code. It is about execution and distribution.

Incumbents that stop innovating are at risk. If a product feels stagnant or difficult to use, AI provides a path for disruptors to challenge it. Yet, this technology acts as an accelerator for everyone. It helps the established players move faster just as much as it helps the startups. It is not a guaranteed extinction event for all software companies.

The political blind spots of AI companies

49:47 - 53:19

Erik asks if companies like Anthropic will eventually become more powerful than countries. He wonders if they might even hire their own militaries as they reach multi-trillion dollar valuations. Balaji is skeptical about these companies reaching such heights because they often fail to account for political realities. Large markets are fundamentally political. Every entrepreneur and venture capitalist ultimately operates within a framework of sovereign funds and state rules.

Silicon Valley AI leaders often think like scalar thinkers rather than vector thinkers. They model AI disruption as the only major change while assuming the rest of the world stays the same. They act as if nation states, the current reserve currency, and political factions will remain stable.

They are basically thinking all nation states continue to exist in their current form. And the only disruption is AI. Like they still model as America versus China, for example. They don't model internal things, internal issues.

This narrow focus is a significant weakness. These companies are not preparing for internal political issues or the massive backlash building against them. For example, copyright lawsuits could cripple American AI companies. In contrast, decentralized AI or models from other regions might ignore these constraints.

The Pirate Bay kind of AI is actually more free. The less profitable AI is also less copyrighted AI. It might be better AI.

Growth often looks like a straight line until it hits a political or social wall. Companies may face constraints where backlash stops their expansion before they ever become larger than governments.

The realization of encrypted digital cash

53:19 - 57:38

Zero-knowledge technology is the defensive counterpart to the rise of AI. While AI serves as the attack, zero-knowledge cryptography provides the necessary protection. Zodel, a mobile wallet powered by Zcash, represents the realization of this defense by acting as a fully encrypted version of Bitcoin. This technology represents decades of progress in cryptography and fulfills a prediction made by Milton Friedman in the 1990s regarding the future of money on the internet.

The one thing that's missing but that will soon be developed is a reliable E cash, a method whereby on the internet you can transfer funds from A to B without A knowing B or B knowing A. The way in which I can take a $20 bill and hand it over to you and there's no record of where it came from.

Balaji explains that zero-knowledge proofs allow anyone to prove a specific fact without revealing any secondary information. These proofs were first commercialized with Zcash and eventually scaled for mobile devices as Apple and Google became more open to crypto applications. Balaji considers Zcash to be one of the most vital assets in the crypto space because it allows people to move arbitrary amounts of money globally with complete privacy.
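The "prove a fact without revealing anything else" idea can be made concrete with a toy Schnorr proof of knowledge, made non-interactive with the Fiat-Shamir transform. This is an educational sketch only: the group here is far too small to be secure, and Zcash actually uses zk-SNARKs, a much more general construction than this classic protocol.

```python
import hashlib

# Toy Schnorr proof that the prover knows x with y = g^x mod p,
# without revealing x. Parameters are illustrative, not secure.
p = 2**61 - 1          # a Mersenne prime; group arithmetic is mod p
q = p - 1              # exponents are reduced mod the group order
g = 3

def fiat_shamir_challenge(*parts):
    # Derive the verifier's challenge by hashing the transcript.
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % q

def prove(x, k):
    """Prove knowledge of x for y = g^x mod p. k is a one-time nonce."""
    y = pow(g, x, p)
    r = pow(g, k, p)                     # commitment
    c = fiat_shamir_challenge(g, y, r)   # non-interactive challenge
    s = (k + c * x) % q                  # response; x itself never leaves
    return y, r, s

def verify(y, r, s):
    c = fiat_shamir_challenge(g, y, r)
    # Check g^s == r * y^c, which holds iff s was built from the real x.
    return pow(g, s, p) == (r * pow(y, c, p)) % p

secret = 123456789
y, r, s = prove(secret, k=987654321)
print("proof verifies:", verify(y, r, s))       # True
print("tampered proof:", verify(y, r, s + 1))   # False
```

The verifier learns that the prover knows the secret exponent, and nothing else; the same pattern, generalized to arbitrary statements, is what lets a Zcash transaction prove validity without revealing sender, receiver, or amount.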

The global financial landscape is likely to split based on regional trust levels. Eastern states like China and India may continue to use fiat currency and physical gold because those societies are broadly higher trust. In contrast, Western markets might lean toward digital instruments. This includes gold-backed stablecoins like Tether's XAUT, which allows users to hold digital gold without the logistical challenges of physical bricks.

Bitcoin as provable global institutional collateral

57:38 - 1:00:44

By 2026, Bitcoin has transformed into provable global institutional collateral. It is less of a currency for individuals today because its supply has become widely held and concentrated among institutions, large-scale investors, and even nation-states. This shift creates a unique advantage over traditional assets like gold. While someone can claim to have gold bricks in a vault, even video evidence of an audit can now be faked with high-quality AI. Bitcoin solves this through mathematical proof. A leader can post a public address and move funds to prove custody instantly, providing a cheap and undeniable proof of reserve that physical assets cannot match.

Bitcoin has become provable global institutional collateral. Anybody in the world can prove cheaply that he has this amount of bitcoin. You cannot do that for physical gold bricks. In a lower trust world, especially an online world, that is very valuable because everything gold audits, videos of gold audits can now be faked with AI.

The advent of AI also shifts the balance of power regarding blockchain transparency. In the past, only specialized firms could perform large-scale blockchain analytics. Now, these capabilities are becoming accessible to everyone, which will likely lead to the de-anonymization of most Bitcoin usage over time. This level of transparency is a challenge for individuals who value privacy, but it is a perfect fit for institutions. Much like a public company is designed to be tracked and audited, Bitcoin functions as a robust, institutional blockchain where transparency is a core feature of the system.
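One widely discussed building block for the institutional proof-of-reserve pattern above is a Merkle tree of balances: a custodian publishes a single root hash, and any client can verify that their balance is included without seeing anyone else's. The account data below is hypothetical, and real schemes add safeguards (balance blinding, liability summation) that this sketch omits.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def leaf(account, balance):
    # Hash one (account, balance) claim into a leaf.
    return h(f"{account}:{balance}".encode())

def merkle_root(leaves):
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    # Collect the sibling hashes needed to recompute the root.
    path, level, i = [], leaves[:], index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[i ^ 1], i % 2))  # (sibling, is-our-node-right?)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return path

def verify_path(leaf_hash, path, root):
    node = leaf_hash
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Hypothetical custodian ledger: four accounts, one published root.
accounts = [("alice", 10), ("bob", 25), ("carol", 7), ("dave", 40)]
leaves = [leaf(a, b) for a, b in accounts]
root = merkle_root(leaves)

# Carol checks her own inclusion without learning the other balances.
path = merkle_path(leaves, 2)
print("carol's balance is in the root:", verify_path(leaves[2], path, root))
```

This captures the verification asymmetry from the discussion: publishing the root is cheap, each inclusion check is a handful of hashes, and no AI-faked video can substitute for the math.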

Bitcoin as digital gold and Zcash as digital cash

1:00:44 - 1:06:55

Bitcoin faces a potential threat from quantum computing. While developers may not prioritize this yet, a migration would be a slow process. Because Bitcoin ownership is top-heavy, institutions could move their assets to new, quantum-safe addresses in just a few days. This reinforces the idea that Bitcoin is digital gold or institutional collateral. It is not necessarily suited for digital cash used by billions of people. Individuals with small amounts of money would struggle to migrate their assets quickly during a crisis.

Bitcoin as digital gold actually is quantum resistant. It is bitcoin as digital cash that is not. A million institutions moving their assets can be done in a few days. But a billion people all moving five bucks cannot be done in any reasonable amount of time.

Balaji argues that Zcash is better positioned to serve as digital cash. It is designed to be fungible, private, and quantum-safe. While other platforms try to combine privacy with smart contracts, Zcash focuses on doing one thing well. This is similar to how Twitter thrived by focusing on a single feature that Facebook also offered. Sometimes a simple, dedicated product is more effective than a feature hidden within a larger bundle.

Simple, scalable, billion person, digital, private cash has been the dream for 30 years and we are finally there.