David Sacks, tech investor and policy advisor, outlines a potential Trump administration strategy for AI and cryptocurrency.
He explains why winning the global technology race depends on unleashing open-source innovation rather than allowing regulatory capture by large companies.
Key takeaways
- Silicon Valley's success is built on 'permissionless innovation,' which allows anyone to build new technology without government approval. This principle is now under threat in the AI industry.
- Unlike the crypto industry, which united to ask for clear rules, some established AI companies are actively promoting fear to push for regulatory capture, aiming to create a pre-approval system that would block smaller competitors.
- The term "woke AI" is insufficient; the real danger is "Orwellian AI" that is designed to lie, distort answers, and rewrite history to serve a contemporary political agenda.
- The primary risk of AI is not a 'Terminator' scenario of superintelligence, but a '1984' scenario where it's used by powerful entities for censorship, surveillance, and information control.
- The current state of AI is a 'Goldilocks scenario'—it's driving major productivity gains with specialized models, but it is not on the verge of uncontrollable AGI and still requires significant human prompting and validation.
- AI agents are most effective when given narrow, specific tasks. This suggests they will act as synergistic tools for humans rather than as complete replacements.
- Intelligence is not the same as life. AI models are mathematical systems that lack human qualities like free will and sentience, making direct comparisons to human cognition flawed.
- AI is not consolidating power into a few hands but is instead hyper-democratizing, spreading to hundreds of millions of users faster than any technology in history.
- The competitive landscape, with multiple AI models constantly leapfrogging each other, contradicts the theory of a single model achieving a runaway lead through recursive self-improvement.
- Open-source AI is synonymous with freedom, allowing users and enterprises to run their own models and maintain control over their data, much like running an on-premise data center.
- Ironically, China is currently leading in open-source AI models, which may be a deliberate strategy to catch up and commoditize the software layer to bolster its hardware manufacturing.
- A patchwork of 50 state-level regulations would cripple US innovation in AI. The country's single national market is a fundamental competitive advantage that must be preserved with a unified federal standard.
- There is a fundamental culture clash between Silicon Valley and Washington. Silicon Valley believes winning requires building the biggest ecosystem through widespread adoption, while Washington's 'command and control' approach seeks to hoard technology.
- By restricting technology sales to allies, US export controls are counterproductive. They push countries toward Chinese technology, effectively helping build a competing ecosystem instead of protecting America's lead.
- A major short-term energy solution for AI is to optimize the existing power grid. Shedding just 40 hours of peak load annually could free up 80 gigawatts of power, but this is currently blocked by regulations.
- AI doomerism is replacing climate doomerism as a 'central organizing catastrophe' used to justify economic regulation and control over the information space.
- Crypto legislation like the Clarity Act is crucial for providing long-term certainty, as founders need stable rules that extend beyond the tenure of any single SEC chairman.
Trump's plan for winning the AI and crypto race
David Sacks explains that AI and crypto are grouped in his portfolio because they are new, often feared, and not well understood. The policy approaches for them, however, are different. For crypto, the primary need is regulatory certainty. Entrepreneurs want clear rules to follow, but the Biden administration's approach under the SEC has been described as "regulation through enforcement."
They don't tell you what the rules are. You just basically get indicted, and then everyone else is supposed to divine what the rules are as you get prosecuted and fined and imprisoned.
This uncertainty has been pushing the crypto industry offshore. In contrast, President Trump has declared he would make the United States the "crypto capital of the planet" by providing regulatory clarity. This approach is seen as pro-regulation in the sense that it establishes clear rules, which in turn protects consumers and enhances America's competitiveness.
For AI, the situation is almost the opposite. David argues the Biden administration has been too heavy-handed, attempting to regulate the technology based on fear without fully understanding it. The Trump administration's policy, however, is focused on ensuring the United States wins the global AI race, particularly against China. The philosophy is that since the private sector drives innovation in the U.S., the government should not impose burdensome regulations that hinder progress. Trump's policy is built on pillars of being pro-innovation, pro-infrastructure, pro-energy, and pro-export. David sees his role as a bridge between Silicon Valley and Washington, helping policymakers understand the tech industry's unique culture to avoid stifling innovation with excessive government intervention.
The threat of regulatory capture in AI
Marc notes that after the last election, many people in politics and finance have come to realize how severe the anti-crypto environment was. They previously thought tech founders were just whining, but now see that the horror stories of debanking and FBI raids were real. David agrees, highlighting that it was an extreme form of censorship. Not only were crypto companies debanked, but their founders were personally unable to open bank accounts, depriving them of a livelihood.
David recalls a White House crypto summit where an attendee remarked, "A year ago, I would have thought it was more likely that I'd be in jail than that I'd be at the White House." This illustrates the dramatic shift in attitude. David believes the previous administration deliberately withheld clear rules to drive the crypto industry offshore.
Marc contrasts this with the AI industry. While the crypto community was united in wanting clear rules, some major AI companies are actively seeking regulatory capture to block new competitors. David calls this a huge problem, pointing to his recent criticism of the AI company Anthropic. He recounts how a co-founder, Jack Clark, used a fear-mongering analogy about AI being like monsters in the dark. After Anthropic denied engaging in a regulatory capture strategy, a lawyer who attended Clark's speech revealed that Clark admitted their true goal was to establish a pre-approval system in Washington for new AI models and that making people afraid was part of that strategy.
David explains that this approach is damaging because it undermines the principle of "permissionless innovation," which has been the bedrock of Silicon Valley's success. Unlike heavily regulated fields like pharma or banking, tech allows a couple of founders in a garage to pursue an idea without government permission. Requiring pre-approval for AI models would destroy this.
The whole basis of Silicon Valley success, the reason why it's really the crown jewel of the American economy and the envy of the rest of the world... is because of permissionless innovation. And what is being contemplated and discussed and implemented with respect to AI is an approval system for both software and hardware.
This isn't just theoretical. The previous administration implemented a rule requiring licenses for GPU sales, which has since been rescinded. The current push for AI software regulation aims to create a system where Washington must approve new models before release. David warns this would slow innovation, make America less competitive, and potentially cede the AI race to China. Marc adds that the companies pushing this narrative are hypocritical, as they are buying GPUs faster than anyone and have notoriously poor security practices, suggesting they don't actually believe their own fear-mongering.
The path to Orwellian AI through regulation
Some AI companies use a narrative of being the sole virtuous guardians of a potentially world-destroying superintelligence as a potent recruiting tool. This approach often aligns with aggressive pushes for regulatory capture. Currently, around 1,200 bills concerning AI regulation are moving through state legislatures, with over 100 measures already passed.
A particularly troubling trend is the concept of "algorithmic discrimination," which has been implemented in states like Colorado, Illinois, and California. This rule states that if an AI model produces an output with a disparate impact on a protected group, it constitutes discrimination. The list of protected groups is extensive; for instance, Colorado includes people with limited English proficiency. This shifts liability from the business using the tool to the tool's developer.
This creates a difficult situation for model developers. It's nearly impossible for them to anticipate every potential use of their tool or to know if a factually accurate output might be part of a decision that has a disparate impact. The only practical way to comply is to build a DEI (Diversity, Equity, and Inclusion) layer into the models. This layer would have to sanitize or distort answers to prevent any potential disparate impact, leading to what some call "woke AI."
However, the term "woke AI" may not fully capture the severity of the issue. A better term might be "Orwellian AI."
What we're really talking about is Orwellian AI. We're talking about AI that lies to you, that distorts an answer, that rewrites history in real time to serve a current political agenda of the people who are in power. I mean, it's very Orwellian.
This was demonstrated by incidents like an AI model generating images of a Black George Washington, where history was rewritten in real time due to a DEI layer. This aligns with goals in the Biden administration's executive order on AI, which heavily emphasized promoting DEI values in models.
The real risk of AI is 1984, not The Terminator
The biggest risk of AI isn't what James Cameron described in The Terminator, but rather what George Orwell described in 1984. David Sacks argues that as AI becomes the primary way we get information, it will be used by those in power to control what we receive. He fears the "trust and safety" apparatus built for social media will be ported over to AI, leading to censorship and ideological bias. Coupled with AI's potential as a personal assistant that knows everything about you, it becomes a perfect tool for government surveillance and control.
In my view, it's not the Terminator. It's 1984 that, as AI eats the Internet and becomes the main way that we interact and get our information online, that it'll be used by the people in power to control the information we receive.
There seems to be a pullback in Silicon Valley from the narrative of imminent Artificial General Intelligence (AGI). Citing Andrej Karpathy, David notes that AGI is likely at least a decade away, partly because AI's reinforcement learning is different from how humans learn and reason. This suggests humans and AI will be more synergistic than competitive.
Instead of extreme scenarios like a Terminator-style apocalypse or an overhyped bubble, we are in a "Goldilocks scenario." AI is driving impressive innovation and productivity gains without becoming an uncontrollable superintelligence. This is supported by two key observations from Balaji Srinivasan. First, AI is "polytheistic, not monotheistic," meaning we are seeing many specialized models rather than one all-powerful AI. Second, AI is "middle to middle" while humans are "end to end." AI needs a specific prompt and objective from a human, and its output needs to be validated and iterated upon. It doesn't generate its own goals, which reinforces the idea of a synergistic relationship.
AI agents thrive on narrow tasks, not broad objectives
AI agents work best when given a narrow context and specific tasks. If you give an agent a very broad objective, like "sell my product," it is unlikely to figure out how to do it without human intervention. However, if a sales representative uses AI for a very specific task, it is much more likely to succeed.
This suggests that for the foreseeable future, AI will function as a synergistic tool that boosts human productivity rather than eliminating jobs. The need for human cognition is not going away. This holds true for at least the next five to ten years, though predicting beyond that is difficult.
This view is consistent with current observations. Early agents would often go "completely bananas" on long-running tasks, but they are improving. Context remains key. For example, among the many AI video models available, no single one is the best at everything. Different models excel at specific outputs, whether it's a meme, a movie, or an ad.
Mark Zuckerberg said something that I really liked. He's like, intelligence is not life. And these things that we associate with life, like, we have an objective, we have free will, we're sentient. Those just aren't part of a mathematical model.
The comparison between AI and humans often falls short. AI models are mathematical systems that search through distributions and can improve their logic, but they lack the qualities we associate with life. They are fundamentally different from humans, even though they are already better than us at many specific things.
The hyper-democratization of AI technology
The future of AI presents two possible paths. One path leads to a world where a few companies, governments, or super AIs control everything, consolidating all value. The other path leads to a decentralized future where the technology is a tool for individual empowerment, creativity, and expression. The current trend seems to be following the second path, as AI is becoming hyper-democratized. It has spread to more individuals around the world faster than any technology in history, with user numbers rapidly approaching a billion.
The most advanced AIs are available directly to consumers. Marc notes there isn't a better version of AI you can buy access to; it's already in products like ChatGPT and Grok. This puts powerful tools in everyone's hands. He shared a personal example of his wife using it to design a full entrepreneurship curriculum for their 10-year-old son to start a video game company. A task that once would have required a specialist can now be done in a couple of hours.
David Sacks agrees that this decentralization is crucial to avoid an Orwellian future. The market is currently hyper-competitive, with several major companies constantly leapfrogging each other in model performance. This reality contradicts the narrative of an imminent AGI, which predicted one model would gain a recursive self-improvement lead and race towards a singularity. So far, no single model has completely pulled away from the others.
The concept of a "virtual AI researcher" is one version of this AGI narrative, suggesting that models will become smart enough to create AI researchers, leading to a singularity. David expresses skepticism about this idea, questioning what a "virtual AI researcher" truly entails. He points out that AI is still not an end-to-end system; it cannot set its own objectives or pivot in the ways a human researcher can. He suspects that creating a true virtual AI researcher might require AGI in the first place, making the argument circular. Such claims often seem more like recruiting tools than genuine predictions.
The strategic importance of open-source AI
David explains that open-source AI is synonymous with freedom. It allows users to run their own models on their own hardware and retain control over their information. This is similar to how about half of the global data center market is on-prem, with enterprises and governments choosing to run their own infrastructure rather than using big cloud providers. He believes consumers will also want this level of control.
The current irony is that the best open-source AI models are Chinese, which is the opposite of what one might expect. David suggests this could be a historical accident or a deliberate strategy. As a strategy, open source is an effective way for a competitor to catch up because it attracts non-aligned developers. Additionally, if a country's business model is hardware manufacturing, it makes strategic sense to commoditize the software layer, which is its complement.
David argues the U.S. should encourage its own open-source initiatives to compete. He worries that the market for closed models, while currently competitive, could consolidate into a monopoly or duopoly, as seen in other tech sectors like search. Open source serves as a crucial alternative, ensuring that users have an option that's more fully within their control. This is especially important given what the Twitter files revealed about government collaboration with social media companies for censorship. Open source provides a safeguard to prevent a similar scenario from unfolding with AI.
While the U.S. leads in top-tier closed models compared to China, David notes that open source is the one area where the U.S. appears to be behind. Ben adds that there is currently an explosion of new model development, including many new open-source efforts, which suggests consolidation is not an immediate concern.
The key pillars for winning the AI race
When considering the AI race against China, winning has more to do with decisions made about the US technology ecosystem than with actions taken against adversaries. David Sacks outlines two key pillars for ensuring the US remains in the lead.
The first pillar is innovation, which is driven by the private sector. The biggest obstacle to this innovation is the current frenzy of overregulation, especially at the state level. A patchwork of 50 different regulatory systems would be incredibly burdensome and a trap for startups. A single federal standard is needed to preserve one of America's greatest advantages: its large, unified national market. This advantage is what allows American companies to scale and then win globally.
In Europe before the EU, it wasn't competitive at all on the Internet because of 30 different regulatory regimes. If you were a European startup, even if you won your country, it didn't get you very far. Meanwhile, your American competitors won the entire American market and are ready to scale up globally. So the fact that we have a single national market is just fundamental to our competitiveness.
The second pillar is infrastructure and energy. The current AI boom requires a massive infrastructure buildout, and the biggest limiting factor is energy. To win the AI race, unnecessary regulations, permitting restrictions, and local opposition (NIMBYism) must be cleared so that AI companies can build data centers and get the power they need to operate.
US export controls are unintentionally strengthening China's tech ecosystem
There is a fundamental cultural divide between Silicon Valley and Washington on how to win a technology race, especially concerning exports. In Silicon Valley, the prevailing belief is that victory comes from building the largest ecosystem. This involves getting the most developers and users on a platform, a concept David Sacks equates with diffusion or usage. The goal is to partner and publish APIs to encourage widespread adoption.
Washington, in contrast, operates with a 'command and control' mentality. It seeks to hoard technology, believing only America should possess it. This philosophy is evident in the Biden administration's diffusion rule, which treats the spread of technology as a negative outcome. This creates a direct culture clash, as what Washington calls a 'bad word'—diffusion—is precisely how Silicon Valley understands winning.
Every country we exclude from our technology alliance, we're basically driving into the arms of China, and it makes their ecosystem bigger. ... All of these rules basically just create pent up demand for Chinese chips and models and it creates a Huawei belt and road.
While selling technology to a competitor like China is a nuanced and complicated issue, selling to the rest of the world should be straightforward. The US should aim to build the largest possible technology alliance. However, current policies are doing the opposite. By preventing allies like Saudi Arabia and the UAE from buying US chips to build AI data centers, the US is excluding them from the American tech stack. This leaves them no choice but to turn to China, which is actively promoting its own models and chips. This strategy is counterproductive, effectively creating a 'Huawei belt and road' and strengthening China's ecosystem. The irony is that the architects of these policies call themselves 'China hawks', yet their actions directly help China by handing them new markets.
Addressing the energy infrastructure bottleneck for AI
Powering the AI revolution presents a significant energy infrastructure challenge. While the Trump administration has taken steps to ease permitting for nuclear power and data centers, progress is hampered by local and state-level "NIMBY" (Not In My Back Yard) issues. Nuclear energy is a long-term solution, likely five to ten years away. In the short term, natural gas is the most viable option for powering new data centers.
However, there's a critical bottleneck: a shortage of gas turbines, with a two- to three-year backlog from the few companies that manufacture them. A more immediate solution lies in optimizing the existing electrical grid. Energy executives have suggested that shedding just 40 hours of peak load per year to backup generators could free up an additional 80 gigawatts of power. This is possible because the grid typically operates at only about 50% capacity, holding the rest in reserve for peak demand days, like the hottest day of summer. Unfortunately, regulations preventing this type of load shedding, such as restrictions on diesel generators, stand in the way.
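The logic of the load-shedding argument can be made concrete with a toy calculation. The total-capacity figure below is an illustrative placeholder, not a number from the episode; only the roughly 50% utilization and the 40 peak hours per year come from the discussion, and the sketch ignores the transmission and regional constraints that determine how much of the headroom (the episode's 80 GW figure) is actually usable.

```python
# Toy model of the grid-headroom argument. The capacity figure is an
# assumed placeholder; utilization (~50%) and peak hours (40/yr) are
# the claims made in the episode.
GRID_CAPACITY_GW = 1_200      # assumed total generating capacity
TYPICAL_UTILIZATION = 0.50    # episode's claim: grid runs near half capacity
PEAK_HOURS_PER_YEAR = 40      # hours a new load would switch to backup power
HOURS_PER_YEAR = 8_760

# Load on a typical (non-peak) day, and the spare capacity above it.
typical_load_gw = GRID_CAPACITY_GW * TYPICAL_UTILIZATION
headroom_gw = GRID_CAPACITY_GW - typical_load_gw

# If backup generators cover the ~40 peak hours, some of that headroom
# becomes usable as firm, year-round power for new loads like data centers.
backup_fraction = PEAK_HOURS_PER_YEAR / HOURS_PER_YEAR

print(f"Typical load: {typical_load_gw:.0f} GW")
print(f"Headroom outside peak hours: {headroom_gw:.0f} GW")
print(f"Share of the year spent on backup power: {backup_fraction:.2%}")
```

The point of the sketch is the ratio in the last line: covering well under 1% of the year's hours with backup generation is what unlocks the otherwise-reserved headroom, which is why the regulatory restrictions on that backup generation are the binding constraint.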
This American approach, focused on enabling innovation, stands in stark contrast to the European Union's strategy. Ben notes that a guiding principle seems to be doing the opposite of the EU. David explains that for Europe, AI leadership means leading in regulation, not innovation. This mindset is exemplified by a Reagan-era observation about government intervention.
If it moves, tax it. If it keeps moving, regulate it. If it stops moving, subsidize it.
David argues that America's path to winning the AI race is not through regulation but by ensuring its companies, the drivers of innovation, are successful. This focus on core American values of innovation and competition is seen as essential for both economic and national security.
AI doomerism is the new political catastrophe narrative
David Sacks suggests that AI doomerism is replacing climate doomerism as a primary political narrative. He argues that the left requires a "central organizing catastrophe" to justify taking over the economy, regulating everything, and controlling the information space. As the climate change narrative fades, partly because its catastrophic predictions have not come true, AI doomerism offers a compelling substitute.
This new narrative shares similarities with the climate one. It leverages pre-existing fears from pop culture, like "The Terminator" and "The Matrix," and is supported by a "patina of pseudoscience" that makes it difficult for the average person to question. The ultimate goal, David claims, is control. Since AI will touch every part of the economy, regulating it provides extensive control. Furthermore, as AI becomes the primary way people get information, controlling it aligns with agendas around censorship and influencing public thought.
This ideology was heavily promoted by the effective altruism movement, which, after the FTX scandal, pivoted to focusing on existential risk from AI, or "X risk." This perspective heavily influenced the Biden administration. Their strategy was to consolidate AI control into two or three American companies, restrict open-source development, and manage the technology to prevent the "genie from escaping the bottle."
Marc: They told us that explicitly... They told us they're going to ban open source. And when we challenged them on the ability to ban open source, because we're talking about mathematical algorithms that are taught in textbooks and YouTube videos and universities, they said, 'Well, during the Cold War, we banned entire areas of physics and put them off limits, and we'll do the same thing for math if we have to.'
David notes that many of the Biden administration's top AI officials went to work at Anthropic, suggesting a close relationship during their time in government. The administration's view was that AI is like nuclear weapons and GPUs are like uranium, requiring centralized, international-style control.
However, this narrative is beginning to collapse. It was based on the flawed assumption that the US was so far ahead of China in AI that it could afford to over-regulate without losing its competitive edge. This was refuted by China's progress, exemplified by the launch of the DeepSeek model. Additionally, Huawei's Cloud Matrix technology demonstrated that even with individually inferior chips, China could network them to compete with Nvidia at the system level. This means if the US restricts chip sales to allies, Huawei is ready to fill the gap. Finally, the dire predictions of catastrophe from models trained at current compute levels have not materialized, undermining the doomsayers' credibility.
The new stablecoin bill signals a new era for crypto
A significant victory for the crypto industry came when the president signed the stablecoin bill, known as the Genius Act, into law. The positive consequences of this legislation have been even greater than anticipated. This is true for the stablecoin industry itself, as a wide range of financial institutions are now embracing stablecoins more than ever before. More broadly, the law sends a signal to the entire crypto industry that a new day is dawning. It suggests the emergence of regulatory frameworks that are responsible but also allow the industry to flourish.
The political path to crypto clarity in the US
In the US, the next major piece of crypto legislation being worked on is the Clarity Act, which is a market structure bill. This bill is considered extremely important because it provides a regulatory framework for the 94% of crypto tokens that are not stablecoins. While the current SEC chairman, Paul Atkins, is viewed favorably, legislation is necessary to provide long-term certainty for founders deciding where to build their companies. Founders need stability that lasts 10 or 20 years, not just for the duration of a single administration's appointees.
The Clarity Act aims to canonize the rules in legislation, providing that long-term stability. The bill has already passed the House with substantial bipartisan support, receiving about 300 votes, including 78 Democrats. It is now in the Senate, where the challenge is to reach the 60 votes needed to overcome the filibuster. Negotiations are underway with about a dozen Democrats, and there is optimism for its passage, drawing a parallel to the Genius Act for stablecoins, which secured 68 votes in the Senate. This legislation is seen as a critical step in moving from what was described as "Biden's war on crypto" to Trump's vision of America as the "crypto capital of the planet," establishing a foundation for the industry to focus on innovation.
President Trump's involvement was highlighted as crucial for passing the Genius Act. His election is credited with shifting the entire conversation around crypto. He was also directly involved in persuading votes and twisting arms to get the bill passed after it had been declared dead multiple times. A similar outcome is anticipated for the Clarity Act, despite the twists and turns of the legislative process.
The Democratic Party's shift toward woke populism
The future of the Democratic Party seems to be heading towards woke socialism. This is where the energy is in their base, and major figures in the party have endorsed candidates like Mamdani. There is little self-policing within the party to distance themselves from this wing.
This shift might be a miscalculated reaction to Donald Trump. The party may feel that establishment politics has failed and that they need a left-wing populism to compete with right-wing populism. However, policies like 'defund the police' and emptying jails are unlikely to work. New York may serve as a 'teaching moment' for the consequences of these policies.
The party does seem to be on the 20% side of every 80/20 issue. You know, opening the border, on the soft on crime stuff, releasing all the repeat offenders, and just sort of this anti-capitalist approach which I think will be disastrous for the economy.
This trend is concerning because it means American politics is no longer just playing within the '40-yard lines.' The potential outcomes of losing elections have become more extreme.
The political and judicial constraints on fixing San Francisco
San Francisco's mayor, Daniel Lurie, is considered the best the city has had in decades. However, his ability to enact change is limited by the city's political structure. San Francisco has a "weak mayor" system, where the Board of Supervisors holds a significant amount of power, which they have consolidated over time. This is compounded by the influence of left-wing judges.
A notable case that highlights these challenges is that of Troy McAllister, a repeat offender who killed two people on New Year's Eve in 2020. Despite a long criminal history and multiple arrests in the year leading up to the incident, he was out of jail due to the zero-bail policies of the former district attorney, Chesa Boudin, who was later recalled. The fact that a left-wing judge is now considering diversion for McAllister, which would mean his release, is described as insane and demonstrates the judicial hurdles the city faces.
These constraints raise questions about whether the current mayor can succeed. While the president has agreed to hold off on sending in the National Guard at the mayor's request, it remains an open question whether the mayor can fix the city's problems without such intervention, given the powerful forces working against him.
