Dwarkesh Podcast

Why evolution designed us to die fast, & how we can change that – Jacob Kimmel

Aug 21, 2025 · 34 min read

What if aging isn't an inevitable decline, but an evolutionary feature designed to make way for the next generation?

Jacob Kimmel, the Co-founder and President of NewLimit, explains why evolution never selected for a long life and may have even selected against it. He proposes a core insight: because longevity was never optimized, it is a surprisingly solvable problem. Kimmel outlines a path to reverse aging by using transcription factors to reprogram our cells back to a youthful state, turning the tide on our own biological programming.

Key takeaways

  • Evolution never optimized for a long lifespan because, for most of human history, external dangers like predators and infections meant very few people lived to an old age anyway. There was simply no strong selective pressure to solve the problem of aging.
  • Major scientific breakthroughs often come from people under 30. This isn't just a cultural trend; it might be biological. Evolution likely selected for peak fluid intelligence around age 25, the time of maximum reproductive potential, with little reason to preserve it into an old age few would reach.
  • Evolution might actively select against extreme longevity. From a "selfish gene" perspective, an older, less fit individual consuming resources is a net negative for the gene pool compared to being replaced by younger, fitter offspring.
  • The fact that evolution didn't solve aging is actually a good sign for modern medicine. It prioritized fighting immediate threats like infectious diseases, meaning longevity is a less optimized, and therefore more solvable, problem for us to tackle.
  • Humans never evolved to produce our own antibiotics because we can't compete in the fast-paced evolutionary arms race. Microbes can mutate rapidly to develop resistance, while a high mutation rate in humans would just lead to cancer.
  • Evolution's secret weapon for innovation is "copy and paste." By duplicating a gene, the original can continue its essential function while the new copy is free to mutate and develop new capabilities without risking the organism's survival.
  • Don't expect a single magic pill to cure aging. It's a complex, multi-faceted problem, which is precisely why evolution never solved it with a simple fix. Future therapies will likely add healthy years incrementally, not stop the clock all at once.
  • Your cells age not because your DNA changes, but because their "epigenome"—the instruction manual telling genes what to do—degrades. The goal of epigenetic reprogramming is to reset this instruction manual, restoring youthful function to old cells.
  • A Nobel Prize-winning discovery proved that just four specific genes, out of 20,000, can be turned on to reprogram an adult cell all the way back to an embryonic state, demonstrating that cellular age is reversible.
  • Why is reversing aging so hard? Unlike creating stem cells, where success is visually obvious and self-amplifying, a rejuvenated cell looks like its old version and doesn't proliferate. This means anti-aging therapies must work with high efficiency from the start.
  • The ultimate drug delivery system won't be a pill or an injection; it will be our own cells. The future of medicine could involve engineering immune cells to act as "living pharmacies" that constantly monitor our bodies and deliver therapies only when and where they're needed.
  • You don't need to fix every cell to feel younger. The body is a highly interconnected system. Just as Ozempic targets a few cells but has widespread effects, rejuvenating a single organ or cell type can have cascading positive benefits throughout the entire body.
  • The effects of epigenetic reprogramming could last for decades from a single dose. The proof is your own body: the reason your tongue doesn't spontaneously turn into a kidney is that these cellular identities are incredibly stable and durable over a lifetime.
  • The best anti-aging tools might not be found in nature. We can already make a common cellular reprogramming process more efficient by using a mutated, "super" version of a natural gene. This suggests the future of medicine lies in creating synthetic genes that are better than anything evolution designed.
  • The hardest part of creating a new medicine isn't designing the drug itself. It's figuring out which gene to target. For most diseases, we simply don't know the right biological lever to pull to cure them.
  • The key to accelerating drug discovery is to build a "virtual cell"—an AI model that can predict how a cell will react to any genetic change. This would allow scientists to test millions of therapeutic ideas on a computer, finding the best ones before ever stepping into a lab.
  • Long-lasting cures create a strange economic problem: insurers won't pay for a 20-year benefit if the patient might switch plans next year. Future payment models might look more like subscriptions, where you "pay for performance," or financing plans that let patients pay for transformative treatments over time.
  • A third of all Medicare costs are spent in a person's final year of life. Proactive medicines that keep people healthier for longer could dramatically lower total healthcare spending by preventing the expensive, intensive care required when people are already critically ill.
  • Big Pharma has largely outsourced its innovation. Most new drugs—around 70%—are actually discovered by small, nimble biotech startups. The pharmaceutical giants then act like venture capitalists, acquiring these promising discoveries and funding them through the final, expensive stages of clinical trials.

Why evolution did not optimize for a longer lifespan

00:00 - 06:28

A key question in biology is why evolution seems to discard humans so easily. If longer, healthier lifespans allow for more offspring and better care for them, one might expect a strong selective pressure for longevity. Jacob Kimmel suggests looking at this through three lenses: positive selection, anti-selection, and the constraints of evolution as an optimizer.

The primary explanation lies in the "baseline hazard rate" during most of human evolution. This rate represents the daily probability of dying from any cause, such as being eaten by a predator, falling off a cliff, or succumbing to an infection. This rate was historically very high. Consequently, even without the effects of aging, few individuals would have reached old age. This meant there was very little selective pressure, or "gradient signal," for evolution to optimize genes for a longer healthspan. The number of people living long enough for aging to be their main limitation was simply too small.
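The hazard-rate argument can be made concrete with a toy calculation. Below is a minimal sketch assuming a constant, independent daily probability of death; the hazard values are illustrative, not historical estimates:

```python
# Toy illustration of the "baseline hazard rate" argument: even a modest
# constant daily risk of death makes survival to old age vanishingly rare,
# leaving almost no selective pressure on late-life genes.
# Hazard values below are made up for illustration, not historical data.

def survival_probability(daily_hazard: float, years: float) -> float:
    """P(still alive after `years`) under a constant, independent daily hazard."""
    days = int(years * 365)
    return (1.0 - daily_hazard) ** days

for hazard in (1e-4, 3e-4, 1e-3):
    p25 = survival_probability(hazard, 25)
    p70 = survival_probability(hazard, 70)
    print(f"daily hazard {hazard:.0e}: P(reach 25) = {p25:.3f}, P(reach 70) = {p70:.2e}")
```

Even at the lowest hazard shown, far fewer individuals reach 70 than 25, so any gene that only matters at 70 receives a tiny fraction of the "gradient signal" that genes acting at 25 receive.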

This same logic can be applied to intelligence. The high hazard rate created a trade-off for brain development. While humans have longer adolescences to develop their larger brains, an overly long childhood would mean a higher chance of dying before reproducing. You cannot be a non-contributing "freeloader" for 50 years; you need to gather resources and have children. This constraint may suggest that intelligence is easier to evolve than we think, but evolution was limited in how much it could optimize for it.

Jacob Kimmel proposes a related hypothesis for why major discoveries, particularly in fields like mathematics, tend to be made by people under 30. This pattern holds true across different centuries and cultures, suggesting a biological rather than societal reason.

I think a much simpler explanation is that for whatever reason, our fluid intelligence is roughly maximized at the time where the population size during human evolution was maximal. If you had to pick an age at which fluid intelligence was selected most strongly for, it's probably around 25 or 30.

Since very few people lived to an old age, there was little evolutionary benefit to preserving high fluid intelligence later in life. From this perspective, human development can be seen as a difficult reinforcement learning problem. With a 20-year horizon from birth to reproduction and a simple reward function of having children, it's remarkable that any optimization signal managed to propagate at all.

Great scientific achievements often occur in a single year

06:29 - 07:39

In many fields, particularly science, great achievements often peak before the age of 30 and can even be concentrated into a single year. This concept is sometimes called the "annus mirabilis," or miraculous year. For example, Isaac Newton developed his groundbreaking theories on optics, gravity, and calculus in his early twenties, during 1665 and 1666.

Another example is Alexander von Humboldt, a once-famous scientist who is less known today. His entire career was based on a single expedition to South America. During this trip, he climbed Mount Chimborazo and observed how various ecological layers repeated across different latitudes and altitudes. This observation led him to formulate a modern understanding of how selective pressure operates on plants. The significance of this one man's work is still visible today. As Jacob Kimmel notes, when you see something named "Humboldt," it's almost always a reference to this one individual, not a common family name.

Aging as an evolutionary length regularizer

07:39 - 09:16

From an evolutionary perspective, there may be factors that actively select against longevity. The concept of kin selection, which takes a "selfish gene" view, suggests that evolution optimizes for the propagation of the genome, not for any single individual. Extending an individual's lifespan without also preventing the decline in their fitness could be a net negative for the gene pool.

If you're able to make a member of the population live longer, but you don't also counteract the decrease in their fitness over time, the number of net calories contributed to the genome as a function of that person's marginal year and their own calorie consumption is less than if you were to allow that individual to die and have two 20-year-olds follow behind them.

An aged population, even if still able to reproduce, could be detrimental to the genome's proliferation. This is because the genome might be better served by optimizing for population turnover and size at maximum fitness. This idea frames aging as a sort of "length regularizer," similar to how AI models are sometimes penalized for an overly long chain of thought. In this analogy, the total calories consumed over a lifetime act as a regularizer on lifespan.
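The selfish-gene accounting in the quote can be sketched as toy arithmetic. Every number below is invented purely to illustrate the direction of the comparison, not to estimate real reproductive values or caloric budgets:

```python
# Toy version of the "calorie regularizer" argument from the selfish-gene
# view: compare the genome's return on calories for one marginal elder-year
# versus letting two 20-year-olds occupy that resource budget instead.
# All numbers are invented; only the direction of the comparison matters.

def contribution_per_calorie(offspring_equivalents: float,
                             calories_consumed: float) -> float:
    """Reproductive value contributed per calorie the genome 'spends'."""
    return offspring_equivalents / calories_consumed

# One marginal year of a 60-year-old: low remaining fitness, same upkeep cost.
elder = contribution_per_calorie(offspring_equivalents=0.05,
                                 calories_consumed=800_000)

# Two 20-year-olds at peak fitness, sharing the same yearly calorie budget.
young_pair = contribution_per_calorie(offspring_equivalents=1.0,
                                      calories_consumed=800_000)

print(elder < young_pair)  # the genome does better replacing than preserving
```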

How evolutionary constraints shape our vulnerability to disease

09:17 - 18:00

Evolution is subject to optimization constraints, similar to challenges in machine learning. While a two-layer neural network can theoretically approximate any function, we often struggle to optimize it effectively. The human genome faces similar hurdles with evolution as its optimization algorithm. The mutation rate limits the "step size" of evolutionary change. If the rate is too high, it can lead to cancer; if it's too low, adaptation is too slow. The number of genetic variants that can be tested in parallel is also limited by population size.

For most of human history, the primary selective pressure was surviving infectious diseases, not extending longevity. So even if there was some minor selective pressure for a longer life, the pressure to build resilience against pathogens was much stronger. This suggests that since evolution has not heavily optimized for longevity, it should be a relatively solvable problem for modern medicine. The success of simple modern medicines, like targeting a single gene, supports this idea.

This raises the question of why humans never evolved to produce their own antibiotics. Most antibiotics are metabolites from other bacteria and fungi, which are locked in a constant evolutionary arms race. This is explained by the Red Queen hypothesis, an idea from Lewis Carroll's Through the Looking-Glass.

The Red Queen is running really fast just to stay in place. So when you look at pathogen-host interactions or competition between bacteria and fungi that are all trying to compete for the same niche, what you find is they're evolving very rapidly in competition with one another. It's an arms race.

Microorganisms are better suited for this arms race. They have massive population sizes and can tolerate high mutation rates. In contrast, complex organisms like humans cannot tolerate high mutation rates, as even one cell with too many mutations could become cancerous. Therefore, even if the human genome stumbled upon an antibiotic, pathogens would likely evolve resistance to it very quickly.

Our genome contains evidence of these past evolutionary battles. The human gene TRIM5alpha, for instance, protects against an endogenous retrovirus that is now extinct. Tracing the gene's history reveals that a previous version in Old World monkeys protected against SIV, the primate cousin of HIV. It appears our genome sacrificed its defense against HIV-like viruses to fight the more immediate threat of this other retrovirus, whose remnants are still scattered throughout our DNA. Today, human cells could be edited to revert TRIM5alpha to its previous form, making them dramatically more resistant to HIV. The same principle suggests that historical antibiotics, to which pathogens have long since lost their resistance, could potentially be effective again.

Gene duplication allows evolution to experiment safely

18:00 - 21:52

Gene duplication is a key mechanism that helps explain how evolution adapts to new environments and pathogens. When a new environmental challenge arises, instead of mutating an existing gene and losing its original function, the genome can simply copy and paste it. This creates two copies of the same gene.

With a duplicate gene, the original copy can continue performing its essential job. The new copy, however, is free to mutate without strong selective pressure. Most mutations might be null or even break the gene, but this doesn't negatively affect the organism's fitness because the original backup is still functioning. This process allows evolution to find solutions that might require multiple edits, even if the intermediate steps are detrimental on their own.

It's really hard for evolution to find a path where you're able to make the first two edits because they're net negative for fitness. Through duplication, you can create a scenario where those first two edits are totally tolerated. They have no effect on fitness. You've got your backup copy.

Evidence for this is found throughout the human genome. Many genes are the result of duplication events, which is sometimes reflected in their names, like having a "gene 1, gene 2, and gene 3." These genes often have very similar sequences but have evolved specialized functions over time. In essence, when evolution faces a new problem, it doesn't start from scratch. It copies the parameters of a gene that is already close to a solution and then fine-tunes it, rather than trying to build a new gene from a random stretch of DNA.
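This valley-crossing argument can be sketched as a toy fitness landscape in which a beneficial variant requires two edits that are individually deleterious. The fitness values are invented for illustration:

```python
# Toy sketch of the duplication argument: a beneficial variant needs three
# edits, but the first two alone break the gene's current function. With a
# single copy the intermediate states are selected against; with a duplicate,
# the unmutated backup keeps fitness intact while the spare copy accumulates
# the edits. Fitness values are invented for illustration.

# Fitness contribution of one gene copy, by number of edits applied to it.
FITNESS_BY_EDITS = {0: 1.0, 1: 0.2, 2: 0.3, 3: 1.5}  # edits 1-2 are net negative

def organism_fitness(copies: list) -> float:
    """Fitness with redundancy: the best-performing copy covers for the rest."""
    return max(FITNESS_BY_EDITS[e] for e in copies)

# Single copy: the path 0 -> 1 -> 2 -> 3 passes through low-fitness valleys.
single_path = [organism_fitness([e]) for e in range(4)]

# Duplicated gene: an unmutated backup keeps fitness at or above 1.0 throughout.
dup_path = [organism_fitness([0, e]) for e in range(4)]

print(single_path)  # valley at edits 1 and 2 blocks the single-copy path
print(dup_path)     # the backup copy shields the valley entirely
```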

Aging is not a monocausal problem

21:52 - 25:26

A key concern with longevity treatments is that they might fix only one part of the aging process, not the entire system. This idea is supported by an evolutionary perspective. Evolution likely didn't optimize against aging because it is not a single, monocausal problem that could be easily fixed.

Jacob Kimmel agrees with this view, stating that there is no single explanation for aging. While his own career focuses on epigenetics, which he believes explains a lot, it isn't the whole story. He argues against the idea of a simple solution, like finding one single "bad gene" to turn off.

It's not that there is some upstream bad gene X and all we have to do is turn that off and suddenly aging is solved. The most likely outcome is that when we eventually develop medicines that prolong health in each of us, it's not going to fix everything all at once. There's not going to be a singular magic pill. But rather you're going to have medicines that add multiple healthy years to your life.

Future treatments will probably add healthy years but will not stop all aspects of decline simultaneously. This complexity helps explain why evolution might not have selected for such partial fixes.

How epigenetic reprogramming could reverse cellular aging

25:26 - 30:58

Jacob Kimmel explains his work on epigenetic reprogramming. He uses transcription factors, which he describes as the orchestra conductors of the genome: they don't perform cellular functions themselves but determine which genes are turned on or off. This layer of regulation, the epigenome, explains how cells with identical genomes, like those in an eyeball and a kidney, perform very different functions. The epigenome defines a cell's function by telling it which genes to use.

With age, the epigenome can degrade. These changes mean cells can't use the right genetic programs at the right times, making them less resilient and more susceptible to disease. The goal is to remodel the epigenome back to its youthful state. This might address many diseases linked to declining cell function. Jacob's team is searching for combinations of transcription factors that can shift the chemical marks on DNA back to how they were in youth.

However, this process is not straightforward. Each transcription factor binds to hundreds or thousands of places in the genome. There is no guarantee that reversing aging is a simple reset. Therefore, a critical part of the process is ensuring the changes are beneficial. They measure this in two ways. First, they check if an old cell's gene expression can be reverted to that of a young cell. Second, and more importantly, they test if the aged cell can once again perform its functions like a young cell. For example, can a liver cell process toxins effectively, or can a T cell respond to pathogens properly?

A scientist who won the Nobel in 2012 for some work he did in about 2007, discovered that you could just take four transcription factors and actually, just by turning on these four genes, turn an adult cell all the way back into a young embryonic stem cell. This is a pretty amazing existence proof that shows that you can reprogram a cell's type and a cell's age simultaneously, just by turning on four genes out of the 20,000 genes in the genome... Just four genes is enough. That's a shocking fact.

The work of Shinya Yamanaka proved that reprogramming a cell's age is possible. The challenge is that his method also changes the cell's identity, which could cause tumors if applied in the body. Therefore, safety is paramount. Researchers must verify that a reprogrammed cell retains its original type. For example, a liver cell must remain a liver cell. They also check for other potential pathologies, such as a cell becoming hyperinflammatory or cancerous, first at the genetic level and then through functional checks in animals.

Why cellular rejuvenation is harder than creating stem cells

30:59 - 38:39

Shinya Yamanaka's discovery of the four transcription factors that can turn a somatic cell into a stem cell was a remarkable feat achieved with a relatively small team and a straightforward method. He started with 24 candidate factors and systematically whittled them down. This raises a question: why can't the same simple, brute-force approach be used for more complex problems, like reversing cellular aging?

Jacob Kimmel explains that Yamanaka's success was tied to two unique features of his specific problem. First, his success criterion was trivial to measure. The starting cells, fibroblasts, look and behave completely differently from the target cells, embryonic stem cells. Yamanaka used a simple reporter system that would make the successfully reprogrammed cells turn blue, making them easy to spot.

The second key feature was that success amplified itself. The newly created stem cells proliferate rapidly. Even though Yamanaka's initial efficiency was incredibly low—around 0.01%—the few successful cells would multiply into large colonies that were easy to find.

You can see with your eyeballs by holding the dish up to the light and looking for opaque little dots on the bottom. You don't need any fancy instruments.
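The arithmetic behind this self-amplification is worth spelling out. The dish size and doubling count below are rough assumptions for illustration, not figures from the conversation:

```python
# Why ~0.01% efficiency was enough for Yamanaka's screen: each rare success
# proliferates into a colony large enough to see by eye. The starting cell
# count and doubling number are rough assumptions for illustration.

starting_fibroblasts = 1_000_000
efficiency = 0.0001            # ~0.01% of cells successfully reprogram

reprogrammed = int(starting_fibroblasts * efficiency)   # rare founding cells
doublings = 20                                          # stem cells divide fast
colony_size = 2 ** doublings                            # cells per colony

print(f"{reprogrammed} founding cells -> {reprogrammed} colonies "
      f"of ~{colony_size:,} cells each")

# A rejuvenated cell, by contrast, neither looks different nor proliferates,
# so a 1-in-10,000 hit rate would stay invisible and therapeutically useless.
```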

The problem of reversing cellular aging lacks these two features. An old cell and a young cell of the same type look very similar; there's no simple visual marker for youth. Determining success requires complex molecular measurements, like single-cell genomics, to build a complete picture of a cell's state. Furthermore, success in rejuvenation doesn't amplify. For a medical therapy to be effective, it needs to work efficiently across many cells simultaneously, not just on a rare one-in-a-million cell that can then be grown in a lab.

This is where modern AI models become essential. The number of possible combinations of transcription factors is astronomical, estimated at around 10^16. It's impossible to test them all. Instead, researchers can test a sparse sample of combinations, use single-cell genomics to measure the results, and then train models to predict the effects of combinations they haven't seen. This allows them to intelligently search the vast space of possibilities and identify the factors most likely to achieve a desired cellular state, like rejuvenation.
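The scale of that search space is easy to reproduce. Here is a sketch assuming a pool of roughly 2,000 human transcription factors (a commonly cited order of magnitude, not a figure from the transcript) and cocktails of up to six factors:

```python
# Rough size of the transcription-factor search space. With ~2,000 candidate
# TFs (an assumed order of magnitude) and cocktails of up to six factors,
# the count of unordered combinations lands in the 10^16-10^17 range --
# far too many to screen experimentally.
from math import comb

N_TFS = 2_000          # assumed pool of candidate transcription factors
MAX_COCKTAIL = 6       # factors delivered together

total = sum(comb(N_TFS, k) for k in range(1, MAX_COCKTAIL + 1))
print(f"~{total:.2e} possible cocktails")
```

Even screening a million cocktails per year would cover a vanishing fraction of this space, which is why a sparse experimental sample plus a predictive model is the only viable search strategy.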

Transcription factors are evolution's levers on the genome

38:39 - 44:41

The process of biological development, where a single cell becomes a complex human, is orchestrated by transcription factors (TFs). These groups of TFs specify the unique fate of each cell. Evolution has optimized this system so that small changes, like swapping one TF in or out of a combination, can produce significant and different effects. This is crucial because mutations are inherently small, and for evolution to be effective, these minor edits at the sequence level must lead to meaningful changes in an organism's phenotype.

Jacob Kimmel suggests that evolution operates similarly to an optimization algorithm, where small, random modifications must produce meaningful steps toward a goal. He questions why the genome uses TFs at all, proposing that they allow a very small number of edits to the genome to create large-scale phenotypic differences. This makes them powerful tools for both evolution and medicine.

If I break a transcription factor, I can delete a whole cell type in the body. If I retarget a transcription factor to different genes, I can dramatically change when cells respond and have hundreds of their downstream effector genes change their behavior in response to the environment... In some ways, they might be like evolution's levers upon the broader architecture of the genome.

Jacob draws an analogy between this biological system and the attention mechanism in AI. He compares TFs to 'queries', the DNA sequences they bind to as 'keys', and the genes they regulate as 'values'. This structure allows for efficient and dramatic changes from minimal edits. This parallel extends to neuroscience, where researchers like Eddie Chang have found evidence of positional encodings in the human brain, similar to those used in large language models. This suggests that a path to understanding the brain may involve engineering AI and then searching for its architectural correlates in biology.
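The query/key/value mapping can be sketched as a toy scaled dot-product attention computation. The vectors are arbitrary numbers, and this is a loose conceptual analogy, not a model of real gene regulation:

```python
# Toy sketch of the attention analogy: transcription factors as queries,
# their DNA binding motifs as keys, and downstream gene outputs as values.
# A loose conceptual mapping with invented numbers, not real biology.
from math import exp, sqrt

def attention(query, keys, values):
    """Scaled dot-product attention over a set of key/value pairs."""
    dim = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / sqrt(dim) for key in keys]
    total = sum(exp(s) for s in scores)
    weights = [exp(s) / total for s in scores]          # softmax "occupancy"
    return sum(w * v for w, v in zip(weights, values))  # weighted gene output

tf_query = [1.0, 0.0]          # one TF's binding preference
motif_keys = [[1.0, 0.0],      # motif strongly matching this TF
              [0.0, 1.0]]      # motif matching a different TF
gene_values = [5.0, 1.0]       # downstream expression effects

print(round(attention(tf_query, motif_keys, gene_values), 3))
# Retargeting the query (one small edit) reshuffles the weights across every
# downstream value at once: a large phenotypic change from a minimal edit.
```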

How HIV hijacks human transcription factors

44:41 - 47:20

A key question arises regarding transcription factors: if they are so effective for evolution, why haven't pathogens used them to attack their hosts, and why aren't more drugs designed to target them? Jacob Kimmel explains that some pathogens do, in fact, utilize transcription factors, with HIV being a prime example.

HIV is a retrovirus that integrates its genetic material into the host's cells. It needs a way to control when it replicates and when it lies dormant to hide from the immune system. This ability to go latent makes it incredibly difficult to cure completely. Even with effective drugs, a few dormant copies can reactivate later. This is known as the latent reservoir. Other viruses like Hepatitis B and C can exhibit similar latent behavior.

This is why HIV is so pernicious. You can kill every single cell in the body that's actively making HIV with a really good drug, but then a few of them that have lingered and hunkered down just turn back on.

HIV accomplishes this using a protein it encodes called Tat. This protein activates a host transcription factor called NF-kappaB, a master regulator in immune cells. In simple terms, NF-kappaB increases the inflammatory response of cells, making them more likely to attack pathogens. HIV hijacks this system, using the activation of NF-kappaB to drive its own replication. This same circuitry allows HIV to remain dormant in cells where the host's transcription machinery is less active, thereby maintaining its persistent latent reservoir.

How new medicines can directly target transcription factors

47:20 - 50:19

Many existing medicines work by indirectly influencing the activity of transcription factors (TFs). Cellular signaling pathways often begin at receptors on a cell's surface and cascade inward, ultimately telling the cell which genes to use. Transcription factors are the final effectors in these cascades. Because of this, drugs that target upstream elements, like a receptor, are essentially taking a roundabout path to affect TFs.

We're kind of taking these crazy bank shots because we can't hit the TFs directly.

The reason for this indirect approach lies in the physical limitations of traditional drugs. Small molecule drugs are tiny enough to get inside a cell and reach the nucleus where TFs are located. However, they are too small to disrupt the large surface area where a transcription factor binds to DNA. On the other hand, larger drugs like recombinant proteins or antibodies are big enough to have an effect but are too large to pass through the cell membrane and get inside to the TF in the first place.

This is changing with the advent of new nucleic acid and genetic medicines. Jacob Kimmel explains that these new tools can overcome the delivery problem. For example, RNAs can be wrapped in a lipid nanoparticle, a sort of fat bubble, that can fuse with a cell and deliver its contents. Once inside, an mRNA can be used to produce a copy of a transcription factor, which then travels to the nucleus to exert its effect. These new methods finally allow transcription factors to be treated as first-class drug targets, rather than just an indirect consequence of other therapies.

Engineered cells could be the future solution for drug delivery

50:19 - 56:37

A central challenge in de-aging is delivery: how to get a therapeutic agent, like a transcription factor, to every single cell in the body. Jacob Kimmel explains that this complex problem can be simplified to delivering nucleic acids, as they can effectively drug almost anything in the genome. Today, there are two primary methods for this.

The first is lipid nanoparticles (LNPs), which are essentially "fat bubbles" that were used in the COVID vaccines. They naturally get absorbed by tissues that take up fat, like the liver, and can be modified to target other cell types. The second method involves viral vectors, such as AAVs. Jacob describes these as a kind of "very small delivery truck" that use the natural ability of viruses to enter cells. However, both methods have limitations. You can use sequence engineering to narrow down the target cells after a broad delivery, but you cannot use it to get the nucleic acid into cells it didn't reach in the first place.

Jacob shares a controversial opinion: by the year 2100, neither of these methods will likely be the primary way medicines are delivered. He believes the ultimate solution will mimic our own body's delivery system: the immune system. He explains how T-cells and B-cells are engineered by evolution to travel almost anywhere, identify specific signals, and then release a payload.

The way we will be delivering these nucleic acid payloads is actually by engineering cells to do it... Those cells might actually live with you. You probably will get engrafted with them, and they might persist with you for many years. They deliver the medicine only when the environment within your body actually dictates that you need it.

This approach has a major advantage. Unlike the small capacity of AAVs, cells have massive genomes with billions of base pairs, providing a large canvas to encode complex logic and infrastructure. This would allow for a more ornate and responsive therapeutic system integrated within the body.

The knock-on benefits of targeted cellular therapies

56:37 - 1:03:04

Most cells in the body are surveilled by the immune system, but there are a few "immune privileged" areas, like the brain and eyeball, where the immune system doesn't typically go. Gene therapies using viruses often target these areas to avoid an immune response, but this limits them to a small set of diseases. This limitation creates an opportunity for cell-based therapies to address the diseases that viral therapies cannot.

A key question is what happens if therapies can only target certain parts of the body. Will this lead to a strange state where some organs are young while others age normally? According to Jacob Kimmel, this is unlikely because the body is an incredibly interconnected system.

If you're able to rescue function even in one cell type, in one tissue, you often have knock on benefits in many places that you didn't initially anticipate.

Jacob points to several examples. When older people receive liver transplants from younger donors, they experience broader health benefits and better overall survival, not just improved liver function. Similarly, bone marrow transplants have been known to cure other, unanticipated diseases due to their systemic effects. The inverse is also true; breaking a single gene in a specific subset of T-cells in mice can dramatically shorten their lifespan, showing how a small change can have a massive impact.

Ozempic is a modern example of this principle. It is a hormone that acts on a small number of cells, yet its benefits extend beyond weight loss to cardiovascular health, addictive behaviors, and possibly even preventing neurodegeneration.

If someone told you I'm going to find a single molecule and I'm going to drug it and it's not only going to have benefits for weight loss, but also for cardiovascular disease, also possibly for addictive behavior and maybe even preventing neurodegeneration, you would have told them they were crazy.

This demonstrates that targeting even a small number of cells can have widespread health benefits. Therefore, even if delivery methods remain limited, reprogramming individual cell types or tissues could still add decades of healthy life due to the body's interconnected nature.

Transcription factors are well-suited for mRNA delivery

1:03:04 - 1:04:36

The number of transcription factors needed for efficacy is quite small, often between one and five. This is a manageable number that can be encapsulated within current mRNA medicines. In fact, some existing medicines already deliver many more unique transcripts. For example, combination vaccines for flu and COVID deliver around 20 different transcripts at once.

When you think about that already as a medicine that's being injected into people in trials, the idea of delivering just a few transcription factors is seemingly quotidian.

Another advantage is found in the body's natural biology. Transcription factors are expressed at incredibly low levels in the genome compared to other genes. This means that only a small number of copies need to be introduced into a cell to have a beneficial effect. Consequently, even low doses of these medicines, well within the range used for over a decade, can be highly effective. This suggests that neither the payload size nor the required dosage will be limiting factors for this therapeutic approach.

The long-term durability of epigenetic reprogramming

1:04:36 - 1:06:22

In principle, epigenetic reprogramming could be administered as a one-time dose with incredibly long-lasting effects. The proof that these epigenetic events can persist for decades already exists within our bodies. Our cells maintain their specific identities over a lifetime; for example, a tongue cell doesn't spontaneously transform into a kidney cell. This same mechanism is at play in long-living creatures like bowhead whales.

The existence proof that those epigenetic reprogramming events can last decades is that my tongue doesn't spontaneously turn into a kidney.

Research further supports this durability. Jacob Kimmel highlights the work of scientists like Luke Gilbert, who have made targeted epigenetic edits that persisted while cells divided over 400 times across several years in a lab, demonstrating the resilience of these epigenetic marks even through repeated division and stress. Other studies in monkeys have shown similar editors can have effects lasting at least a couple of years.

While the upper limit could be a single dose that lasts for decades, current data shows positive effects persisting for several weeks. This suggests a more immediate and practical application could involve treatments every month or every few months. This would still provide dramatic, long-term benefits without requiring an impractical daily IV.

Synthetic transcription factors may be superior for cell reprogramming

1:06:22 - 1:08:10

Evolution has likely provided a reasonable set of transcription factors (TFs) for navigating the states human cells might occupy. The goal of reprogramming is to make an old cell look young, a state that arises in development, not to create a completely novel "Frankenstein cell." However, there is no guarantee that aging follows the same pathways used during development. This opens the door to the idea that the ideal reprogramming medicine might not be a simple composition of natural TFs.

The solution could involve TFs from other organisms or even entirely synthetic ones. One example is "SuperSox," a mutated version of the SOX2 gene that makes reprogramming somatic cells into pluripotent stem cells more efficient than the standard Yamanaka factors. This process, known as iPSC reprogramming, highlights a key insight.

iPSC reprogramming never happens in nature, so there's no reason to necessarily believe that the natural TFs are optimal.

Even simple optimizations, like mutagenizing one of the four known Yamanaka factors, can dramatically improve outcomes. This is a strong signal that there is significant room for improvement. The future of cell reprogramming may not lie in combining natural TFs, but in developing synthetic genes that have never existed before.

Eroom's Law and the diminishing returns of biopharma

1:08:10 - 1:11:03

The sagging of skin as we age is likely not just a cellular issue but a structural one. The protein responsible for skin elasticity is called elastin. It forms long, polymerized fibers during development that hold the skin in place. As people age, their bodies can still produce the individual units of elastin, but for unknown reasons, these units fail to polymerize into new, long fibers. This means the existing fibers cannot be effectively repaired or replaced.

Jacob Kimmel suggests a potential solution lies in programming cells into what he calls "extra-physiological" states. This would involve engineering a cell to reinvigorate the polymerization process, essentially enabling it to repair damaged elastin fibers. Since this process occurs naturally during development, it is physically possible. However, it might require creating a cellular state de novo, one that the human genome does not explicitly code for in adulthood.

This kind of challenge is reflected in a broader trend in the biopharmaceutical industry known as Eroom's Law. Coined by Jack Scannell, it is the inverse of Moore's Law. While Moore's Law describes the exponential increase in computing power, Eroom's Law describes the consistent decrease in new medicines invented per billion dollars invested. This trend of diminishing returns has persisted since the 1950s.

Interestingly, this mirrors the scaling laws in machine learning, where more input also yields diminishing outputs. However, the economic outcomes are opposite. In AI, this trend has driven more investment toward creating powerful, general-purpose models. In biotech, it has suppressed valuations and excitement because investments are spread across many different bespoke drugs rather than a single platform.
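Scannell and colleagues observed that inflation-adjusted drug output per R&D dollar has roughly halved about every nine years since 1950. A minimal sketch of that decline, using an illustrative (not historical) starting rate:

```python
# Illustrative sketch of Eroom's Law: new drugs approved per billion
# (inflation-adjusted) R&D dollars, assuming the roughly nine-year halving
# time observed by Scannell et al. The base_rate here is made up for
# illustration, not a historical figure.

def drugs_per_billion(year, base_year=1950, base_rate=30.0, halving_years=9.0):
    """Approximate new-drug output per $1B of R&D in a given year."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

for y in (1950, 1977, 2004):
    print(y, round(drugs_per_billion(y), 2))
```

The exponential form makes the contrast with Moore's Law explicit: the same functional shape, but with the sign of progress reversed.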

Building a general purpose model for biology

1:11:03 - 1:19:21

A key difference between scaling in machine learning and drug discovery is the nature of their returns. In ML, returns are expected to increase super-exponentially, with the ultimate goal of AGI holding immense value. In contrast, drug discovery faces what is known as Eroom's Law, where the cost of developing new drugs rises over time without a corresponding increase in revenue potential. While ML models become more general and valuable with larger investments, a successful drug for one disease doesn't inherently make it easier to cure another.

Biotech firms often develop expertise in creating molecules to target specific genes, but this isn't the primary bottleneck. The biggest challenge is identifying which gene to target in the first place to effectively treat a disease.

Figuring out what to make an antibody to target is the hard thing about drug discovery. What gene do I intervene upon in order to actually treat a disease in a given patient? Most of the time we just don't know.

Jacob Kimmel argues that the list of high-conviction biological targets that we simply lack the molecular tools to interact with is relatively small. The far greater problem is the vast, unexplored space of potential targets for the majority of human diseases. As proof, he points to animal models. In animals, scientists have 'superpowers' like transgenic systems to engineer genomes and test biological hypotheses directly. If the only obstacle were creating the right 'molecular hook,' we would have already cured many more diseases in these models. The fact that we haven't suggests the core problem is identifying the right biological target.

To overcome this and introduce compounding returns to biotech, the field needs a general purpose model. One popular concept is the 'virtual cell.' This approach involves measuring a cell's state, perturbing it by altering genes, and then measuring the resulting changes. The goal is to collect enough data to train a model that can predict how a cell will respond to any combination of genetic changes. This would enable scientists to search 'in silico' for interventions that could revert a diseased cell to a healthy state, bypassing the need to physically test an astronomical number of combinations.
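The measure-perturb-predict loop can be sketched as a toy search problem. Everything below is invented for illustration: cell states are short expression vectors, and each perturbation is modeled (unrealistically) as a simple additive shift that a trained model would supply in practice.

```python
# Toy sketch of 'in silico' search with a virtual-cell model. All numbers
# are hypothetical; a real model would predict these effects from data.
import itertools

diseased = [2.0, 0.5, 1.0]   # made-up expression profile of a diseased cell
healthy  = [1.0, 1.0, 1.0]   # target state we want to revert to

# Predicted per-gene effect of each perturbation (learned, in reality).
effects = {"TF_A": [-1.0, 0.0, 0.0],
           "TF_B": [0.0, 0.5, 0.0],
           "TF_C": [0.0, 0.0, -0.2]}

def predict(state, combo):
    """Apply the model's predicted shifts for a combination of perturbations."""
    out = list(state)
    for p in combo:
        out = [s + d for s, d in zip(out, effects[p])]
    return out

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Score every combination computationally instead of running each experiment.
best = min((c for r in range(len(effects) + 1)
            for c in itertools.combinations(effects, r)),
           key=lambda c: distance(predict(diseased, c), healthy))
print(best)  # the combination predicted to move the cell closest to healthy
```

The point of the sketch is the search structure: once a model can score any combination, the combinatorial explosion moves from the wet lab into a loop like this one.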

Applying LLM principles to overcome past biological hurdles

1:19:21 - 1:26:57

A general-purpose model of a cell can be trained by predicting its entire state, not just whether it appears old. Jacob Kimmel explains they do both. First, they train a base model to predict the effect transcription factors will have on every gene in a cell. This model learns the objective reality of cellular mechanics, similar to the pre-training phase of a Large Language Model (LLM) which builds a general representation of the world.

On top of this base model, they can add specific "value judgments." For example, one layer can be trained to predict if a cell's state looks younger. This allows them to select for interventions that promote youthfulness. This approach is highly flexible and could be applied to other goals, such as making T-cells less inflammatory for autoimmune diseases or improving liver cell function for metabolic syndrome.

This is so similar to LLMs, where you first have imitation learning with pre-training that builds a general-purpose representation of the world, and then you do RL on a particular objective in math or coding or whatever you care about. And you are describing an extremely similar procedure.

This parallel to LLMs is a source of optimism. While Jacob clarifies they don't use reinforcement learning (RL) yet, the conceptual framework is similar. This base model, which predicts the objective outcome of genetic perturbations, could enable compounding returns in drug discovery. The challenge then becomes defining the desired cellular states to engineer.
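The two-stage structure described above can be sketched in a few lines. Nothing here reflects NewLimit's actual architecture or data: the "base model" is a stand-in that applies made-up expression shifts, and the "value head" is just a distance to an invented reference profile.

```python
# Hypothetical sketch: an objective base model predicting a cell's full
# post-perturbation state, plus a swappable "value judgment" head scoring
# that state against a goal (here, youthfulness). All numbers are invented.

def base_model(state, tf_combo):
    """Stand-in for a learned model: predict the full expression state
    after applying a transcription-factor combination."""
    shifts = {"TF1": [0.3, -0.1], "TF2": [-0.2, 0.4]}   # made-up effects
    out = list(state)
    for tf in tf_combo:
        out = [g + d for g, d in zip(out, shifts[tf])]
    return out

def youth_score(state, young_profile=(1.0, 1.0)):
    """Value head: higher when the predicted state sits closer to a
    reference 'young' expression profile."""
    return -sum((g - y) ** 2 for g, y in zip(state, young_profile))

old_cell = [0.7, 0.7]
candidates = [(), ("TF1",), ("TF2",), ("TF1", "TF2")]
best = max(candidates, key=lambda c: youth_score(base_model(old_cell, c)))
print(best)
```

The separation matters: the base model stays fixed while the value head is swapped out, so the same machinery could instead score "less inflammatory T-cell" or "better-functioning liver cell."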

This approach builds on earlier technologies like Perturb-seq, which emerged around 2016. Perturb-seq allows researchers to introduce a genetic change to a cell and then read out its effects. However, progress was slow due to technical hurdles. Early on, the cost of sequencing a single cell was measured in dollars; now it's in fractions of a cent. Furthermore, the process of labeling the genetic changes was unreliable. Jacob explains that in the early days, researchers could only identify the specific perturbation in about 50% of cells. This is like running an experiment where every other test tube is mislabeled, making it impossible to draw meaningful conclusions.

Imagine you hired someone who labels every other tube wrong. When you then collect data from your experiment, you basically have no idea what happened, because you've just randomized all your data labels. You wouldn't do much science and you wouldn't get very far that way.

These combined inefficiencies meant that only a very small number of cells yielded usable data. Improvements in cost and labeling accuracy are now making this technology ready for prime time.

Scaling biology and rethinking pharma's business models

1:26:57 - 1:36:00

Recent technological improvements now allow biological experiments to be run at scale. In the past, altering more than one gene at a time was difficult. Jacob Kimmel explains the statistical challenge: if you could only correctly detect a single gene change 50% of the time, the chance of correctly identifying five changes in one cell would be minuscule, so most data would be mislabeled. These technical issues are now largely solved. Experiments involving millions of cells can be conducted in a single afternoon, a scale that would once have stood as a proof of concept for the technology's creators.
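The statistical challenge is simple geometric decay: with independent labels, the probability that every label in a multi-gene perturbation is correct is the per-gene accuracy raised to the number of genes. The 0.98 figure below is an illustrative modern accuracy, not a number from the conversation.

```python
# Why multi-gene perturbation experiments were once hopeless: label accuracy
# compounds multiplicatively across genes (assuming independent errors).

def all_labels_correct(p_single, n_genes):
    """Probability that every one of n independent perturbation labels
    in a cell is detected correctly."""
    return p_single ** n_genes

print(all_labels_correct(0.5, 1))   # 0.5  -- the early single-gene case
print(all_labels_correct(0.5, 5))   # 0.03125 -- only ~3% of cells usable
print(all_labels_correct(0.98, 5))  # with high per-gene accuracy, most cells usable
```

At 50% per-gene accuracy, a five-gene experiment yields correctly labeled data in only about 3% of cells, which is why the improvements in labeling reliability mattered as much as the drop in sequencing cost.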

This progress parallels the development of large language models (LLMs). The goal is to create a 'GPT-3 equivalent of the virtual cell,' a foundational model that other companies could use. However, the biological data needed for such a model isn't widely available like the text on the internet. Jacob describes the current state of biological data as being like the web in the early 1980s. Because of this, his company, NewLimit, is vertically integrated. They are generating their own proprietary data for a very specific problem: predicting how transcription factors affect particular cell types. This is like building a high-quality, niche 'Wikipedia' to train a specialized model, rather than a general one.

We're carving out a region of this massive parameter space and saying if we can learn the distribution of effects even just in this small region, it's going to be really effective for us and we can make really amazing products.

As these advanced therapies are developed, the pharmaceutical industry faces new business challenges, particularly around intellectual property and payment models. While a gray market for molecules might emerge, Jacob believes most people will prefer a legitimate prescription from their doctor over ordering a 'sketchy vial' from an unverified source. The bigger issue is reimbursement for long-term, durable medicines. In the U.S., people frequently change insurance providers.

The average person churns insurers every three to four years. That means that if, for instance, you had a medicine which dramatically reduced the cost of all other healthcare incidents, but it happened exactly five years after you got dosed with it, no insurer is technically economically incentivized to cover that.

A potential solution is a 'pay for performance' model, where the cost of a drug is paid over time, contingent on it continuing to work. This would align incentives, as each insurer would only pay for the period a patient is on their plan. Another trend is the move toward a direct-to-consumer model for medicines that promote general health and have noticeable daily benefits. An example is Lilly Direct, which allows patients to get prescriptions filled directly from the manufacturer. This model could enable financing options for long-term treatments, similar to how people finance other large purchases.
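The incentive alignment in a pay-for-performance model is easy to see with a worked example. The figures below are invented: a $200k one-time therapy amortized over a ten-year horizon, with each insurer billed only for the years the patient is on its plan and the therapy keeps working.

```python
# Toy 'pay for performance' schedule for a durable one-time medicine.
# All dollar amounts and horizons are hypothetical illustrations.

def annual_payment(total_price, horizon_years):
    """Spread the therapy's price evenly across its benefit horizon."""
    return total_price / horizon_years

def insurer_liability(total_price, horizon_years, years_covered):
    """What one insurer owes if the patient churns after `years_covered`,
    capped at the full amortization horizon."""
    return annual_payment(total_price, horizon_years) * min(years_covered,
                                                            horizon_years)

# A patient who switches insurers after 4 years leaves that insurer paying
# only for the benefit delivered while the patient was on its plan.
print(insurer_liability(200_000, 10, 4))   # 80000.0
```

Under this structure, the insurer covering years one through four pays for exactly those years of benefit, so the three-to-four-year churn cycle no longer makes durable medicines uneconomical to cover.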

How preventative medicine could lower systemic healthcare costs

1:36:00 - 1:40:19

Healthcare currently accounts for 20% of the US GDP, a figure that is rapidly growing. The vast majority of this spending is not on developing new treatments but on administering existing ones. Even if a new therapy costs only tens of dollars to manufacture per patient, the total cost to the healthcare system can be tens of thousands of dollars. This includes doctor visits, scans, prescriptions, and follow-up monitoring.

Jacob Kimmel notes that drugs themselves are only about 7% of total healthcare spending. The core question is whether new technologies, like de-aging therapies, will increase or decrease this overall spending. Jacob argues they will likely decrease it by preventing the downstream ailments of aging. He proposes shifting the focus to "medicines for everyone," which keep people healthier for longer, rather than only treating them once they are already very sick. This approach could help people avoid the intensive, expensive parts of the healthcare system, like inpatient hospital visits.

A third of all Medicare costs are spent in the final year of life. Which is shocking when you realize that the average person on Medicare is probably a decade plus covered by it. And so there's an incredible concentration of the actual expenses once someone is already terribly sick.

By preventing severe illness, these new medicines would shift the economic burden from the administrative system to the pharmaceutical system. Jacob points out that the pharmaceutical sector is the only part of healthcare where technology has driven efficiency. Thanks to the "grand social contract" of intellectual property, drugs eventually go generic, making them more affordable over time. As a result, pharmaceuticals offer more benefit per dollar today than ever before, even as other healthcare costs have skyrocketed.

How big pharma externalizes R&D to biotech startups

1:40:20 - 1:44:36

Many large pharmaceutical companies are not building general models for drug discovery in-house. Jacob Kimmel explains that the industry has largely bifurcated. Smaller biotech companies take on most of the early, pioneering discovery work, while large pharmaceutical firms often act like venture capital firms.

These larger companies have externalized much of their R&D. They have 'external innovation' divisions that function like corporate venture arms, partnering with the biotech ecosystem. According to Jacob, this model is quite effective. He notes that approximately 70% of molecules approved in a given year originate from small biotechs, even though the majority of R&D spending appears on big pharma's balance sheets. This is because big pharma typically funds and runs the expensive later-stage clinical trials.

The market structure is like you have a bunch of biotechs which are kind of like the startups in your ecosystem and then they're working with something like an oligopsony of pharmas where it's a limited number of buyers for this particular type of product.

This creates a liquid market for therapeutic assets that are ready for Phase 1 or Phase 2 trials, which is the point where partnerships between biotechs and pharmas often form. However, not every company follows this trend. For example, Roche acquired Genentech and has a significant internal research group led by Aviv Regev, a pioneer in the field.