Core Memory

Is The Era Of AI-Designed Drugs Actually Here? - EP 50 Josh Meier and Jack Dent

Dec 24, 2025 · 26 min read
Josh Meier and Jack Dent, founders of the startup Chai Discovery, discuss the new era of AI-powered drug design.

They explain how the same technology behind language models can now decode biology, generating entirely new and promising drug candidates in just days instead of years.

This leap forward could lead to safer, more precise medicines and turn previously "undruggable" diseases into solvable engineering challenges.

Key takeaways

  • The same language models that comprehend human communication can be applied to biology by treating components like proteins and DNA as a language to be decoded.
  • The application of language models to protein folding wasn't a random guess; it was based on the discovery that the math used in older physics-based protein models is fundamentally the same as that in LLMs.
  • A compelling startup strategy is to focus on high technical risk but low market risk, which allows a team to concentrate on building unprecedented capabilities rather than just commercializing existing technology.
  • The biotech industry is a compelling field for software engineers because it is a massive market on the verge of transformation but currently lacks a strong, rigorous engineering culture.
  • Just as top talent was once criticized for focusing on ad tech, a similar argument can be made today about the concentration of talent on chatbots, while opportunities for major scientific discovery remain.
  • While AI drug discovery has been hyped for years, the technology has recently made a fundamental leap. It can now generate entirely new compounds from scratch, not just optimize existing ones.
  • In drug discovery, a 15% success rate for an AI model is a massive leap forward from traditional methods that had near-zero success rates. This level of accuracy is already incredibly useful and commercially viable.
  • AI models can design molecules with superhuman accuracy, a fundamentally different form of intelligence that humans cannot replicate even with unlimited time.
  • Drug discovery faces an 'exploit vs. explore' dilemma: companies could release new medicines today, but holding off for a year could result in molecules that are ten times more effective as the technology rapidly improves.
  • AI is becoming so efficient at designing new drug molecules that the biggest challenge will soon be the bottleneck in clinical trials needed to test them.
  • Today's medicines often have numerous unintended side effects. AI enables the design of therapies with atomic-level precision, which could lead to far more effective and safer drugs.
  • AI models can accelerate progress by designing their own experiments, creating a powerful self-improving feedback loop where the results are fed back into the model.
  • Progress in AI drug discovery can be so rapid that a model's capabilities may outpace researchers' ability to test for them. An AI was found to be generating viable drug candidates before its creators even thought to check for those properties.
  • AI models are making novel scientific discoveries by creating molecules vastly different from any that exist, exploring creative paths that humans would not.
  • AI is turning 'undruggable' medical problems into solvable engineering challenges by designing molecules that can be tested directly on patient cells, bypassing traditional discovery hurdles.
  • A profound capability of new AI models is their ability to assess their own confidence. They can essentially predict which lab experiments are likely to work, allowing researchers to focus their efforts on the most promising paths.
  • AI in biotech represents a fundamental platform shift, similar to the web or mobile, that will create new S-curves of innovation and unlock new types of medicines.
  • Even with a machine that can design perfect drugs, the majority of the cost lies in clinical trials and manufacturing. This resource-intensive process means a commercial incentive is still needed to fund the pipeline for new medicines.


Applying language models to understand cellular communication

00:00 - 00:23

The speaker always wanted to find a way to bring programming and biochemistry together. Language modeling appeared to be the solution. While at OpenAI, observing the results of language models sparked a key insight.

If a computer can understand how you and I communicate, why can't it understand how cells communicate? Proteins, DNA. What if we just feed all of that into a language model?

The central idea was to apply the same principles used for understanding human language to the language of biology, treating components like proteins and DNA as data for a language model to interpret.

The early paths of two founders through Harvard, Stripe, and OpenAI

03:05 - 05:54

Josh's interest in biology started in an unusual place: a stem cell lab at his public magnet high school in New Jersey. Though he had been coding since he was a kid and attended the school for its math team, he quickly became captivated by the lab.

I learned that we had a stem cell lab in our school. And I was like, how am I not spending all my time here? So I got hooked on biology early on.

His professional journey began early as well. After first applying to Stripe out of high school, he eventually interned there at age 18 after his freshman year of college. He considered dropping out for a few years but ultimately returned to the company after graduation.

Josh and Jack met on their first day of classes as freshmen at Harvard. Jack recalls they were in an accelerated, and intentionally difficult, computer science course together.

We met in the first day in one of those classes and then spent a lot of late nights in the library together getting through those problem sets after that.

After college, Jack went to work at Stripe, while Josh joined OpenAI in its early days. He was there when GPT-1 and GPT-2 were developed, giving him a front-row seat to the rapid advancements in language models. Initially hired to work on reinforcement learning, he witnessed the shift in focus as language models became the main buzz.

Applying language models to the language of biology

06:52 - 08:29

Josh Meier, despite having a strong computer science background, formally majored in chemistry. He always sought a way to merge his passions for programming and biochemistry. The solution appeared in the form of language models. While at OpenAI, observing the progress of these models, he had a key insight.

If a computer can understand how you and I communicate, why can't it understand how cells communicate? Proteins, DNA. What if we just feed all of that into a language model? Something interesting is bound to happen.

During his time at OpenAI, when it was still a nonprofit, the environment was fluid, with people coming in and out for research projects. The focus was on the long-term journey toward AGI. Josh worked on reinforcement learning and fine-tuning language models built on the GPT-1 backbone. He notes that this approach didn't work very well at the time but is the foundation for RLHF (Reinforcement Learning from Human Feedback), which is highly effective with models like GPT-4. Ultimately, OpenAI's intense focus on AGI led him to move to Facebook.

How early AI research connected language models to protein folding

08:29 - 13:07

Josh explains that at Facebook's AI research arm, FAIR, a group called the Garage Team was focused on applying AI to significant challenges, including science. This created an open environment for his ideas about protein folding. The initial approach wasn't entirely obvious; it was more experimental. Josh describes it as taking large biological datasets and LLMs and essentially putting them together to see what would happen in a research lab setting.

However, there was a deeper theoretical connection. For the past 20 years, bioinformatics has relied on finding similar protein sequences and fitting a simple machine-learning model borrowed from physics, called a Potts model. It turns out the mathematics behind these Potts models is fundamentally the same as the math used in modern language models. This provided a strong foundation for exploring the idea.
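
The Potts-model connection can be made concrete with a toy energy function: per-position "fields" capture residue preferences, and pairwise "couplings" capture co-evolution between positions, the same pairwise statistics a transformer can learn implicitly. All parameters and sequences below are illustrative, not data from the episode:

```python
import itertools

def potts_energy(seq, h, J):
    """Energy of a sequence under a Potts model:
    E(s) = -sum_i h[i][s_i] - sum_{i<j} J[(i, j)][(s_i, s_j)].
    Lower energy means the sequence is more favorable under the model."""
    fields = sum(h[i][aa] for i, aa in enumerate(seq))
    couplings = sum(
        J.get((i, j), {}).get((seq[i], seq[j]), 0.0)
        for i, j in itertools.combinations(range(len(seq)), 2)
    )
    return -(fields + couplings)

# Toy parameters for a 2-residue "protein".
h = [{"A": 1.0, "G": 0.2}, {"L": 0.8, "P": 0.1}]
J = {(0, 1): {("A", "L"): 0.5}}  # positions 0 and 1 co-vary

print(potts_energy("AL", h, J))  # -2.3: favored by both fields and coupling
print(potts_energy("GP", h, J))  # -0.3: weakly favored, no coupling bonus
```

A language model trained on protein sequences ends up encoding the same kind of position-wise and pairwise preferences, just with far more parameters and context.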

This work happened concurrently with other major developments. DeepMind's AlphaFold 1 was released just a few weeks before Josh started at Facebook. Jack mentions that he and Josh would meet for dinner every few months, where Josh would share updates on his work. Jack even shares a story from when Josh was leaving OpenAI.

I got a Facebook message, because that's of course, how people communicated back then, from Sam Altman saying, 'Hey, this guy Josh is leaving OpenAI. I want to do some proteomics spin out. Should I work with him?' And I responded saying, 'Yeah, Josh is amazing, but I'm pretty sure that he's pretty set on this Facebook thing. Good luck.'

Josh agreed it was too early to start a business then, believing another five years of research were necessary for the technology to be commercially viable. The field has only recently transitioned from an advanced research lab project to something that can fundamentally change how science is conducted. This all took place around January 2019, when Josh joined Facebook. When they later started their company, Chai, they were picking up the thread from five years prior.

Josh's journey from Facebook's protein models to a high-risk AI startup

13:08 - 18:50

Upon arriving at Facebook, Josh Meier immediately began building language models for proteins, inspired by recent advances like BERT, AlphaFold, and GPT. Within three months, his team produced their first result: ESM-1, the first transformer language model for protein sequences. Initially, they tested it with basic tasks, such as determining whether it could differentiate between positively and negatively charged amino acids or predict the impact of a mutation on a protein's activity. The model operated on a "fill in the blank" principle: an amino acid was masked out, and the model predicted what should be there.

You just leave one out and then you ask the model, what would have been the impact if I used this mutation versus this one? And turns out that's a pretty good proxy of what ends up happening in the lab.
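That "leave one out" scoring can be sketched in a few lines: compare the model's log-probability for the mutant residue against the wild-type residue at the masked position. The per-position probabilities below are toy numbers standing in for a real protein language model's output (a real workflow would query something like ESM):

```python
import math

def mutation_score(logprobs, pos, wt, mut):
    """Log-odds of the mutant vs. the wild-type residue at a masked
    position. Positive = the model prefers the mutation; strongly
    negative = the model predicts it is disruptive."""
    return logprobs[pos][mut] - logprobs[pos][wt]

# Hypothetical model output for one position of a protein.
toy_logprobs = {
    10: {"A": math.log(0.50), "V": math.log(0.30), "D": math.log(0.01)},
}

conservative = mutation_score(toy_logprobs, 10, "A", "V")  # mild swap
disruptive = mutation_score(toy_logprobs, 10, "A", "D")    # charge flip
print(conservative > disruptive)  # True
```

As the episode notes, this simple log-odds turns out to be a surprisingly good proxy for what the mutation actually does in the lab.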

The research structure was initially informal, with a small group pushing the project forward. Interestingly, Facebook's internal review system once highlighted the team's protein paper as a representative "tier one" research result. Josh found this both flattering and funny, as it wasn't core to Facebook's business but signaled a commitment to ambitious, long-term AI research.

After Facebook, Josh joined Absci, a biotech company. This move provided him access to a sophisticated wet lab, something Facebook lacked, allowing him to advance his research program at the intersection of AI and drug discovery. He worked there for about three years.

Later, he and Jack Dent decided to build a company together, focusing on a specific philosophy. Jack explains they were drawn to ventures with high technical risk but low market risk. Instead of commercializing existing technology, they wanted to create entirely new capabilities, akin to developing a GPT-4 or GPT-5 level model from scratch.

If you think about the two axes of any company which are the principal components of risk space, being market risk and technical risk, if you can play in the moderate to high technical risk, but low market risk segment, it's a really interesting place to be because you really get to focus on just building the best systems.

This path is harder than simply launching a product on day one, but they found the challenge of building something nobody had ever seen before more exciting.

Applying engineering rigor to the biotech industry

18:50 - 23:16

A former Stripe employee, who joined when the company had around 500 people and left when it had 8,000, explained his motivation for moving from finance into biotech. His time at Stripe taught him two things: good engineering can take you a very long way, and a small group of talented people can move incredibly fast. This experience led him to look for a field where a rigorous engineering approach was not yet prevalent.

He identified the biotech industry as a massive opportunity. It's one of the largest industries in the world, on the brink of a profound transformation, yet it hasn't attracted many software professionals. He saw it as a field calling for more people with an engineering background. Coupled with the potential to improve many people's lives, the choice seemed clear.

This reflects a broader conversation about where top talent is focused. In 2012, Jeff Hammerbacher, a data pioneer from Facebook, famously critiqued the tech industry's focus.

The best minds of my generation are thinking about how to make people click ads.

A similar argument could be made today, with many intelligent people working on chatbots and coding helpers. This raises the question of whether current AI technology can move beyond incremental improvements and achieve genuine scientific breakthroughs. Amid fluctuating sentiment about AI's capabilities, there is a desire to see the technology figure out something a human can't, moving beyond tools that are merely cool but not transformative.

The role of AI in designing synthetic antibodies

23:34 - 30:26

Each year, between 40 and 60 new medicines are approved, and roughly half of them are antibodies. Antibodies are molecules that your own body produces to neutralize threats. Decades ago, scientists had the idea to create synthetic antibodies that could be given to everyone, rather than relying on an individual's immune system to generate them.

These synthetic antibodies are designed to be highly selective, targeting specific things in the body like a cancer mutation. They stick to the target and recruit other immune cells to clean up the problem. A successful example is Keytruda, a synthetic antibody used for cancer treatment which signals the body to address a tumor. While antibodies are often more targeted than small molecule drugs, achieving selectivity is still a major challenge for both.

This is where AI offers a massive shortcut. The traditional process for discovering an antibody is a slow, brute-force search. One method is an immunization campaign, where an animal is injected with a target, and then its antibodies are extracted and screened. Another involves growing hundreds of millions, or even billions, of potential antibody molecules in a lab to search for the one "needle in a haystack" that works. This process can take months or even a year and doesn't always succeed, especially for difficult targets.

With machine learning, a target can be plugged into a model which then designs a potential molecule. According to Josh, 15-20% of the molecules designed this way now work as expected when tested in the lab, which is a dramatic improvement. The AI can achieve incredible specificity, even distinguishing between a cancer form of a protein and a healthy form based on very subtle differences.

The brute force approach to drug discovery

30:26 - 32:16

When developing a new drug, the process often begins with an idea to hit a specific biological target. In the modern era, a machine learning model can help translate that idea into a specific molecule to test in a lab. However, the pre-AI approach was much different.

Historically, the process started by identifying a marker, such as one on a cancer tumor, that an antibody should bind to. Scientists would then create that protein target and begin a massive, brute-force search for a molecule that would stick to it.

We'll go fish out among a billion possible random molecules, try to find one that sticks. You'll often find one that does stick. Maybe it sticks weakly. Maybe it also sticks to other things. We'll try to clean it up.

This initial search is not precise. Jack explains that this starting point is discovered through what is essentially a random process. Once a candidate molecule is found, protein engineers begin the iterative work of making mutations and subtle changes to optimize it and prepare it for clinical trials. However, getting to that engineering stage first requires a starting point, which has traditionally been found through sheer force.

The start of that process has been quite literally random brute force. As you get something which you think is maybe doing the right thing, then you might have people go in and start to optimize that and start to tweak the sequence of your antibody by hand.

AI models are accelerating drug discovery from years to days

32:16 - 39:55

Certain animals, like llamas and sharks, have unique immune systems that are useful in scientific research. They produce single-chain antibodies, which are simpler than the two-chain antibodies found in humans. Josh explains that while sharks also make these antibodies, they are much harder to care for than llamas, so the industry has largely shifted to using llamas and other camelids.

One of our board members, Mikael Dolsten, he used to be the chief scientist at Pfizer and in one of his earlier jobs he actually helped set up a shark colony because sharks also make those kind of antibodies. Sharks are harder to take care of than llamas. So I think a lot of the industry has moved to llamas.

This biological quirk was part of the traditional, slower process of drug discovery. Finding a molecule like Keytruda used to take months, or even a year in a pharmaceutical company setting. This involved discovering an initial molecule and then performing several rounds of optimization to make it more drug-like.

AI has dramatically accelerated this timeline. Now, a model can be prompted with a target, and it will generate a set of candidate molecules within a day or two. A significant portion of these molecules not only function as intended but are also drug-like right from the start, eliminating the long process of initial discovery and optimization.

Jack: We see that we can take that maybe three to six month to year long process, cut that down into something which takes 24 hours on the computer, maybe two days and then something which you can validate in the lab in a matter of weeks.

It's natural to be skeptical, especially when consumer AI like ChatGPT can often be unreliable. However, the success metric in biology is different. Josh notes that their models have about a 15% success rate, meaning if they generate 100 molecules, about 15 will bind as expected. While a 15% accuracy rate would be unacceptable for a chatbot, it is a revolutionary improvement in drug discovery, where traditional methods might have a success rate of a few in a billion.

This 15% rate is highly effective because testing a small batch of 20-50 molecules is cheap and easy. The value has already been created. The focus is now shifting from simply improving the success rate to tackling problems that were previously impossible, targeting diseases where the traditional success rate was effectively zero.
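The batch arithmetic behind that claim is easy to check. Assuming, as a simplification not stated in the episode, that each design binds independently with probability 0.15, the chance that a small batch contains at least one binder is already very high:

```python
def p_at_least_one_hit(hit_rate, batch_size):
    """Probability that a batch of designs contains at least one binder,
    under an independence assumption (each design succeeds with
    probability `hit_rate`, independently of the others)."""
    return 1 - (1 - hit_rate) ** batch_size

for n in (20, 50):
    print(n, round(p_at_least_one_hit(0.15, n), 3))
# A 20-design batch succeeds ~96% of the time; 50 designs, ~99.97%.
```

Compare that with a few-in-a-billion hit rate, where even a library of hundreds of millions of molecules can come up empty.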

The more interesting challenges now that that's been solved to some degree are pushing into things where the success rate today is effectively zero percent... doing something which nobody could do with the existing methods. And that's what I think is so exciting about this.

AI's superhuman ability to design novel medicines

39:55 - 45:51

Drug discovery involves searching through an astronomical number of molecular combinations. New AI models are transforming this process, reducing a multi-month or multi-year experimentation phase to just a couple of days. This technology provides a strong hint about where researchers should focus their time and resources.

Jack points out that this represents a form of superhuman intelligence. No human on earth can be given a target and then design a molecule sequence that will bind to it, especially one that is also "drug-like." This capability is a significant leap forward, comparable to breakthroughs like AlphaGo for the game Go or AlphaFold for protein folding.

These models can do things which a human just simply cannot do. It's a fundamentally different form of intelligence. If you gave me a pen and paper and asked me to work for years, I don't think I could design or fold a single protein to the same quality that these models can come up with.

A natural question arises: if these models are discovering novel medicines, why aren't they being rolled out to patients immediately? Josh explains that while the most obvious benefit is accelerating the existing process, the true passion lies in creating medicines that were previously impossible. This involves tackling challenging targets, such as finding molecules that selectively work on a mutant cancer cell versus a healthy cell. The models have already demonstrated success with a handful of these difficult examples, and the newly discovered antibodies are all novel, proving the system can generalize beyond its training data.

The reason these discoveries aren't immediately developed into therapies is a classic "exploit versus explore" dilemma. While they could pursue the molecules found today, the models are improving so rapidly that a version created a year from now might be ten times better. The likely path forward for the biopharma industry is a continuous cycle. Molecules will be launched to help patients today, and as the technology improves, better versions will follow. The models themselves don't seem to favor one disease area, like cancer or autoimmune disorders, over another. They work across the board, reinforcing the mission to turn biology from a science into an engineering discipline with versatile, fundamental tools.

AI models are creating a new era of precision medicine

45:51 - 55:02

Creating a medicine involves several steps: first, identifying the right biological target; second, designing a molecule to interact with that target; and third, running clinical trials. AI models have become exceptionally good at the middle step, which is designing a molecule for a known target. This new efficiency is expected to create a massive bottleneck in the coming years. There will be a backlog of promising new medicines, but the process of running clinical trials remains long and expensive.

This situation raises a societal question: do we accept the slow, expensive status quo of clinical trials, or do we find ways to accelerate it? The FDA is already considering changes, such as loosening requirements around animal testing, which could shave time off the process. The rules governing medicine are not static; they have evolved over the last century and will likely continue to change as technology advances.

The accuracy of these AI models is approaching, and in some cases surpassing, the resolution of traditional lab experiments. A model can analyze billions of data points to get a complete picture, while an individual lab experiment on ten mice might have statistical noise. This precision could lead to a future where therapies are tested in highly accurate simulations.

A significant potential benefit is the reduction of side effects. Most drugs have unintended off-target effects. The average drug has about five, while aspirin has between 20 and 25 known ones. AI allows for the design of molecules with atomic-level understanding, which could dramatically reduce these unwanted interactions. This leap in capability prompts a new perspective on our current methods.

We all believe that when we look back on the last 50, 100 years of medicine, that 100 years from now we'll look back on this as almost the Bronze Age of just like brute force of how we deliver medicines to humans.

These models are trained on vast amounts of public and proprietary biological data, including DNA and protein sequences, 3D structures, and protein interaction measurements. Unlike some AI fields that may have exhausted available training data, the world of biological data is still just beginning to be explored.

The model designs its own experiments to accelerate learning

55:02 - 59:43

The models are not just passive learners; they actively help create the data they need to improve. This creates a feedback loop where the model designs experiments for the wet lab, and the results of these protein tests are then fed back into the very same model. This process is similar to how user conversations help fine-tune LLMs.

That's the model actually designing the experiments and then getting data that's fed back into that same model. Which is like a wild idea, right?
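The loop described above amounts to model-guided batch selection with assay results fed back as labels. Here is a minimal sketch under stated assumptions: `score` and `assay` are hypothetical stand-ins for the model's ranking and the wet-lab measurement, and the retraining step (updating `score` on the labelled data) is elided for brevity:

```python
def active_learning_loop(score, assay, pool, rounds, batch):
    """Each round: the model picks the most promising designs, the lab
    measures them, and the (design, result) pairs become new training
    data. Retraining between rounds is omitted in this toy version."""
    labelled = []
    remaining = list(pool)
    for _ in range(rounds):
        remaining.sort(key=score, reverse=True)          # model designs the experiment
        chosen, remaining = remaining[:batch], remaining[batch:]
        labelled.extend((x, assay(x)) for x in chosen)   # lab results fed back
    return labelled

# Toy example: designs are numbers, the assay checks whether a design >= 5.
picked = active_learning_loop(score=lambda x: x, assay=lambda x: x >= 5,
                              pool=[3, 9, 1, 7, 5], rounds=2, batch=2)
print(picked)  # [(9, True), (7, True), (5, True), (3, False)]
```

The real system closes the loop by retraining on `labelled` each round, which is what makes the feedback self-improving.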

This progress has been surprisingly fast. The team expected it would take many more years to achieve these kinds of results. Models that were just research curiosities a few months ago have now become incredibly valuable. The molecules they are developing not only look like drugs but can also address difficult targets.

It feels like it's one of these things where nothing was working and then suddenly everything just started working in one fell swoop.

When asked about competition, the key differentiator for the small team of about 20 people is a relentless focus on solving a specific problem. This required perseverance, as early attempts at zero-shot antibody design failed. However, the team's dedication paid off, with progress happening much faster than anticipated.

When you're ready to spend two years on a problem and you get the answer in three months, that's very energizing.

Running these models is expensive and requires significant capital. However, the potential value that can be unlocked makes the investment completely worth it.

The surprising speed of progress in AI drug discovery

59:43 - 1:07:57

The pace of progress in AI-driven drug discovery is so fast that statements made just a few months ago can become outdated. Jack notes that recently, his company's model, Chai-2, has started producing antibodies with many desirable "drug-like properties", a significant leap forward. Initially, the team thought this was the next major challenge to solve.

It was crazy to see how fast that was solved. Turns out that that model was already capable of doing that and we just never tested. We're like, 'whoa, like they actually have the drug-like properties already.' We're even further ahead than we even thought we were.

Josh explains that they were surprised to find the model already had this capability once they started baselining its outputs. These drug-like properties are crucial for developing a viable therapy. For example, a good antibody shouldn't aggregate or bind to itself, which can increase viscosity. High viscosity might require a drug to be infused rather than administered through a simple subcutaneous injection, which is much more patient-friendly.

When asked why every biotech company isn't pursuing this, Josh points to the scarcity of talent. He suggests there are very few people globally with the skills to build these frontier models.

I think there are more than 20 people capable of building it. But I think also it's just a matter of when it comes to the quality of the level of rigor around actually building these models... Progress is not leveling out here. In fact, this is really just the very beginning for this field. We're two years in and I think the competition in this space is going to be won and lost over decades, not in the next six months.

Many large pharmaceutical companies recognize this challenge. They've often made a strategic decision not to build these models in-house due to the immense expense of acquiring both the talent and the necessary computing power, with no guarantee of producing a cutting-edge result. Jack adds that success is about more than just the model. A huge amount of infrastructure is required for lab validation and software integration to turn a model's output into a real-world application.

The model is definitely the exciting thing to be talking about on the podcast, but there's a ton of other harness and infrastructure that goes into like really making this thing a reality.

AI models are now making genuine scientific discoveries

1:07:57 - 1:14:43

AI models are now capable of making genuine scientific discoveries, creating compounds that a human would likely never stumble upon. Quantitatively, the molecules generated by these models are vastly different from any existing ones. This is similar to how AlphaGo started making moves in Go that no human had ever played before; the AI is exploring paths a human probably would not.

What is the model thinking is going on here? What does the 3D structure look like of how the potential drug is interacting with the target? And it does some really creative things.

Despite some market concerns about a slowdown in AI, the on-the-ground reality is one of insane and endless demand. Companies are struggling to acquire the latest GPUs due to massive investment. The progress in this domain over the past two years is described as almost unimaginable, promising immense economic and human value. Domain-specific models, in particular, are doing things that are fundamentally new.

This progress comes after years of skepticism. The host notes that he has heard promises about AI drug discovery for nearly a decade, leading him to feel it was mostly hype. This sentiment is acknowledged by the guest, who agrees that the field has been perceived as smoke and mirrors for a long time.

AI drug discovery as a field has been around for a long time... and it's been smoke and mirrors for a long time. But this has really been the year when things have started to work.

The fundamental leap is the ability to generate entirely new things without a starting point. Previous work largely focused on redesigning or optimizing existing drugs. Now, the technology is at the beginning of an S-curve, with a significant impact on real medicines and patients just starting to kick off.

How AI makes previously undruggable targets accessible

1:14:44 - 1:17:48

Some medical targets are considered "undruggable" because they are very difficult to target with traditional methods. Josh explains that a key example is a category of proteins called GPCRs, which are embedded in a cell's membrane. Their structure makes them hard to work with. If you take a GPCR out of the cell membrane to study it, it no longer looks the same, which complicates the process of finding a drug that can bind to it.

AI offers a new approach. Instead of traditional screening, AI can generate a small number of molecular designs that can then be tested directly on the cells. This allows for rapid verification. This process could eventually be done with a patient's own cells, especially if they have mutations or specific signals that need activation. This transforms a difficult discovery problem into a more manageable engineering problem, where solutions can be designed to solve a specific issue.

When it comes to a realistic timeline for therapies developed this way, Jack provides a timeframe for the first AI-designed therapies to reach patients.

I think we'll see the first de novo design therapies from our models at least going into patients, I'd say certainly less than three years, probably more than nine months.

Surprisingly, the biggest bottleneck in this process isn't the design or even the initial animal testing. The slowest part is manufacturing the molecule in large enough quantities for testing. This process is complex and highly regulated, resembling brewing beer but in massive, sterile vats. However, there are positive signs. Recent reports show that the yield of these manufactured antibodies is looking good, and there's potential for even further improvement.

How AI is changing the business of drug discovery

1:17:48 - 1:21:46

The primary focus is on improving AI models for drug discovery because of the massive impact they will have. The business strategy is still being determined. Traditionally, biotech companies either develop their own blockbuster drugs or act as a platform, sharing revenue or royalties through partnerships. Even companies developing their own drugs often partner with pharmaceutical companies. This is because the cost of drug development is so high that these partnerships provide a source of non-dilutive capital.

However, this traditional model is starting to change. AI technology is expected to upend conventional deal structures. As designing a molecule becomes easier, the need for massive partnerships may decrease. This shift raises new questions about the nature of the business itself. When the models become so effective that they can generate a molecule ready for a patient, the line between selling software and selling molecules becomes very blurry.

A key capability of these new models is their ability to recognize what they do and do not know, similar to how ChatGPT might say it doesn't have an answer. This is a profound development for drug discovery. The model can essentially predict which lab experiments will succeed and which will fail. This allows researchers to concentrate their efforts on the opportunities that are most likely to work and matter in the end.

The model says, 'I know I can make a molecule that binds this part of the target. I'm not really sure about this other side of the target.' And that's actually a very profound capability because, again, it goes back to this point about simulating lab experiments. It's essentially the model telling you, 'I think that lab experiment will work and that one won't.'
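In practice, that self-assessment becomes a triage step: rank designs by the model's confidence and send only the credible ones to the lab. This is a toy sketch; the design names, scores, and threshold are all hypothetical, standing in for the calibrated confidence the episode describes:

```python
def triage(candidates, threshold=0.7):
    """Return the IDs of designs worth assaying, ranked by the model's
    own confidence score. Designs below the threshold are skipped,
    saving lab time for the experiments most likely to work."""
    ranked = sorted(candidates, key=lambda c: c["confidence"], reverse=True)
    return [c["id"] for c in ranked if c["confidence"] >= threshold]

designs = [
    {"id": "ab-1", "confidence": 0.91},
    {"id": "ab-2", "confidence": 0.40},  # model says: probably won't bind
    {"id": "ab-3", "confidence": 0.77},
]
print(triage(designs))  # ['ab-1', 'ab-3']
```

The value depends entirely on the confidence being calibrated, which is exactly the capability the founders highlight.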

AI is creating a new platform shift in biotech

1:21:47 - 1:28:33

When asked about the most promising drug molecule they are working on, the answer changes daily. The models for drug discovery are constantly improving, allowing for more ambitious targets. Jack notes that while creating one drug that saves one life would be worth it, the opportunity is much larger than any single molecule.

If I told you that our life's work was creating a machine learning model that created one drug that saved one person's life, it probably would have been worth it. But I think the opportunity is going to be even much larger than that.

Even with a perfect drug-making machine, the process isn't as simple as turning it over to civilization in an open-source way. Josh explains that the model is only one component. The majority of the cost in drug development is in clinical trials and manufacturing. This infrastructure is resource-intensive and requires funding. Similar to pharmaceutical companies, there needs to be an acceptable profit margin to fund the pipeline for new medicines, as public funding alone would likely not be sufficient.

This technology arrives as the current biotech model feels like it's teetering, with the U.S. bearing a disproportionate R&D cost and blockbuster drugs becoming harder to find. Jack sees their work not as a small improvement but as a major platform shift, comparable to the web, mobile, or AI itself. This marks the beginning of a new S-curve for the industry.

When the technology stagnates, you see this with everybody's no longer hot on enterprise SaaS because the time to do one of those was 15 years ago... it's sort of the same in biotech as well.

This shift can lead to entirely new sets of molecules and modalities, capable of targeting diseases that were previously untreatable. Despite a recent downturn for the biotech industry, AI is a shining beacon. Jack believes this is the path to building fundamentally better medicines, a change that will play out not over years, but over decades and centuries.