Antibody therapeutics have transformed modern medicine, but for many scientists, developing new candidates still feels like searching for a needle in a haystack—a slow, expensive, and unpredictable process. Structural biology and high-throughput data generation are now collapsing that haystack, offering unprecedented visibility into the molecular handshake that drives life: protein-protein interactions.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Troy Lionberger, Chief Business Officer at A-Alpha Bio, a biotechnology company harnessing synthetic biology and machine learning to measure, predict, and engineer protein-protein interactions.
As a biologist, I would tell you that ultimately life doesn't exist if proteins aren't on some level interacting with other proteins. So whether it's catalyzing force in your muscles or replicating DNA, proteins have to interact with other proteins to carry out all of the cell functions that are necessary for life. If there's ever cell dysfunction, it's oftentimes in some way, shape, or form tied back to some sort of protein–protein interaction that's both the origin of many disease states, but also the opportunity for therapeutic intervention.
David Brühlmann [00:00:35]:
Protein–protein interactions govern almost every biological process and hold the key to treating cancer, infectious diseases, and neurological disorders. Yet, with only 10,000 antibody–antigen structures in public databases, we're building tomorrow’s medicines on yesterday's data. Today, Troy Lionberger, Chief Business Officer at A-Alpha Bio, reveals how measuring millions of interactions simultaneously changes everything. By generating unprecedented quantities of high-quality data, they are accelerating the discovery of rare antibodies, engineering better protein therapeutics, and training AI models that predict what works before you ever touch the lab bench.
Let's explore how. Welcome, Troy, to the Smart Biotech Scientist. It’s good to have you on today.
Troy Lionberger [00:02:42]:
Thanks, David. It's a pleasure to be here.
David Brühlmann [00:02:43]:
Troy, share something that you believe about biotherapeutic development that most people disagree with.
Troy Lionberger [00:02:51]:
It's an interesting question. I think the most controversial view I harbor right now is—given my background—there is an overwhelming historical acceptance that antibody therapeutic development is artisanal and bespoke, that you're really hunting for needles in a haystack, if you will, as is often the common analogy.
I think the controversial statement I would make is that it's far more systematic today than I ever imagined. For example, most people are surprised when I tell them that there are tractable and reproducible ways to make antibodies that have the same affinity for their human therapeutic target as for their animal targets. I mean, not just cross-reactive, but the same quantitative affinity, which could help streamline preclinical development of therapeutic antibodies, for example. Most people I've spoken with seem to think that is a flight of fancy. Fundamentally, there are processes that make this happen today. That is the most surprising thing I share with people on a day-to-day basis.
David Brühlmann [00:03:52]:
And this will open avenues to novel therapeutics and also more efficacious drugs.
Troy Lionberger [00:03:59]:
That's right. I mean, preclinical development of antibodies is fundamentally constrained by how hard these therapeutic molecules are to develop. In large part, getting them to work with the animal models required in your studies is problematic. Oftentimes, the affinities of your molecules for those animal targets are far worse than for your human targets. So while you may have a drug that works quite well in humans, you can't get it to the clinic because the animal studies might show toxicity issues, simply because you had to administer so much drug to compensate.
David Brühlmann [00:04:33]:
Before we dive deeper into today's topic, take us back to the beginning. What first sparked your interest in biotech, and how did that journey lead you to A-Alpha Bio?
Troy Lionberger [00:04:43]:
The origin for me was really in college, when a faculty member teaching structural biology started describing proteins as nanomachines. That visual has always stayed with me. It got me interested in science and wanting to understand how these fascinating machines, which operate with very different materials and properties than anything we’ve created as human beings, function and work.
That naturally led to understanding that these proteins, which we barely understand, are ultimately at the root of all human disease—leading to cell dysfunction, which we describe as disease states. It’s ultimately the understanding of these basic building blocks of life that drives biotechnology: figuring out how these proteins can be manipulated and controlled to elicit therapeutic effects.
To answer your question about how I ended up at A-Alpha Bio, my career in biotechnology started at a life science tools company called Berkeley Lights, where I helped invent an exciting technology to discover therapeutic antibodies. That experience naturally led to working with many teams in the industry to support their discovery efforts, and I became increasingly aware of the next major constraint: the preclinical development of those drugs. That is, in large part, the problem we are trying to solve at A-Alpha Bio right now.
David Brühlmann [00:06:05]:
When I looked at your website, what struck me is that your company, A-Alpha Bio, describes itself as a protein–protein interaction company. Why are these interactions so fundamental to drug discovery, and why are they also so difficult to characterize at scale?
Troy Lionberger [00:06:22]:
It's a great question. My background is in biology. As a biologist, I would tell you that ultimately life doesn't exist if proteins aren't on some level interacting with other proteins. So whether it's catalyzing force in your muscles or replicating DNA, proteins have to interact with other proteins to carry out all of the cell functions necessary for life. If there's ever cell dysfunction, it's oftentimes in some way, shape, or form tied back to some sort of protein–protein interaction.
That's both the origin of many disease states, but also the opportunity for therapeutic development.
Being able to characterize these protein–protein interactions—there are many technologies that have come forth to help do this. Surface plasmon resonance (SPR) is an industry-standard way to study protein–protein interactions, and we call this affinity. Understanding the strength of those interactions, or the affinity of those interactions, is ultimately how biophysical characterization describes these protein–protein interactions.
The problem historically is that, despite how advanced these technologies are, they are also quite costly and difficult to use. And when I say difficult, I don’t mean it’s impossible—people do this every day in labs all around the world. It’s just that if your goal is to make millions and millions of those measurements, it’s not a scalable technology.
A great example: one of our experiments generates a million affinity measurements at a time and takes a few weeks. Generating the equivalent amount of affinity data with SPR at a CRO would cost you between $1 million and $500 million. That math illustrates the fundamental constraint in the industry. Despite increasing awareness that this volume of data is transformative for understanding biology, no one is going to pay $100 million for a weeks-long experiment.
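The arithmetic behind that constraint is easy to check. Here is a quick back-of-the-envelope sketch; the figures are the rough orders of magnitude quoted above, and the implied per-measurement range is derived from them, not from any published price list:

```python
# Back-of-the-envelope: what one AlphaSeq-scale experiment would cost as
# one-off SPR measurements at a CRO. All figures are rough orders of
# magnitude from the conversation, not quoted prices.

measurements_per_experiment = 1_000_000            # "a million measurements at a time"
cro_cost_low, cro_cost_high = 1e6, 500e6           # equivalent CRO cost range (USD)

per_measurement_low = cro_cost_low / measurements_per_experiment
per_measurement_high = cro_cost_high / measurements_per_experiment

print(f"Implied CRO cost per affinity measurement: "
      f"${per_measurement_low:.0f} to ${per_measurement_high:.0f}")
# i.e. roughly $1 to $500 per data point, versus one pooled, weeks-long experiment
```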
So the constraint we’re trying to solve is making these data—otherwise far too expensive and too hard to generate—easier, more affordable, and more economical.
David Brühlmann [00:08:40]:
That's exciting, and that's definitely the way to go: being able to screen a lot more and find those proverbial needles in the haystack on a much more modest budget. If we just look at the general picture, because drug discovery has been done for decades: how do companies do this traditionally? What are the traditional workflows and methods? Let's start with the basics.
Troy Lionberger [00:09:06]:
The branch of therapeutic discovery that I come from is called in vivo discovery. In in vivo discovery, you are typically relying on an animal model whose competent immune system is ultimately responding to the presence of an antigen that’s presented, raising an immune response against that antigen. On the discovery side, scientists will access those antibody-producing cells, identify the ones producing an antibody very specific to your disease target, and then sequence those antibodies to move forward in developing them into a drug.
There’s a complementary approach called in vitro discovery, where you use what are literally called panning methods. You can imagine gold miners panning for gold, which gives you an appreciation for the basic philosophy behind current discovery: needles in a haystack, mining for gold. In phage panning, you use bacteriophage to express a version of your therapeutic and access very large diversities—many different combinations of molecules. You expose these to your therapeutic targets, find those that bind, sequence them, and move them forward in the development process.
David Brühlmann [00:10:20]:
I imagine there are a lot of advantages to this traditional approach. Can you highlight what those advantages are, and also what the limitations are?
Troy Lionberger [00:10:27]:
The advantages of the in vivo approaches using animals are that you're taking advantage of really one of the world's most sophisticated ways of generating diverse sequences of antibodies, which is a competent immune system. To date, while there is promise in AI, AI has not been able to generate the diversity of functional antibodies that a competent immune system can. That is the promise for the future of in silico methods. But to date, hands down, one of the finest ways of generating a diverse antibody response is using an immune system.
The advantage, then, is diversity. The disadvantage is that, in many cases, you're not able to get human antibodies, because it would be unethical to immunize human beings for the purpose of generating therapeutic antibodies. So we're limited to animal models that produce antibodies that then have to be further engineered to be compatible with human biology. Humanized animal models have been developed to solve that problem, but they are expensive and not commonplace. That is the challenge there.
On the in vitro side, using phage panning, it's much faster. The downside is there are often biophysical characterization issues with those molecules. For example, we’re phage panning at room temperature, but antibodies have to survive body temperature. If they melt or denature at body temperature, that's a problem. So there are other liabilities with the in vitro technologies.
David Brühlmann [00:11:58]:
With the new technologies advancing very rapidly, what is the picture you're seeing? Are we going to have a side-by-side approach, or eventually will AI, machine learning, and so on take over?
Troy Lionberger [00:12:11]:
It's definitely top of mind for me personally, and I should be upfront and say this is the first time I've worked on the machine learning and AI side of the industry. I'm definitely new to the game. So with that caveat, I’ll just say I’ve mentioned in vivo, in vitro, and now, as you mentioned, in silico approaches, which are now complementing the first two antibody discovery approaches.
De novo antibody design is the name of the field that is essentially trying to predict sequences of antibodies that will bind efficaciously to a therapeutic target. Right now, I see all of these as complementing one another. As I said, there are advantages to in vivo and in vitro technologies. In silico approaches often take the output of those approaches as inputs to their models. They’re absolutely interlinked today.
I think the promise of in silico methods is to eventually amass enough data that you can generalize these models so that you don't actually need a wet bench. But I would argue the constraint remains: even if you didn't need data to train your models, you would still need data to validate your outputs. There's no escaping the data cycle in this space.
There's a lot of talk about AI models taking the lead. I think there are advantages to de novo design in terms of epitope accessibility and creating next-generation molecules. But as it stands today, I would describe them as complementary.
David Brühlmann [00:13:43]:
And I imagine that with AI and machine learning, you’ll be able to accelerate the workflows. It's not one or the other, as I’m hearing, but it’s definitely making certain steps of the process faster and more efficient.
Troy Lionberger [00:13:58]:
That's absolutely right. And I’ll give you a great example—an area that A-Alpha Bio is heavily involved in right now: optimizing antibodies to be lead candidate molecules for preclinical teams. Typically, after the discovery of an antibody, you want to optimize that candidate. That could involve making it more human so that it doesn’t interact negatively with the human immune system when injected into patients. It could involve improving developability, like reducing the propensity for aggregation or increasing how much you can produce from cells at scale. But you’re also optimizing affinities.
Historically, this has been a very complicated process. You’d focus on driving affinities to where you want them, then worry about secondary characteristics of molecules that may affect manufacturability downstream. After making those changes, you’d have to go back to ensure your affinity hasn’t veered off course. It’s a slow, recursive, iterative process that’s expensive and time-consuming at each phase, often ping-ponging back and forth through various parts of the value chain. This could take over a year of hard work and significant investment.
What we’re doing now, leveraging disruptive data generation to inform machine learning models in a bespoke way, is transformative. We can generate datasets that train models to predict higher-affinity antibodies while simultaneously optimizing other characteristics. For example, we can insert up to 21 mutations into an antibody and still achieve greater than 50% accuracy in maintaining higher affinity than the parental antibody. That shows the flexibility now possible—where previously you could only add two or three mutations to achieve desired characteristics.
We can now predict sequences that are more human, more developable, higher expressing, higher affinity, and more stable—all at the same time—in a three-month process. This condenses what used to take potentially years of effort into just a few months.
David Brühlmann [00:16:25]:
So traditionally, drug discovery would take years—like two years, three years? What is the benchmark?
Troy Lionberger [00:16:33]:
It really depends. Having been in this space, it depends on the approaches taken and the complexity of the target. But in terms of lead optimization, the part of the process I’m referring to, that could traditionally take one to three years, which we are now compressing into a three-month process.
David Brühlmann [00:16:54]:
Wow, that’s massive. Yeah, that’s a big change.
Troy Lionberger [00:16:56]:
Yeah. And after that three-month process, we’ve also done things that historically couldn’t be done. For example, we can drive the affinities of preclinical animal model antibodies to match human target affinities. That’s where it gets really exciting—asking how this changes the dynamics of the overall ecosystem if everyone’s projects could be accelerated through preclinical development. These molecules are essentially tailor-made for the experiments planned in preclinical studies.
David Brühlmann [00:17:30]:
At the end of the day, we need to find effective antibodies—the purpose of the whole drug discovery process is to find the most efficacious antibody. Now what I’m hearing is that we have technologies to accelerate the workflow. My question is: do these technologies also enable finding that very effective antibody, or do we need additional technologies on top of that?
Troy Lionberger [00:17:59]:
No, I would argue that the problem I just described is not just about making a faster, more efficient process. We are also hitting the target product profiles required for these therapeutics. There’s no sacrificing quality by accelerating the timeline, which is, I think, a rare example. In this three-month process, you’re actually starting to guarantee results—something I never thought would be possible with a service provider.
David Brühlmann [00:18:28]:
Can you lead us into how you’re achieving that? What is the technology used, and what are the major steps?
Troy Lionberger [00:18:34]:
At a high level, here's how it works: we generate tens of thousands of mutants of a parental antibody by inserting random point mutations, making mutants with one, two, or three mutations at a time. We then measure the affinity of those tens of thousands of antibodies against a panel of different targets: not just the human target, but perhaps mouse and cynomolgus targets, as well as point mutants of the target or comparable family members. This helps catch nonspecific binding early.
We gather data not only on binding affinity and cross-reactivity, but also specificity from each of those tens of thousands of molecules. We then use all that data to fine-tune AlphaBind, our computational platform. The AlphaBind model has been trained on close to a billion different affinity measurements generated by the company. Fine-tuning the model with data from a specific parental antibody trains it to predict mutations that can be introduced into the original molecule.
The machine learning picks up on synergistic and compensatory mutations that might not be obvious to the human eye but are clear in the data. As a result, we can be greater than 90% confident in generating antibodies with higher affinity, even with up to 15 point mutations.
In parallel, we use off-the-shelf developability models to downselect molecules. While optimizing affinity, we simultaneously evaluate expression, solubility, and melting temperature to ensure the molecules are manufacturable.
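The loop Troy describes, mutate, measure, fine-tune, predict, downselect, can be sketched in miniature. Everything below is a toy stand-in: the scoring function, the developability filter, the thresholds, and the sequence are invented for illustration, since AlphaBind itself is not public code. Only the shape of the workflow follows the description above.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq: str, n_mut: int, rng: random.Random) -> str:
    """Introduce n_mut random point mutations into a copy of seq."""
    chars = list(seq)
    for pos in rng.sample(range(len(chars)), n_mut):
        chars[pos] = rng.choice(AMINO_ACIDS.replace(chars[pos], ""))
    return "".join(chars)

def toy_affinity_model(seq: str) -> float:
    """Stand-in for a fine-tuned affinity predictor (higher = predicted tighter).
    The real model is trained and fine-tuned on measured affinity data."""
    return (sum(ord(c) for c in seq) % 97) / 97.0

def toy_developability_ok(seq: str) -> bool:
    """Stand-in developability filter (e.g. an aggregation-propensity proxy)."""
    return seq.count("W") <= 2

def optimize(parental, n_variants=10_000, top_k=10, seed=0):
    rng = random.Random(seed)
    # 1. Build a mutant library: 1-3 random point mutations per variant.
    library = {mutate(parental, rng.randint(1, 3), rng) for _ in range(n_variants)}
    # 2. Score every variant with the model; 3. drop non-developable ones.
    scored = [(toy_affinity_model(s), s) for s in library if toy_developability_ok(s)]
    # 4. Downselect the top-ranked candidates for wet-lab validation.
    return sorted(scored, reverse=True)[:top_k]

parental = "QVQLVQSGAEVKKPGASVKVSCKAS"  # made-up heavy-chain fragment
for score, seq in optimize(parental, n_variants=1_000):
    print(f"{score:.3f}  {seq}")
```

In the real process, step 2 is the wet-lab measurement that generates the fine-tuning data, and steps 3 and 4 run against the fine-tuned model rather than a fixed scoring function.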
David Brühlmann [00:20:33]:
So you generate millions of affinity measurements in a week. Are you using yeast display and genomics to do that, or what’s the trick behind it?
Troy Lionberger [00:20:43]:
Yeah, let me go a bit under the hood. Our wet-lab technology is called AlphaSeq. It’s a yeast display system that takes advantage of yeast mating pathways in nature. In yeast genetics, the A and α (alpha) strains have mating receptors that engage to form a diploid cell containing the genetics of both parent cells.
What our co-founder and CEO, David Younger, proved while in David Baker’s lab is that there’s nothing extremely specific about these mating receptors. You can attach molecules of interest to the outside of these cells, and if those molecules have measurable affinity, they will increase the rate of diploid formation—or yeast mating.
Here’s how it works in practice: you have a culture with a library of A strains expressing an antibody library and α strains expressing targets of interest. Under optimized culture conditions, mating occurs. After some time, you sequence everything. The number of times you see a specific gene pair correlates with binding affinity.
The diploid cells contain the genetics of both "mom" and "dad," so each readout provides a genomic barcode corresponding to a specific antibody–antigen pair. You can quantitatively relate that to affinity in units of molarity, with a correlation of about 0.85 against standard measurements like SPR (surface plasmon resonance) or BLI (biolayer interferometry).
The trick here is extracting a biophysical measurement from a genomic readout. Next-generation sequencing provides the scalable data we need, and we can reliably relate it to binding affinities for millions of molecules simultaneously.
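The core trick, turning a genomic count into a biophysical number, can be illustrated with a toy calculation. The barcode pairs, read counts, and one-point calibration below are fabricated for illustration; the real platform calibrates against control pairs of known affinity.

```python
from collections import Counter

# Simulated NGS readout: each read is one diploid yeast cell, i.e. one
# (antibody, antigen) barcode pair inherited from its "mom" and "dad".
reads = (
    [("Ab1", "AgA")] * 5000    # frequent mating -> tight binder
    + [("Ab2", "AgA")] * 500
    + [("Ab3", "AgA")] * 50    # rare mating -> weak binder
)

pair_counts = Counter(reads)

def counts_to_kd_nM(count, ref_count=5000, ref_kd_nM=1.0):
    """Toy one-point calibration anchored to a control pair of known KD:
    assume diploid count scales inversely with KD. A real calibration uses
    many control pairs spanning the dynamic range."""
    return ref_kd_nM * ref_count / count

for (ab, ag), n in pair_counts.most_common():
    print(f"{ab}-{ag}: {n:>5} diploids -> ~{counts_to_kd_nM(n):g} nM")
# under this toy calibration: ~1 nM, ~10 nM, and ~100 nM for the three pairs
```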
David Brühlmann [00:23:18]:
You have a good correlation. I’m curious—what is the affinity range you’re able to measure?
Troy Lionberger [00:23:25]:
Great question. The affinity range we usually see—though we can tune our culture conditions to adjust sensitivity—is typically from hundreds of picomolar up to tens of micromolar. So it’s a very large dynamic range that covers the therapeutic range most people are interested in.
David Brühlmann [00:23:53]:
That wraps up part one of our conversation with Troy Lionberger on the data revolution in antibody discovery. We’ve explored the limitations of traditional methods and how A-Alpha Bio’s AlphaSeq platform is changing the game.
In part two, we’ll discover how machine learning transforms this massive dataset into predictive power. If you’re finding value in this episode, please leave us a review on Apple Podcasts or your favorite platform—it helps other scientists like you discover the show.
All right, smart scientists, that’s all for today on the Smart Biotech Scientist Podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your preferred platform. By doing so, we can empower more scientists like you.
For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Troy Lionberger
Troy Lionberger serves as Chief Business Officer at A-Alpha Bio. He started his career as a research scientist after earning his PhD from the University of Michigan and completing postdoctoral training at UC Berkeley. During nearly a decade at Berkeley Lights, Troy held senior leadership positions spanning R&D, product strategy, and business development.
Before joining A-Alpha Bio, he was Chief Business Officer at Abbratech, where he guided the company from stealth into a partner-focused antibody discovery biotech. At A-Alpha Bio, Troy applies his strong technical background and experience scaling platform companies through strategic partnerships to help position the company as a key enabler in the industry.
Connect with Troy Lionberger on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
For generations, silkworm pupae were simply a byproduct of silk spinning. Today, the biotech spotlight is shifting to their dormant power: transforming these “waste” organisms into natural protein factories. Turns out, when silkworm pupae are harnessed as living bioreactors, they can produce complex recombinant proteins and vaccine antigens at a scale—and cost—that makes mammalian cell culture systems look cumbersome.
In this episode of the Smart Biotech Scientist Podcast, host David Brühlmann speaks with Masafumi Osawa, a global strategy leader at KAICO with an unconventional path into biotechnology. Originally trained in cultural anthropology, Masafumi’s career was shaped by hands-on experience in pharmaceutical business development across diverse markets.
Silkworms were once used purely for producing silk, and their pupae were often considered waste. Today, those same silkworm pupae have the potential to address major global health challenges and offer new modalities for vaccines and therapeutic proteins. If any researchers listening today are struggling with difficult protein expression—whether it's a VLP, membrane protein, or complex antigen—I would be very happy to explore how we can support your R&D.
David Brühlmann [00:00:36]:
Welcome back to Part Two with Masafumi Osawa from KAICO. In Part One, we explored how silkworm pupae function as natural bioreactors expressing complex proteins using a baculovirus expression system. Now we’re moving from platform science to product reality. KAICO isn’t just offering contract services—they’re developing injectable and oral vaccines for both human and animal health. We’ll examine their development pipeline, discuss the unique regulatory considerations when your bioreactor is alive, and explore where silkworm-based manufacturing fits into the future of biologics production. Let’s continue our conversation.
Let's shift gears, Masa, and focus on your product pipeline. What kinds of molecules are you developing, and how far along are they? I'm curious to see what you're working on right now.
Masafumi Osawa [00:02:50]:
Our pipeline is designed around a stepwise regulatory strategy, starting with segments that allow for rapid entry and progressing toward human applications after livestock validation. Our most advanced program is the PCV2 oral immunization product for pigs, registered in Vietnam as a functional feed additive. In real farm environments, it has shown performance comparable to injectable vaccines while substantially reducing labor and stress costs. This provides strong validation of oral antigen delivery using silkworm-derived proteins in livestock.
We are also developing oral vaccines for cats, including FPV (feline panleukopenia virus), FCV (feline calicivirus), and FHV-1 (feline herpesvirus type 1), and producing purified CPV (canine parvovirus) antigens for dogs. Our aim is to reduce vaccination stress in animals and offer alternatives to in-clinic injections, with implications for human health as well.
Our recombinant injectable human norovirus vaccine is preparing for Phase I clinical trials next summer under Japan’s AMED SCARDA program. This marks a major step toward establishing insect-based platforms in human pharmaceuticals. Beyond internal programs, we collaborate with partners to express complex proteins and antibodies, leveraging the unique capabilities of the silkworm system. Overall, our pipeline reflects a long-term progression from livestock to companion animals to human injectables and eventually oral medical vaccines for humans.
David Brühlmann [00:04:39]:
Oral vaccines are an exciting delivery approach because they reduce distress during administration. Are there specific antigen characteristics that work better—or worse—for oral vaccine applications, and where are the limits?
Masafumi Osawa [00:04:57]:
Silkworm-derived antigens offer significant advantages for both injectable and oral vaccines, although the reasons differ by modality. For injectable vaccines, the key strength lies in the ability to express antigens that are difficult or sometimes impossible to produce in other systems. This includes complex structural proteins and virus-like particles (VLPs), which often do not require mammalian-type N-linked glycosylation and therefore assemble particularly well in insect-based platforms.
The silkworm pupal environment provides a dense, multicellular physiological setting that naturally supports proper folding, multimerization, and high-yield expression. This is why we can obtain enough purified antigen from a single pupa to immunize several hundred pigs. When manufacturing injectable swine vaccines, this level of efficiency is extremely difficult to match with conventional cell culture systems.
For oral vaccines, the advantages are even more distinct. Previous plant-based oral vaccine approaches—such as rice or other edible crops—have struggled due to low expression levels or sharply increasing production costs at industrial scale. As a result, despite scientific interest, few plant-derived oral vaccines have reached commercial feasibility.
Silkworm pupae, however, function as naturally concentrated bioreactors, delivering expression levels far higher than those typically achieved in plants. At the same time, silkworms can be mass-produced at low cost, making them well suited for oral vaccine applications where dosage volumes are much larger than injectables. In our PCV2 oral program, for example, we formulate approximately 1.5 g per pig, accounting for variation in feed intake and ensuring sufficient mucosal exposure. By contrast, injectable formulations—with higher purity and potency—allow a single pupa to cover hundreds of animals.
Additionally, silkworm-based production avoids large bioreactors, extensive culture media, sterile water systems, and intensive cleaning operations. These factors significantly reduce manufacturing costs and environmental burden, giving our platform a strong economic advantage—particularly in vaccine applications where global accessibility and scalability are critical.
David Brühlmann [00:07:49]:
During the COVID pandemic, we heard over and over how important it is to be able to develop vaccines very quickly. I’m curious—using the silkworm platform, how fast can you develop a new vaccine? Is this comparable to, for instance, mRNA, or how does it differ?
Masafumi Osawa [00:08:08]:
As long as the DNA sequence is available, we can produce any kind of recombinant protein. That’s the first point. One example is that during COVID we also produced SARS-CoV-2 recombinant spike protein. It took just three months after the outbreak of COVID-19. I think that is one remarkable aspect of our platform.
David Brühlmann [00:08:32]:
That's remarkable, and very fast. And the advantage I see with your platform is that because you're scaling out, it's very easy to expand production rapidly and produce massive amounts of vaccine in a short time. On our podcast we usually focus on human medicine, so I'd like to take a quick deep dive into the animal side of things, because that's also very important. We often forget that there's a huge market for animal health. What are the main regulatory differences between the two? I'm sure there are many more details, but can you give us the two-minute version?
Masafumi Osawa [00:09:14]:
Regulatory pathways for silkworm-derived products differ significantly between human and animal health. Human vaccines follow globally harmonized standards, but because silkworm-derived antigens are unprecedented, we must work closely with Japan’s PMDA (Pharmaceuticals and Medical Devices Agency) to define raw material controls, GMP, CMC expectations, and quality control frameworks.
In the animal health sector, pathways differ by country. In Vietnam, our PCV2 product was registered as a functional feed additive rather than as a pharmaceutical, enabling rapid market entry. Companion animal vaccines follow pharmaceutical regulatory frameworks, but typically with shorter timelines than human vaccines. These differences allow us to pursue a staged development strategy—starting with faster, more accessible applications, generating real-world validation, and gradually advancing toward more tightly regulated markets.
David Brühlmann [00:10:18]:
Okay, so it’s country-dependent. And as I recall, there are also major differences between human health and animal health regulations?
Masafumi Osawa [00:10:27]:
Yes. However, for companion animals, regulatory standards are often harmonized through VICH (International Cooperation on Harmonisation of Technical Requirements for Registration of Veterinary Medicinal Products). For products like our PCV2 immune-enhancing feed additive, since there is no comparable product globally, the regulatory classification depends on how local authorities decide to position it.
David Brühlmann [00:10:55]:
Let’s look ahead. You’re still a relatively young company. What does the future hold? If we look at therapeutic areas, product types, or services, what are your next steps?
Masafumi Osawa [00:11:10]:
While vaccines represent our core focus, the silkworm system has broader therapeutic potential. Many antibodies and complex recombinant proteins are difficult to express in standard systems due to folding challenges or instability. Silkworm pupae, with their diverse cell types and chaperone-rich environment, offer an alternative solution for these difficult targets. Sustainability is another strong feature of silkworm biomanufacturing. Because the system requires minimal water, no bioreactors, and low energy inputs, the overall environmental footprint is significantly lower than conventional platforms. This opens possibilities for distributed manufacturing models where production can occur closer to the end user, including in emerging regions with limited infrastructure. In the long term, the flexibility of small-batch production means that silkworms may contribute to personalized biologics or rare disease therapeutics.
David Brühlmann [00:12:16]:
When do you think we’ll see the first biologic approved that was produced in silkworms? Do you have a sense of timing—five years, ten years?
Masafumi Osawa [00:12:27]:
That’s very difficult to answer because we are about to enter a Phase I clinical study next year—next summer, to be precise. I don’t know how many more years it will take, but it’s becoming very real. In the past, no one believed that a live silkworm body could be a source of APIs or vaccine antigens, but now it’s becoming a reality. Entering a Phase I clinical study means that products derived directly from silkworms are about to be administered to humans. So I cannot give a timeline, but this is a major step.
David Brühlmann [00:13:10]:
That’s a major milestone and shows that you’ve done the homework and achieved initial regulatory acceptance. Obviously, there’s still a lot ahead, but entering Phase I trials shows that regulatory bodies see the potential and trust the technology.
Masafumi Osawa [00:13:32]:
Yes. Regulatory authorities now accept our quality control strategy and how we manage consistency and safety, which opens the door to further pharmaceutical development opportunities.
David Brühlmann [00:13:45]:
Absolutely. You’ve demonstrated proof of concept, and once that foundation is laid, you can build on it. That’s wonderful. Before we wrap up, Masa, is there any burning question I haven’t asked that you’d like to share with our biotech community?
Masafumi Osawa [00:14:04]:
One thing I may not have mentioned is the production volume of a single silkworm pupa. Productivity is another strong advantage. One silkworm pupa, about 2 to 3 centimeters in size, can express 10 to 20 milligrams of norovirus virus-like particles (VLPs). After purification, this typically yields 1 to 2 milligrams per pupa, which is still a substantial amount. This is why scaling out is such a strong advantage of our platform. If we need 100 milligrams of product, we simply require 100 silkworm pupae. The total space needed is about the size of a laptop. Compared to large-scale manufacturing equipment, this is extremely compact, making it a key benefit of our platform.
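The scale-out arithmetic Masa describes can be sketched as a quick back-of-envelope calculation. This is an illustrative sketch only, using the per-pupa figures quoted in the conversation (10–20 mg expressed, roughly 1–2 mg purified per pupa); the function name and defaults are ours, not KAICO's:

```python
import math

# Back-of-envelope scale-out arithmetic using the figures quoted in the
# conversation: one pupa expresses 10-20 mg of norovirus VLPs and, after
# purification, yields roughly 1-2 mg of product.

def pupae_needed(target_mg: float, yield_mg_per_pupa: float = 1.0) -> int:
    """Pupae required to reach a target purified mass at a given per-pupa yield."""
    return math.ceil(target_mg / yield_mg_per_pupa)

# At the conservative 1 mg/pupa purified yield, 100 mg of product
# needs 100 pupae -- roughly a laptop-sized footprint.
print(pupae_needed(100))  # 100
```

Because demand maps linearly onto pupae count, capacity planning reduces to this single division, which is the essence of scale-out versus scale-up.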
David Brühlmann [00:15:09]:
As we wrap up, Masa, what is the most important takeaway from our conversation?
Masafumi Osawa [00:15:15]:
Silkworms were once used purely for silk production, and their pupae were often considered waste. Today, those same pupae have the potential to address major global health challenges and offer new modalities for vaccines and therapeutic proteins. If any researchers listening are struggling with difficult protein expression—whether VLPs, membrane proteins, or complex antigens—I would be very happy to explore how we can support your R&D.
David Brühlmann [00:15:49]:
This has been great, Masa. Thank you for sharing your work and for helping democratize life-saving therapies by pushing boundaries beyond what many people think is possible. Where can people connect with you?
Masafumi Osawa [00:16:08]:
Please feel free to connect with me on LinkedIn: Masafumi Osawa.
David Brühlmann [00:16:17]:
There you have it, Smart Biotech Scientists. Please reach out to Masa and learn more about the technology. Once again, thank you very much for being on the show today.
Masafumi Osawa [00:16:28]:
Thank you very much for having me, David. It was my pleasure.
David Brühlmann [00:16:33]:
Thank you for joining us for this deep dive into silkworm-based biomanufacturing with Masafumi Osawa. From ancient silk production to modern vaccine development, KAICO is proving that nature still has lessons to teach us about efficient bioprocessing. If this episode expanded your thinking about alternative expression platforms, please leave a review on Apple Podcasts or your favorite platform. Your feedback helps us bring you more cutting-edge biotech insights. Thank you so much for tuning in today and I'll see you next time.
All right, smart scientists, that's all for today on the Smart Biotech Scientist Podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite podcast platform. By doing so, we can empower more scientists like you. For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let's continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Masafumi Osawa
Masafumi Osawa brings more than a decade of experience in the pharmaceutical sector, with a strong focus on driving innovation to address global health needs. He is the Business Development Lead at KAICO Ltd., a Japan-based biotechnology start-up specializing in recombinant protein production using silkworms as biological reactors.
At KAICO, he leads strategic partnership development, represents the company at industry events and technical forums, and applies his strengths in market analysis, CRM, and communications to clearly articulate KAICO’s vision and promote its distinctive technologies, including oral vaccine platforms for both human and veterinary use.
Connect with Masafumi Osawa on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
For centuries, silkworms have spun threads that bound empires and launched markets. Today, the quiet revolution brewing at KAICO is transforming these creatures from textile icons into potent bioreactors, with the potential to rewrite the rules of recombinant protein production.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Masafumi Osawa. Trained in cultural anthropology—and shaped by frontline experiences in pharmaceutical business development—Masafumi now leads global strategy at KAICO.
His journey, from observing healthcare access disparities in Indonesia to championing silkworm-based biomanufacturing, brings a fresh perspective that’s bridging science, business, and public health in unexpected ways.
Right now we are working on developing a human norovirus vaccine, which is about to enter a Phase I clinical study next year. Our local authority asked us how KAICO manages the quality of the silkworms. One answer is that we use SPF-grade parent brood (PB) from specialized sericulture facilities and strictly control diet quality, rearing conditions, environmental monitoring, and breeder documentation.
We also collaborate closely with the PMDA, Japan’s Pharmaceuticals and Medical Devices Agency, to establish acceptance criteria and ensure alignment with pharmaceutical expectations with respect to variability among individual silkworm parent broods.
David Brühlmann [00:00:50]:
For over 4,000 years, silkworms have spun silk that would eventually connect civilizations through ancient trade routes. But what if these creatures could do more than weave fabric? What if they could manufacture life-saving biologics? Today’s guest, Masafumi Osawa from KAICO, is pioneering exactly that transformation. His team has developed the silkworm–baculovirus protein expression platform that turns these living organisms into natural bioreactors. Join us as we explore how this technology is evolving from university research to GMP manufacturing reality and what it means for the future of protein production.
Welcome Masafumi, to the Smart Biotech Scientist. It's good to have you on today.
Masafumi Osawa [00:02:51]:
Thank you very much for having me today, David. I'm truly excited to be here.
David Brühlmann [00:02:56]:
Masafumi, share something that you believe about bioprocess development that most people disagree with.
Masafumi Osawa [00:03:04]:
Many people assume a living organism can't be a reliable or consistent biomanufacturing unit, especially when it comes to pharmaceutical-grade materials. But based on our work, I believe the opposite is true. A silkworm can be a highly powerful and surprisingly consistent bioreactor, capable of producing a wide range of recombinant proteins, including vaccine antigens. There has never been a human medicine produced directly from a live silkworm. That's precisely why people question the quality and safety. Yet our experience is showing that silkworm-derived proteins can meet modern pharmaceutical expectations. And with the right controls, this platform can create new possibilities for biomanufacturing.
David Brühlmann [00:03:53]:
I'm looking forward to our conversation, Masafumi, to unpack this and to understand how you produce pharmaceuticals in silkworms. Before we do that, let's talk about yourself. Draw us into your story—what sparked your interest in biotech, and what were some pivotal moments that led you to your current role?
Masafumi Osawa [00:04:15]:
Unlike most of the listeners of Smart Biotech Scientist, my academic background is actually in cultural anthropology, not molecular biology or biochemistry. So during my university studies, I conducted fieldwork on Indonesian society as part of my research. Indonesia is a country rich in cultural diversity, but through my study I also witnessed large disparities in access to healthcare, limited access to clean water, financial barriers to basic medicines, and gaps in essential health services. These experiences made me want to contribute to global health in a more structured way, and this led me to join Towa Pharmaceutical, a Japanese company specializing in orally disintegrating tablets. I began my career as a medical representative and later moved into international business development. I conducted market research in Taiwan and Mongolia, identified product–market fit, and supported regulatory strategies for each region. It was fulfilling to watch new products reach patients and see their real-world impact.
But during the COVID-19 pandemic, coincidentally around the time my child was born, I began reevaluating my career path. I appreciate the importance of generics, but I also realized that generics only exist because someone first innovates and pushes the boundaries of drug development. I felt drawn toward innovation, toward work that might genuinely shift the trajectory of public health. Around that time, JEPRO was strengthening its focus on domestic vaccine development. That was when I discovered KAICO and the silkworm–baculovirus protein expression platform. My first reaction was a mixture of shock, fascination, and respect: the idea that a silkworm—a small and fragile creature domesticated for thousands of years purely for silk—could produce complex recombinant proteins that normally require expensive bioreactors was astonishing. I remember thinking, if this organism can produce such complex molecules, it could change the way we address disease.
However, when I entered KAICO, I faced an immediate challenge. I had only worked with small-molecule drugs and needed to learn the fundamentals of proteins, expression systems, and the differences between manufacturing platforms. Thankfully, with 90% of KAICO's employees coming from technical backgrounds, I was surrounded by researchers who generously supported my learning. This environment helped me rapidly bridge the gap, and interestingly, my anthropology background became a strength rather than a mismatch. Understanding how different societies collaborate, how decisions are made in different cultural contexts, and how technologies are adapted across regions became extremely valuable.
KAICO now works actively with international partners in Vietnam, Thailand, and Europe, and my ability to navigate cross-cultural communication has become central to my role. And today, as Business Development Lead, I introduce KAICO's platform globally, support partners working with complex protein targets, promote our first immune-enhancing feed additive product for pigs, and build co-development alliances not only for vaccines, but also for broader protein-based R&D programs. Looking back, joining KAICO was a natural extension of my original interest—connecting people, bridging cultures, and contributing to public health, this time through biotechnology.
David Brühlmann [00:08:11]:
I love listening and discovering your story, and I love seeing that it's not linear. You started at one end and now you ended up in biotech. How fascinating is that? And also what resonated with me is when you were saying you had a non-biotech background, but actually this very experience from your studies is a huge advantage. I love that. So tell us a bit more about how KAICO started as a university spin-off and what the vision is behind your silkworm platform.
Masafumi Osawa [00:08:45]:
Okay, so KAICO was founded at Kyushu University, one of Japan's leading institutions for entomology, with over 100 years of history and more than 450 unique silkworm strains. Our CEO, Mr. Yamato, encountered the silkworm–baculovirus expression system while studying in an MBA program. He was searching for dormant academic technologies with commercial potential, and when he discovered this platform, he recognized its significance immediately. Together with Professor Kusakabe, the principal scientist behind the system, they founded KAICO in 2018.
The company initially focused on two goals: producing recombinant research reagents derived from silkworms and collaborating with pharmaceutical companies to develop recombinant vaccine antigens and APIs. But beyond those objectives was a broader vision. If silkworm pupae could reliably express complex proteins at high yield, they could transform the landscape of biologics manufacturing. Making that vision a reality became KAICO's mission—changing the world with silkworms.
David Brühlmann [00:10:09]:
So let's look at the silkworm more specifically. How do you inject a recombinant baculovirus into a silkworm, and how do you quote-unquote culture your silkworms? Do you just let them grow and eat mulberry leaves, or are they swimming in a bioreactor, or how does that work?
Masafumi Osawa [00:10:28]:
Let me walk you through the basics. So the silkworm system works through a straightforward but powerful mechanism. First, we design the DNA sequence for the protein of interest. This sequence is inserted into a baculovirus vector that infects only silkworms. Once the recombinant virus is ready, we inject a small amount into the silkworm pupa. Over the next four to five days, the virus spreads throughout the pupa, infecting its cells. Each infected cell begins producing the target protein based on the inserted gene.
During the metamorphosis stage, the pupa becomes a highly active biological environment with diverse cell types, abundant molecular chaperones, and physiological conditions that support the correct folding and assembly of complex proteins. So practically speaking, the entire pupa functions as a compact, self-contained bioreactor with extremely high cellular density.
David Brühlmann [00:11:36]:
And these worms, where do you keep them? Are they in a container or where do they live?
Masafumi Osawa [00:11:44]:
So we purchase all the silkworm cocoons from local farmers or certified manufacturers. Inside each cocoon there is a pupa. We cut open each cocoon, remove the pupa, and place the pupae in containers, which are then stored in a refrigerator. In the refrigerator, they go into hibernation, and we can keep them for up to one month, or sometimes up to two months, before injecting the baculovirus.
David Brühlmann [00:12:14]:
And how long does the quote-unquote production process last? Is this a few days or weeks until you harvest?
Masafumi Osawa [00:12:22]:
So after inoculating the baculovirus, it takes just four to five days until the target protein is fully expressed inside the body of the pupa. After that, we purify the protein to obtain the reagent. So overall, it can take one to two months if we already have the right construct for the target protein.
David Brühlmann [00:12:41]:
And what I heard is that since the virus infects the entire worm, all different cells express your protein of interest. Is that correct?
Masafumi Osawa [00:12:51]:
Yes, you're correct.
David Brühlmann [00:12:52]:
Okay, let's compare this now to more traditional platforms such as E. coli or mammalian cells, for instance, or conventional insect cell cultures. What are, I'd say, the key advantages of your system, or perhaps also some drawbacks versus the other systems?
Masafumi Osawa [00:13:10]:
So when comparing silkworms to other expression systems, several differences stand out. Compared to E. coli and yeast, silkworms offer more natural folding and more mammalian-like post-translational modifications. These characteristics are especially important for structural proteins and multi-subunit complexes.
Compared to insect cell lines like Sf9 or Hi5, silkworms often provide higher yields, better folding integrity, and dramatically simpler scale-out production. Insect cell lines require large bioreactors, expensive media, and extensive facility infrastructure. Silkworms require none of those. And compared to CHO cells, the gold standard for therapeutic production, silkworms avoid the need for costly media, large-scale tanks, and significant water consumption.
So silkworm-based production follows a fundamentally different philosophy. Instead of scaling up by building larger tanks, we scale out simply by increasing the number of pupae.
David Brühlmann [00:14:19]:
And I guess because you're scaling out and not up, you can much more quickly adapt to different demands, right? Because you're much more flexible. Now, something that comes to my mind—and also that resonates with your first statement about what you think is different from perhaps other people in our field—there are a lot of advantages to using a living organism, as you said. Definitely the cost is much lower. That's one of them. And the drawback, or potential drawback, that comes to my mind is how do you manage the variability? Because if you, for instance, have a CHO cell line, that's a clonal cell line, so it's always the same clone. But I guess in your system you have some genetic variability between one worm and another. How do you manage this?
Masafumi Osawa [00:15:08]:
Thank you very much. That's a very important question. Actually, right now we are working on developing a human norovirus vaccine, which is about to enter Phase I clinical study next year. So our local authority also asked us how does KAICO manage the quality of the silkworms. One answer is that we use SPF-grade parent brood (PB) from specialized sericulture facilities, strictly control diet quality, rearing conditions, environmental monitoring, and breeder documentation. We also collaborate closely with the PMDA, the Japanese Pharmaceuticals and Medical Devices Agency, to establish acceptance criteria and ensure alignment with pharmaceutical expectations in terms of variability among individual silkworm PB.
David Brühlmann [00:16:00]:
And by doing this you can manage the variability. So you can make sure that from one batch to another you get the same product at the end of the day. Because in biologics we say the process is the product. So I imagine that in your system this is true as well.
Masafumi Osawa [00:16:17]:
Yes, you're right.
David Brühlmann [00:16:18]:
Now, I've read on your website that you describe your silkworm pupae as equivalent to roughly 100 to 1,000 milliliters of insect cell culture. Can you tell us a bit more about how you came up with these numbers, and what that means for the process economics? Does that mean that you can produce a lot more volume or more product on a smaller footprint?
Masafumi Osawa [00:16:42]:
So this number is not just an estimate; it comes from a published comparative study titled Comparison of recombinant protein expression in a baculovirus system in insect cells and silkworms. In that study, 45 different recombinant proteins were expressed in Sf9 cells, silkworm larvae, and silkworm pupae. When expression levels were normalized, the researchers found that a single pupa yields, on average, the equivalent recombinant protein amount produced by approximately 120 mL of Sf9 culture, with some proteins reaching much higher equivalencies. This is where the 100–1,000 mL per pupa framework originates.
From a production economics perspective, this has important implications. As you know, conventional recombinant protein production requires large bioreactors, sterilized media, and massive amounts of water, followed by extensive cleaning steps. These processes contribute significantly to environmental footprint and operating cost. In contrast, silkworm pupae function as self-contained biological culture vessels. They require no bioreactors, no large volumes of water, and no cleaning validation. The physiological environment is preassembled by nature, eliminating significant upstream costs.
For developers like us, this also means that scaling is far easier. Instead of scaling up by building larger tanks, which adds engineering risk, you simply scale out by increasing the number of pupae, just as I mentioned earlier. This reduces infrastructure burden and supports long-term cost efficiency. This advantage will make it easier to offer stable pricing and a consistent global supply.
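The equivalence Masa cites can be illustrated with a tiny calculation. This is a sketch only, using the published average he quotes (one pupa ≈ 120 mL of Sf9 culture); the constant and function names are ours, chosen for illustration:

```python
# Illustrative comparison based on the study average quoted above:
# one silkworm pupa yields roughly the protein of 120 mL of Sf9 culture.
SF9_EQUIV_ML_PER_PUPA = 120  # published study average cited in the episode

def equivalent_sf9_litres(n_pupae: int) -> float:
    """Sf9 culture volume (in litres) that n pupae notionally replace."""
    return n_pupae * SF9_EQUIV_ML_PER_PUPA / 1000

print(equivalent_sf9_litres(1000))  # 1,000 pupae stand in for ~120 L of culture
```

Seen this way, a tray of pupae replaces bioreactor volume outright, which is where the savings on tanks, media, water, and cleaning validation come from.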
David Brühlmann [00:18:48]:
Now I'm curious, Masa, how do you do the downstream processes? Because once you finish your production run, you have these worms that have expressed a certain amount of protein, and you mentioned that then you do the harvesting. So how does that work? And then how does the purification work? Is purification very close to a traditional purification process we see with E. coli, for instance, or with yeast? Or are there some major differences?
Masafumi Osawa [00:19:16]:
The downstream purification process is similar to conventional protein expression systems.
David Brühlmann [00:19:22]:
And how do you get the protein out of your worms? Is that similar to what you would do with E. coli, for instance? How do you quote-unquote harvest your worms?
Masafumi Osawa [00:19:31]:
So after the target protein is fully expressed inside the body of the pupa, we homogenize the whole pupa with buffer, then ultracentrifuge the extract and apply standard chromatography steps to purify the protein. So basically, the system is partly similar to the traditional approach.
David Brühlmann [00:19:54]:
Yeah, I see. And that's also where I imagine you start applying cleanroom and closed-process conditions to make sure that at the end of the day your product is sterile and safe to use. Correct?
Masafumi Osawa [00:20:08]:
Yes, you're correct.
David Brühlmann [00:20:09]:
I'd like to touch upon the quality side of things, because you mentioned glycosylation, which is an important part, especially for more complex molecules. Where do you see the limits of your platform versus CHO? Are there certain molecules that are too complex to produce in worms, or do you think you can produce pretty much any kind of molecule?
Masafumi Osawa [00:20:32]:
We can produce pretty much any kind of molecule. So far, one of our strongest technical advantages is our consistent expression success. Across more than 130 protein expression projects—many of them challenging targets—we have observed successful expression in every case. This includes membrane proteins, intrinsically disordered proteins, allergens, multi-subunit proteins, large virus-like particles, and even certain GPCRs. Many partners approach us after unsuccessful attempts in E. coli or mammalian systems. The silkworm pupal physiological environment provides favorable conditions that artificial bioreactors struggle to replicate.
David Brühlmann [00:21:23]:
As you're interacting now with health authorities, I imagine you have some very interesting conversations with them, as this is a novel host and a novel way to produce pharmaceuticals. What are the unique GMP and regulatory challenges you have encountered so far with your living organism?
Masafumi Osawa [00:21:43]:
So because silkworms are living organisms, GMP considerations focus heavily on raw material controls. This overlaps with what I mentioned earlier. We use SPF-grade parent brood, and there are dedicated facilities that supply only SPF-grade, pharmaceutical-grade silkworms. These facilities strictly control diet, rearing conditions, environmental monitoring, and breeder documentation. So how to monitor safety and quality at this level is something that makes our discussions with local authorities quite unique.
David Brühlmann [00:22:27]:
That's it for Part One. We have explored how KAICO emerged from academic research and how silkworm pupae function as remarkably efficient bioreactors. Next time, we'll dive into production economics, post-translational modifications, and KAICO's vaccine pipeline. If you are finding value in these conversations, please leave a review on Apple Podcasts or your preferred platform. It helps other biotech scientists like you discover these practical insights.
Ever watched judges' faces light up during your pitch? Neither had I – until that competition day when everything changed. The stakes were high.
Ten teams, seven minutes each, and most presenters drowning their innovations in data tsunamis while executives checked emails. And we all wanted the five-figure prize money.
You're nodding because you've been there. That knot in your stomach before presenting? The fear that your brilliant science will get lost in translation? The voice whispering, "Just show the data and get off stage"? I've felt that too. It's like being fluent in a language nobody else in the room speaks.
When our turn came, we didn't start with methods or specifications. Instead, we told a story about frustrated scientists, failed batches, and patients waiting. The atmosphere shifted instantly. Phones went down. Questions became strategic. We won first prize not because our science was superior, but because our story made our impact unforgettable.
In the next ten minutes, I'll reveal exactly how we did it – and how you can do the same. Let's begin.
This concept is discussed in greater detail in the Smart Biotech Scientist Podcast, hosted by David Brühlmann, founder of Brühlmann Consulting.
Every compelling story follows a three-act structure that's been powerful since ancient times. This isn't just artistic tradition – it's how our brains naturally process information.
Let me show you this structure in action with perhaps the greatest product launch of all time: Steve Jobs unveiling the iPhone in 2007.
Jobs began with Act 1 – setting the stage: "This is the day I've been looking forward to for two and a half years." He established anticipation and context by reminding us of Apple's history of revolutionary products – the Macintosh that changed the computer industry and the iPod that transformed music.
Then came Act 2 – the problem. Jobs implied the problem existed in the fragmented, confusing world of separate devices that consumers struggled with. This created tension the audience wanted resolved.
For Act 3 – the resolution – Jobs delivered that unforgettable moment: "We are introducing three revolutionary products... a widescreen iPod with touch controls, a revolutionary mobile phone, and a breakthrough Internet communications device." He paused, repeated the list, then delivered the punchline: "Are you getting it? These are not three separate devices. This is one device. And we are calling it iPhone."
Notice what Jobs didn't do. He didn't start with technical specifications. He didn't begin with the development process. He created a narrative that built tension and then resolved it brilliantly.
For scientific presentations, this translates beautifully. Instead of diving into methods and technical details first, start with the human impact. Who benefits from your work and how? Then clearly define the problem or unmet need – the villain of your story. Finally, present your solution as the transformative hero, supported by focused, relevant data.
This structure works because it follows our brain's natural information-processing patterns. It creates tension that seeks resolution. It builds from context to solution, not solution to context. And it positions your data as support for a compelling narrative, not as the narrative itself.
Think about it – what's the last presentation that truly captivated you? I'd bet it followed this structure, perhaps unconsciously. The presenter likely started with why the work matters before explaining how it works.
Building on this three-act foundation, Donald Miller's StoryBrand framework provides a powerful seven-element structure that's particularly effective for scientific communication. I've been using this framework successfully for many years now in keynotes, pitches, technical presentations, and more – it consistently helps transform complex scientific concepts into compelling narratives that resonate with diverse audiences.
First, we have a Character – the patient, healthcare system, or company facing limitation. They encounter a Problem – the current technical, medical, or business limitation that's causing pain. Then they meet a Guide – your innovation or approach (not you personally). This Guide gives them a Plan – how your solution works (the science, simplified). The Guide then Calls them to Action – the decision or support you need. This helps them Avoid Failure – consequences of maintaining the status quo. And finally, it Ends in Success – a data-supported vision of improved outcomes.
Notice how this differs from conventional scientific presentations. Traditional presentations often position the researcher as the hero overcoming obstacles. But in effective storytelling, your audience is the hero. Your innovation is merely the guide helping them succeed. This subtle shift makes your presentation instantly more engaging because it centers their needs, not your accomplishments.
This isn't about manipulating your audience. It's about respecting how human minds process information. Even the most analytical brain responds better to structured narrative than to random data points.
Now, let's get practical. What about those high-stakes situations when you have just 3–5 minutes with decision-makers? This is where the Minimal Viable Pitch becomes essential.
This streamlined approach uses the same StoryBrand elements we just discussed, but boils everything down to the strict minimum. The goal is to be simple but not simplistic – it's a fine line.
You could go as far as writing just one sentence for each element:
Character: "Commercial manufacturing engineers struggle with batch failures costing $2M monthly."
Problem: "Current monitoring systems can't detect critical quality shifts until it's too late."
Guide: "Our real-time PAT platform uses novel spectroscopy to detect changes 4 hours earlier."
Plan: "Integration takes just four weeks with our plug-and-play system."
Call to Action: "Approve the $100K pilot in Plant 3 next quarter."
Failure Avoidance: "Without this, we'll continue losing 30% of batches to quality deviations."
Success: "With implementation, batch failures drop by 70%, saving $1.4M monthly."
The key principles here are starting with the end in mind – what decision do you need? One slide should equal one key message. Your data should support your narrative, not be your narrative. And technical details belong in appendix slides or follow-up materials.
This isn't about oversimplifying complex science. It's about prioritizing what matters most to your specific audience in your limited time slot.
I know what you're thinking. "But my topic is too complex for storytelling." Actually, more complex topics need stronger narratives, not weaker ones. Richard Feynman, Nobel laureate physicist, explained quantum mechanics through stories about spinning tops and everyday objects. He didn't simplify the science; he made it accessible.
Or perhaps you're thinking, "My boss expects technical presentations." That's a common challenge. The solution? Layer technical details within a narrative framework. Use appendix slides for deep dives after establishing relevance. Often, leadership appreciates this approach because it makes their decision-making process clearer.
Short on time? Start with just the opening two minutes – hook them first. Try this template: "Currently, [stakeholders] are struggling with [problem], costing [consequence]. Our [solution] addresses this by [approach], resulting in [benefit]." That opener alone can transform how your audience receives everything that follows.
How do you know if your scientific storytelling is working? Look for engaged body language during presentations. Notice if questions focus on implications and next steps rather than basic clarifications. Pay attention to whether people accurately relay your key points to others. And track if you're invited to present to broader audiences.
But the real measure is simple: did you move your audience to action?
Remember, brilliant science that no one understands remains just unrealized potential. Your ideas deserve better. By structuring them as compelling stories, you're not compromising scientific integrity – you're ensuring your science has the impact it deserves.
You're staring at your laptop screen, aren't you? Rehearsing that upcoming presentation in your mind, wondering if your slides have too much data or too little context. Maybe you're thinking, "I just need to show my results – that's what matters."
I understand completely. When you're racing against deadlines, mastering storytelling feels like one more impossible task on your already overflowing plate.
But remember what science teaches us: when we present information as a story, our audience's brains release dopamine that improves focus and memory. They produce oxytocin that builds trust and connection. They experience endorphins that create positive associations with your ideas.
This isn't just presentation theory — it's neuroscience. Your brain is literally wired to respond to stories. And so are the brains of every decision-maker, investor, and colleague you present to. You've spent years becoming an expert in your field. You've designed elegant experiments and solved complex problems. Telling your story effectively isn't betraying your science – it's ensuring it reaches the people who need it most.
Your research deserves more than a footnote in a journal. It deserves to change lives. And now you know exactly how to make that happen.
Need help with an upcoming presentation? Book a free 20-minute consultation. We'll help you get started crafting a compelling scientific story that resonates with your audience – whether it's for your next team update, executive briefing, or investor pitch. No obligation, just practical guidance to make your next presentation unforgettable.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
🧬 Stop second-guessing your CMC strategy. Our fast-track CMC roadmap assessment identifies critical gaps that could derail your timelines and gives you the clarity to build a submission package that regulators approve. Secure your assessment at https://stan.store/SmartBiotech/p/get-cmc-clarity-in-1-week--investor-ready
Ever stood in front of a room full of executives, your heart pounding, wondering why your brilliant science isn't connecting? You've spent months perfecting your data, yet their eyes are glazing over faster than cells in a bad freeze-thaw cycle.
I get it. You're thinking, "I'm a scientist, not a salesperson. My data should speak for itself." The frustration is real—you've dedicated your career to rigorous methods, not crafting stories. It feels almost wrong to "package" your science, like you're somehow betraying your training.
But here's the truth: the most brilliant bioprocess technology in the world changes nothing if nobody understands why it matters. In the next ten minutes, I'll show you exactly how I transformed a technical presentation into a compelling story that won first prize—without sacrificing scientific integrity. Your ideas deserve to be understood, not just documented. Let's begin.
This concept is discussed in greater detail in the Smart Biotech Scientist Podcast, hosted by David Brühlmann, founder of Brühlmann Consulting.
Picture this: A Zoom call with ten small boxes showing judges' faces, most with cameras off or looking down at other screens. We were one of the many teams to present that day, armed with a few slides filled with our most compelling data – carefully curated process optimization results, key analytical findings, and critical technical specifications that we knew would demonstrate the value of our innovation in our limited 7-minute window.
We had spent weeks preparing. Our science was solid. Our innovation had potential to monitor critical quality attributes in real time. But as we shared our screen, I could feel our opportunity slipping away. In the tiny thumbnails I could see, one judge was clearly typing emails. Another had that glazed-over expression that screams "I'm mentally somewhere else." Without body language cues or eye contact, we were losing them before we'd even begun.
That's when it hit me. These judges – a mix of executives and technical leaders – had to evaluate 10 complex projects in a few hours, all through the exhausting filter of video calls. They weren't specialists in our particular technology. How could we expect them to grasp our innovation's significance when we were just another set of slides on their screen?
Unlike most scientific presenters, we had purposefully chosen a different approach from the beginning. We started with WHY – sharing my experience from just a few weeks earlier, when I was part of a troubleshooting team where our innovation would have completely changed the game. I painted the picture of frustrated scientists, failed batches, and a therapy that couldn't reach patients reliably – then showed how our technology bridged that gap.
The atmosphere in the room shifted instantly. Judges put down their phones. They leaned forward. Questions became strategic rather than merely clarifying. While other teams had equally strong technical solutions, we won because our story made our impact memorable and clear.
One judge later simply told us, "Your pitch nailed it." We hadn't changed our technology – we had changed how we communicated it.
This experience transformed how I approach scientific communication forever and it revealed a paradox many of us face as scientists.
As scientists, we spend years mastering technical knowledge – cell culture optimization, analytical methods, protein characterization. We become experts at designing experiments and interpreting data. But here's the painful truth: most of us receive almost no training in how to communicate that knowledge effectively.
Think about the last scientific presentation you sat through. Chances are it started with methods and technical details. The slides were probably packed with small font and excessive data. The presenter likely followed the same chronological structure as a scientific paper. And emotion? Storytelling? These elements were probably nowhere to be found.
This approach works fine in the lab. But step outside that environment, and it fails spectacularly. Why? Because decision-makers – whether they're executives, founders, or investors – have limited time and varied technical backgrounds. Their brains, like all human brains, are wired for stories, not data dumps.
Even at scientific conferences, have you noticed how you remember the presentations with clear narratives while forgetting those that were technically sound but lacked a compelling story? That's not coincidence – it's neuroscience.
Like many scientists, you may recoil at the idea of "selling." It feels inauthentic, perhaps even contrary to scientific principles. But here's the uncomfortable reality: you're already selling, whether you acknowledge it or not.
You sell when you pitch your project to leadership for approval. You sell when you ask peers to collaborate. You sell when you present to investors for funding. You sell to regulatory bodies for approvals. And ultimately, you sell to patients and healthcare providers who need to adopt your innovation.
Simon Sinek captured this perfectly: "People don't buy WHAT you do; they buy WHY you do it."
There's a persistent myth in scientific circles that "good science speaks for itself." If that were true, the most funded science would always be the most technically sound. But is that what we observe? Often, the most funded science is well-communicated science – work whose importance is made crystal clear through effective storytelling.
Think of it this way: your brilliant bioprocess optimization means nothing if you can't secure the resources to develop it further. Your groundbreaking assay is worthless if regulators don't understand its value. Your life-saving therapy won't help patients if physicians can't grasp why they should prescribe it.
Let me share a fascinating experiment conducted by Rob Walker and Joshua Glenn. They purchased ordinary objects from thrift stores for around $1.25 each. Then they added fictional stories to each object and sold them online. The result? These $1.25 items sold at an average markup of 6,259%. Objects worth about $129 in total sold for nearly $8,000 – simply because they had stories attached to them.
Why does storytelling have such power? Neuroscience offers compelling answers.
When we experience suspense or cliff-hangers in a story, our brains release dopamine – a neurotransmitter that improves focus, motivation, and memory. When stories evoke empathy, our brains produce oxytocin – the "trust hormone" that builds human connection. And when stories include humor, our brains release endorphins – reducing stress and creating positive associations.
This isn't just psychological theory – it's biological reality. Our brains are literally hardwired to respond to stories.
The simple Pixar Story Spine format demonstrates how accessible storytelling can be:
"Once upon a time there was [blank]. Every day, [blank]. One day [blank]. Because of that, [blank]. Until finally [blank]."
Apply this to science, and suddenly complex information becomes digestible. Abstract concepts gain emotional resonance. Key points become memorable long after your presentation ends.
Consider a technical presentation about cell culture media optimization. You could start with methodologies and statistical analyses. Or you could begin with:
"Once upon a time, there was a promising therapy that couldn't be manufactured at commercial scale. Every day, batch failures threatened patient access. One day, we discovered a critical nutrient limitation. Because of that, we developed a new feed strategy. Until finally, we achieved consistent 95% batch success rates – meaning thousands more patients could receive treatment."
Same data. Completely different impact.
If you're feeling resistant to these ideas, you're not alone. Let's address the most common objections I hear from fellow scientists.
"Storytelling means sacrificing accuracy and detail." This assumes stories and data are mutually exclusive. They're not. Stories provide the framework into which technical details fit. Think of it this way: the story is the map, while data are the landmarks. Without the map, landmarks exist in isolation with no clear path between them.
"Emotion has no place in scientific communication." Research contradicts this directly. All decisions – even technical ones – have emotional components. We justify decisions rationally after making them emotionally. Even the most analytical mind responds to emotional engagement, often unconsciously.
"I'll lose credibility with my peers." This fear is particularly strong among scientists. But examine the most cited papers in your field. Chances are they tell compelling stories about why the research matters. Clear communication doesn't diminish credibility – it enhances it.
Before preparing your next presentation, ask yourself three questions: Who is the hero of this story (hint: it's your audience, not you)? What problem are they facing? And what action do I need them to take?
Remember, you're not "dumbing down" science when you tell stories – you're making ideas accessible. You're not "selling out" – you're ensuring your science has impact.
Try this simple exercise: practice explaining your current project to a smart 12-year-old. If they understand why it matters, you've found your story. If they're confused, keep refining.
The most brilliant science never changes the world if it stays trapped in the lab. Your ideas deserve to be understood – and storytelling is how you make that happen.
And speaking of making your ideas understood – in our next episode, I'll share a practical framework you can apply immediately. We'll explore the three-act structure for scientific presentations and I'll give you a step-by-step template for what I call the Minimal Viable Pitch – perfect for those crucial 3-5 minute opportunities with decision-makers. You'll learn exactly how to transform your next presentation from data-heavy to decision-ready.
I know what you're thinking right now. "This all sounds great, but I've got assays running, deadlines looming, and a team meeting in thirty minutes. When am I supposed to learn storytelling on top of everything else?"
I hear you. The weight of scientific excellence already feels crushing some days.
But here's the truth: storytelling isn't an extra burden—it's a lifeline. It's the difference between your brilliant work gathering dust and changing lives. Between getting funded or forgotten. Between influencing decisions or being ignored.
You've already mastered complex cell cultures and protein characterization. You've decoded genomic and metabolic mysteries and optimized bioprocesses. Compared to that, storytelling is the easy part.
The world desperately needs your innovations. And you may need additional funding to keep your startup going. Patients are waiting. Don't let your breakthroughs stay trapped in technical jargon and dense slides. Your science deserves to be understood. Your ideas deserve to spread.
And you, brilliant scientist, already have everything you need to make that happen.
Need help with an upcoming presentation? Book a free 20-minute consultation. We'll help you get started crafting a compelling scientific story that resonates with your audience.
Biomanufacturing has always dealt with the challenge of turning vast, complex datasets and intricate production steps into life-changing therapies. But when batch records multiply and process deviations loom, how do biotech teams make sense of it all? In this episode, we move beyond theory to the nuts and bolts of how AI—when thoughtfully deployed—can turn bioprocessing chaos into actionable intelligence, paving the way for the factory of the future.
Our guest, Ilya Burkov, Global Head of Healthcare and Life Sciences Growth at Nebius AI, doesn’t just talk about data wrangling and algorithms—he’s spent years building tools and strategies to help scientists organize, contextualize, and leverage real-world datasets.
Having worked across tech innovation and pharmaceuticals, Ilya Burkov bridges cutting-edge computation with the practical realities of CMC development and manufacturing, making him a trusted voice on how bioprocessing is rapidly changing.
AI isn't just a tool for faster experiments. It's transforming how we develop, how we optimize, and how we manufacture biologics from start to finish. When integrated thoughtfully, it can empower a lot of scientists to improve quality, to accelerate timelines, and ultimately it can help get a lot of therapies to patients faster. AI doesn't replace human expertise — it amplifies it.
David Brühlmann [00:00:28]:
Welcome to Part Two with Ilya Burkov from Nebius AI. In the first half of our conversation, we explored how AI is transforming process development from data overload to autonomous DOE studies. Now we're diving into the challenges many of you are facing: how to organize huge datasets, where to store your data, and what the factory of the future looks like. We'll also get practical, so we'll answer this question: If you want to start using AI tomorrow, where should you begin? Well, let's jump back in and talk about manufacturing's AI-enabled future.
Your company works across different parts of the industry, and even other industries. Previously, when I was working in technology innovation, we would always look above and beyond — at what other industries are doing — and try to learn from them, because there's no doubt that in a lot of innovations and trends, other industries are much further ahead than biotech; the biotech industry is more conservative, for good reasons. What do you think? Should we as biotech scientists learn from other industries? And where should we leverage technology, a mindset, or ways of collaborating to make bioprocessing even better?
Ilya Burkov [00:03:04]:
Yeah, that's a great question. When you look at it, no two processes are the same. When you're working with huge volumes of data — everything from drug discovery and development data to genomics to imaging data from CTs or MRIs — you're working with a lot of unstructured data. Understanding how to label it and how to prepare it beforehand is key, because there's no point in having tons and tons of data that you can't use for any kind of workload, with no deep understanding of what it means. So I think that understanding the statistical background and how we can use that data is key.
A lot of mathematics and a lot of algorithmic work is needed, irrespective of which industry you're coming from — understanding how you can really structure that data and prepare it for training runs. There are a lot of reports, and humans might miss a lot of these things. If you're not programming the code in the right way, the code is not going to fix it magically for you. So that needs to be done by hand.
In terms of specific industries, I don't think one industry does it better than the other. It's just that they're working with different types of data. When you're looking at drug discovery and drug development, a lot of the pharma companies are sitting on exabytes of data. It doesn't mean that they are ready to be used immediately.
David Brühlmann [00:04:29]:
Yeah, unstructured data — that's a huge challenge. If you look, for instance, at a manufacturing department, they have batch records, investigation reports, operator notes, and all kinds of analytical data. There's so much going on. And now, as companies move toward real-time release, I'm wondering: what is the right way to go about that? Where do you start, and how do you actually make sure you organize the data in the right way? Can you give us some advice here?
Ilya Burkov [00:05:01]:
Sure. So, I mean, generative AI can read and summarize and connect these data sources to really identify the patterns and root causes. It can be used as a tool to transform a lot of the raw information into more actionable intelligence. It can help teams prevent future deviations if there have been any in the past, and really optimize the processes a lot faster. And AI doesn't replace the operators, as I said. It doesn't replace the process engineers — it augments them. It is able to be used as a tool to read and synthesize unstructured reports.
AI itself provides insights that humans might miss. So you use it as a guide or a smarter decision process for the next batch. It's like giving a team a microscope for process intelligence, helping every production run learn from the last. You're giving them very, very clear insights to understand and then decide what direction to take from that.
David Brühlmann [00:05:59]:
And any advice about how to structure the data — like what system to use, especially if you don't have that many resources? Can you use a simple database for that, or do you need a sophisticated program? I mean, you see a lot of people building some simple AI stuff. I'm not so sure if this is suitable for the biopharmaceutical world, but what is your take on that?
Ilya Burkov [00:06:24]:
There are various tools out there — commercially available tools that work with databases and large datasets. At Nebius we have an in-house tool called TractoAI, which is used to accelerate data preparation for pre-training — to label the data, identify it, and prepare it for a large training run. But there are a lot of different tools on the market; it just depends on the volume, the size, and what people are using. For us, for high-performance compute, Tracto is very good at working with petabytes and exabytes of data. So when teams have huge volumes of information, especially if it isn't well structured, that's what we recommend.
David Brühlmann [00:07:05]:
And I guess speed is also an issue once you have that much data, right?
Ilya Burkov [00:07:09]:
Yes, speed. Even though it's text data, if you're working with petabytes or exabytes, everything's going to slow down. You make one mistake, and you have to repeat the run — and those kinds of processes typically take a few hours to a few weeks. So having the right tools in place will significantly reduce that time to market as well. The iteration period between training rounds needs to be as short as possible to reduce the time you spend on those processes. Nobody likes this work — it's not a fun thing to do — but it needs to be done, because otherwise you're going to get rubbish coming out.
David Brühlmann [00:07:48]:
I'd like to touch upon another part of manufacturing because we hear a lot about the factory of the future, and I know in bioprocessing a lot of people have talked about Industry 4.0. We probably are now at Industry 5.0 with all the AI and so on. What is the new trend there? Is bioprocessing going to evolve in the next few years?
Ilya Burkov [00:08:11]:
That's a great question. I'd love to think about the future. We don't know the answers, but I would envision an AI-enabled biomanufacturing facility that is fully interconnected — and by interconnected I mean sensors across every unit operation, from cell culture to purification to fill-finish — a factory that feeds all of its real-time data into an AI system. These AI systems then analyze, predict, and optimize continuously, live-adjusting parameters autonomously to maintain the yield, the quality, and the consistency of the product being made.
It's a self-learning ecosystem where every process informs the next. Having that automated would be incredible. Having this AI-enabled biomanufacturing process transforms production into a much more predictive and responsive ecosystem, where every part of the system corrects any mistakes or deviations that might occur in real time before they impact production and before the facility wastes time and effort. So the factory of the future for me is much faster, smarter, and far more reliable than anything that we see at the moment. Having those various checkpoints in place to automate the system — I think that would be ideal.
David Brühlmann [00:09:39]:
Yeah, it's definitely going to be exciting to see where everything is going. It's very hard to predict — there's so much going on every day. Let's make this very practical now. Let's assume I'm the CEO of a startup company and I want to make the best use of all the AI technologies out there. Where should I start? It can be very overwhelming, and it can also be dangerous to jump on every train — there are so many things going on. What should I pursue, and where should I maybe wait?
Ilya Burkov [00:10:12]:
That's a great one as well. I get asked that a lot, especially by teams that are just adopting AI, just starting to work with it. The key, as I always say, is to start with clean and structured data, because even a single process unit, like a bioreactor, can generate very valuable insights if the data is well organized. From there, teams should focus on high-impact areas like feed strategies or media optimization, where tiny incremental improvements in process parameters can yield quite significant gains in the end product.
And you don't need to have a full AI team from day one. Start with the data you already have and really clear measurable goals. Don't try to optimize everything at once — it's impossible to do. Pick one process that is well instrumented and has a high value — for example, upstream cell culture or a purification step. Collect the historical data and the real-time data. Apply simple predictive models, use those insights to then guide the decisions. And once you see that, make it very measurable and data-driven. I like to say data says a lot more than opinions. Have those measurable improvements so that you can expand to other areas.
Small pilot projects are the fastest way to demonstrate a return on investment and build confidence. So AI is the tool that can amplify a lot of this work. That, I would say, is the best way to start: by using AI to identify these patterns and predict the outcomes in one area. And the scientists always remain in control. They interpret the results and they guide the next steps. But that's the collaboration that's needed between technology and the scientists at the end of it. And don't think that you have to do everything at once — step by step. Start somewhere, work with that, and progress.
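To make the "apply simple predictive models" step concrete, here is a minimal sketch in Python. It fits an ordinary least-squares line relating one process parameter to final titer; the feed-rate parameter and all run data are hypothetical placeholders, not figures from the episode:

```python
# Minimal sketch of the "start simple" approach: fit a one-variable
# least-squares model on a handful of historical bioreactor runs.
# All numbers below are made up for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical historical data: feed rate (mL/h) vs. final titer (g/L)
feed_rates = [2.0, 2.5, 3.0, 3.5, 4.0]
titers     = [1.1, 1.4, 1.6, 1.9, 2.1]

slope, intercept = fit_line(feed_rates, titers)
predicted = slope * 4.5 + intercept  # predicted titer at an untested feed rate
print(f"slope={slope:.3f}, predicted titer at 4.5 mL/h = {predicted:.2f} g/L")
```

Even a toy model like this turns a spreadsheet of past runs into a first measurable prediction, which is exactly the kind of small, data-driven pilot described above.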
David Brühlmann [00:12:06]:
Yeah, exactly — start simple, start somewhere, keep going, and then improve as you progress. Add another layer as you grow.
Ilya Burkov [00:12:16]:
That's what science is. That's what science has been for centuries. You learn, you iterate, you repeat, and you improve.
David Brühlmann [00:12:23]:
Before we wrap up, Ilya, what burning question haven’t I asked that you think our biotech scientist community should hear about?
Ilya Burkov [00:12:32]:
Oh, that's an interesting one. I suppose you haven't asked much about using compute on-prem versus cloud. We can talk about that and the pros and cons of each system. You touched a little bit at the beginning on security and safety, but we also need to highlight that the level of scale is very different.
If someone is doing on-prem compute, they might have a section of their building filled with GPUs, with the required power to run them, and so on. But say they discover a process that makes things a lot easier for them, only it needs more compute—as they say in the industry, they want “to throw more compute at it.”
To do that in-house, they’d need to build the next building next door, host a few thousand more GPUs, connect all the power, make sure everything works, and get it all up and running. That process—from start to finish—would take months, if not years, for inexperienced groups, and it costs a lot of money. Millions of dollars are at stake there.
Whereas the other option is to keep the workload they’re comfortable running on-prem, and rely on the cloud for burst capacity or expansion needs—because those needs aren’t always constant. They may need 100 GPUs in-house, but then have a workload that needs 1,000 for half a year or a year, and after that period, they don’t need that many anymore.
If they build for the maximum in-house, it will be wasted—it won’t be used. So having the flexibility to scale up and down is key.
And I think a lot of companies are at a stage now where they compare the cost of building on-prem versus using cloud compute. Sometimes they don’t factor in everything. They think that if they buy it and use it, they won’t need to think about it later. But GPUs get outdated quickly. Every few years you get a newer, better version, and you’re essentially investing millions—if not hundreds of millions—into infrastructure that will be redundant in five years. So why take on all that burden and cost when you can rely on a cloud provider like Nebius?
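The on-prem versus cloud trade-off described above comes down to simple arithmetic. The sketch below compares the two options over the hardware's useful life for a one-off burst workload; every price and quantity is an invented placeholder, so only the shape of the comparison matters, not the totals:

```python
# Toy cost comparison for a bursty GPU workload (all figures hypothetical).
GPU_CAPEX = 30_000          # assumed purchase + install cost per GPU
CLOUD_RATE_PER_HOUR = 2.50  # assumed on-demand price per GPU-hour
HOURS_PER_YEAR = 8760

baseline_gpus = 100               # steady workload kept in-house
burst_gpus = 900                  # extra GPUs needed for a 6-month campaign only
burst_hours = HOURS_PER_YEAR // 2

# Option A: build in-house for the peak (1,000 GPUs); after the burst,
# the extra 900 GPUs sit idle for the rest of their useful life.
build_for_peak = (baseline_gpus + burst_gpus) * GPU_CAPEX

# Option B: own the baseline, rent the burst from a cloud provider
# only for the hours it is actually needed.
own_plus_burst = (baseline_gpus * GPU_CAPEX
                  + burst_gpus * burst_hours * CLOUD_RATE_PER_HOUR)

print(f"Build for peak:    ${build_for_peak:,.0f}")
print(f"Own + cloud burst: ${own_plus_burst:,.0f}")
```

Under these made-up assumptions, bursting to the cloud avoids paying for capacity that would be idle (and obsolete) long before it earns its keep; real numbers would also include power, cooling, and staffing.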
David Brühlmann [00:14:54]:
Great point. That’s an important question—whether it’s computing, relying on a manufacturer, or relying on a CRO for analytics. These are crucial decisions to navigate: what do you keep in-house and what do you outsource? So, with everything we’ve covered today, Ilya, what is the most important takeaway?
Ilya Burkov [00:15:18]:
Well, the most important takeaway is that AI isn’t just a tool for faster experiments. It’s transforming how we develop, optimize, and manufacture biologics from start to finish. When integrated thoughtfully, it empowers scientists to improve quality, accelerate timelines, and ultimately help deliver therapies to patients faster.
AI doesn’t replace human expertise—it amplifies it. The key idea is that the future of bioprocessing is a partnership: humans guiding strategy and AI providing predictive insights and autonomous optimizations. Together, they make processes faster, smarter, and more reliable.
And if there’s only one thing to remember from this entire podcast, it’s that investing in clean data, AI-driven tools, and skilled teams today is how companies stay competitive tomorrow. It determines how they grow, how they expand. Organizations that embrace AI now will define the next generation of biomanufacturing. Those that don’t risk being overtaken by the ones that do.
David Brühlmann [00:16:36]:
Thank you, Ilya, for sharing your perspective on AI and where the industry is heading. I think it’s an important conversation. Where can people get a hold of you?
Ilya Burkov [00:16:48]:
Absolutely. LinkedIn would be the first place to start. Nebius.com as well—we have a dedicated Life Science and Healthcare section. Feel free to reach out to me on LinkedIn or via email, the usual connecting sites.
David Brühlmann [00:17:02]:
All right, great. I’ll leave the information in the show notes. And Ilya, thank you very much for being on the show today.
Ilya Burkov [00:17:08]:
Thank you, David. It’s been really fun.
David Brühlmann [00:17:11]:
This wraps up our conversation with Ilya Burkov from Nebius AI. From predictive scale-up to autonomous production ecosystems, we’ve seen that AI isn’t replacing bioprocess scientists—it’s amplifying what you do best. The future of biologics manufacturing is happening now, and you are part of it.
If you found value here, please leave us a review on Apple Podcasts or wherever you’re listening from. Your support keeps this show going, so thank you so much. I’ll see you next time, and keep doing biotech the smart way.
All right, smart scientists—that’s all for today on the Smart Biotech Scientist Podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite podcast platform. By doing so, we can empower more scientists like you.
For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Ilya Burkov
As the Global Head of Healthcare and Life Sciences Growth at Nebius AI, Ilya Burkov focuses on driving cloud adoption across the EMEA region. His background, which includes a PhD in Medicine and eight years in the life sciences sector, allows him to bridge the gap between complex healthcare challenges and advanced AI solutions.
His role involves planning and executing sophisticated projects to deliver genuine value to customers and partners. He is dedicated to maximizing growth, managing significant portfolios, and cultivating strong relationships with C-level executives. Ilya is passionate about leveraging strategic methods and data analysis to accelerate innovation and transformation in healthcare.
Connect with Ilya Burkov on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
Across biotech labs, researchers swim in oceans of process data: sensor streams, run records, engineering logs. And still, crucial decisions get stuck in spreadsheets or are scribbled into fading notebooks. The challenge isn’t having enough information—it's knowing which actions actually move the needle in cell culture productivity, process stability, and faster timelines.
This episode, David Brühlmann brings on Ilya Burkov, Global Head of Healthcare and Life Sciences Growth at Nebius AI. With a career spanning NHS medicine, regenerative research, and cloud infrastructure, Ilya Burkov has lived the leap from microscope to server room.
He’s seen firsthand how digital twins, autonomous experimentation, and cloud-first strategies are shifting the way biologics are developed and scaled.
Bioprocessing teams generate massive amounts of data, but much of it is sitting in silos or spreadsheets. Sometimes I've seen it even sitting in notebooks—paper notebooks. So I think AI changes that by creating a living model of the process. It learns from the normal behavior of a cell culture, looking at the runs that are being processed, and it starts flagging deviations before they become a problem.
A human needs a lot of training to be able to do that. So instead of reacting to data, a lot of the teams can now anticipate what's going to happen. They can start adjusting feed rates or temperatures or harvest timings in real time.
David Brühlmann [00:00:43]:
Welcome to the Smart Biotech Scientist. Today's episode might transform how you think about bioprocess development. I'm sitting down with Ilya Burkov, Global Head of Healthcare and Life Sciences Growth at Nebius AI, who is bridging medicine, data science, and manufacturing reality. We are tackling the question every process-development scientist is asking: Can AI actually help us scale faster and smarter, or is it just hype? From drowning in analytics data to autonomous labs running your DOEs, let's cut through the noise and find what works.
Ilya, welcome. It's good to have you on today.
Ilya Burkov [00:02:37]:
Thanks, David. It's really great to be here. Thanks for having me.
David Brühlmann [00:02:41]:
Ilya, share something that you believe about bioprocess development that most people disagree with.
Ilya Burkov [00:02:48]:
So that's a great start to the session. I think most people still think that bioprocess development is mainly an experimental science: running more batches, collecting more data, tweaking parameters, and so on. But I actually believe that it's becoming a computational science. Relying on wet lab experiments alone is no longer viable; the best insights will come from smarter use of AI and simulations. That's the key to the progress and the development that's been going on.
The bioreactor of the future will be as much digital as it is physical. A lot of people still believe that intuition and experience are what drive really great bioprocess designs. But I think times are changing. The new generation of models is trained on enormous biological and process datasets, and that's when they start to see a lot of patterns that we cannot. I believe that human expertise will continue to guide the strategy going forward, but AI will drive a lot of the execution.
David Brühlmann [00:03:53]:
Yeah, it's definitely exciting to see where the industry is going. And before we dive a bit further into this topic, let's talk about yourself, Ilya, because you have a unique background. Tell us how you started in medicine and now actually ended up in AI. That's quite a stretch, isn't it? And tell us a bit how this all came about. I'm really curious about your story.
Ilya Burkov [00:04:15]:
It's been a wild journey, David. I'm the Global Head of Healthcare and Life Sciences Growth here at Nebius, based in the UK. I've been leading the vertical for about 14 months or so. We've already seen quite incredible innovation happening over that period. It's almost like you can say there's normal time and then there's AI time—it's like dog years, if you like.
I joined Nebius with over 15 years of experience within this sector, within the healthcare and life sciences sector. Before my time at Nebius, I spent about three years at AWS Cloud Services, which was a great introduction to the cloud, as they were initially the pioneers of a lot of the cloud technologies that we use today. And early on in my career, as you rightly said, I worked in the NHS in orthopaedics at Addenbrooke’s Hospital in Cambridge. I worked on biomarker identification for early disease onset, particularly looking at things like osteoarthritis and osteoporosis, and before that I worked in regenerative medicine. I also dabbled in some biomedical engineering as well.
And combining all of those things, I've always been fascinated with how biology works as a system. It's messy, it's complex, but at the same time, I think it's incredibly elegant. During my PhD, I spent hours and hours watching how small processes change everything. And later on, when I started working with a lot of the large-scale AI and compute infrastructure, that's when I realized that this is exactly the kind of complexity AI was built for. That, for me, was the aha moment. AI could finally let us see, predict, and really optimize biology in a way that human intuition could never do on its own.
So medicine taught me how fragile and complex life is, but technology has really been able to show me how powerful data can be. And that's what really fascinates me. And I know that AI will fundamentally reshape how we design and manufacture a lot of the biologics going forward.
David Brühlmann [00:06:10]:
We are in AI times, as you said. It's amazing. On the other hand, there is a huge challenge because now, as a bioprocess scientist, you're drowning in data. You can measure pretty much everything, you can generate a lot of data. But how do you actually make sense of this data? And how can you leverage the data you're generating to make better decisions in real time?
Ilya Burkov [00:06:36]:
Yeah, you're absolutely right. Bioprocessing teams generate massive amounts of data, but much of it is sitting in silos or spreadsheets. Sometimes I've even seen it sitting in notebooks—paper notebooks. So I think AI changes that by creating a living model of the process. It learns from the normal behavior of a cell culture, looking at the runs that are being processed, and it starts flagging deviations before they become a problem. A human needs a lot of training to be able to do that. So instead of reacting to data, a lot of the teams can now anticipate what's going to happen. They can start adjusting feed rates or temperatures or harvest timings in real time to really protect the quality and the yield.
So what's happening now is a rise of digital twins. That's something that's coming up very frequently. To put it simply, they're just AI models that mirror what's happening inside a bioreactor or a purification process in real time. They continuously learn from the sensor data and analytics. They help the operator test those what-if scenarios before even touching the actual system. That's something that no human can do at that scale, especially if there are multiple sites in question. And it's like having a virtual bioprocess engineer essentially working alongside you. They can spot the patterns, they can predict outcomes, and really suggest optimizations that a human would most likely miss.
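The "flagging deviations before they become a problem" idea can be sketched as a simple baseline monitor: learn the normal behavior of a sensor from an in-control window, then alert when a reading drifts beyond a few standard deviations. This is only a toy z-score check with invented numbers, not a digital twin or any Nebius tooling:

```python
import statistics

def flag_deviations(readings, baseline_n=10, n_sigmas=3.0):
    """Learn 'normal behavior' from the first baseline_n readings,
    then flag indices whose value drifts beyond n_sigmas of that baseline."""
    baseline = readings[:baseline_n]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return [i for i, x in enumerate(readings[baseline_n:], start=baseline_n)
            if abs(x - mu) > n_sigmas * sigma]

# Hypothetical dissolved-oxygen trace (% saturation): stable, then a sudden drop
do_trace = [40.1, 39.8, 40.3, 40.0, 39.9, 40.2, 40.1, 39.7, 40.0, 40.2,
            40.1, 39.9, 35.0, 34.2]

alerts = flag_deviations(do_trace)
print("flagged indices:", alerts)  # the two low readings at the end
```

A production system would use a continuously updated model rather than a fixed window, but the principle is the same: the baseline defines "normal," and the operator is alerted before the deviation compounds.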
David Brühlmann [00:08:05]:
Where do you see the most immediate value of AI in these applications?
Ilya Burkov [00:08:11]:
I think in general for optimization of these processes, but also for understanding where we can save time and effort and accelerate a lot of the workflows. So as always with most businesses: how can you save time and money with a lot of this? And if you can automate the processes and you can use the data to accurately automate them, I think that's the biggest value.
David Brühlmann [00:08:35]:
Let's look at several applications—let me say it differently, several stages of the development. If we start early on in the drug discovery, what do you see happening there and what are the key drivers of change there?
Ilya Burkov [00:08:51]:
So we're seeing development timelines compress dramatically, from idea to clinical trial. AI-driven lab automation now allows a lot of the initial processes that used to take months to be done in weeks, and processes that took years to be done in months. There's a lot of robotics being involved and included in drug discovery. It can run hundreds of small-scale experiments in parallel, while at the same time AI models learn from every run, and those parameters are refined and adjusted in real time. So for me, it creates this kind of continuous feedback loop between the design, the experiment, and the insights, and essentially what you're trying to do is accelerate that early development and get that drug to market quicker. What's happening, for me, is a convergence of robotics, AI, and very high-performance computing.
David Brühlmann [00:09:53]:
And what do you see at the next stage? Once you have a protein, you have a sequence, you need to produce that in the cell line. Do you see AI technologies come in, for instance in the vector construction, or even in the cell-line selection? What is going on in that space?
Ilya Burkov [00:10:12]:
Absolutely. So the magic lies in how fast the loop now closes. AI models are designed to improve protein sequences or cell lines, various automated systems then test them, and the results come back almost instantly. They're fed back into that model, and every cycle—when you're looking at that—makes the algorithms a lot smarter. So instead of waiting months for manual cloning, screening, and analysis, you can now iterate dozens of times in a few weeks.
And again, that saves time, that saves money, that saves the capacity of the people working on it. It's the same principle that made software development faster, just applied to biology. You don't need to do these things manually anymore. There's a lot of advancements that can be done and accelerations that can be made to amplify this.
David Brühlmann [00:10:59]:
What is your vision? I'd say—do you think we will always have a combination of in silico methods and wet lab, or do you think eventually in certain areas we will not run experiments anymore?
Ilya Burkov [00:11:15]:
I think there will always be experiments. I don't think that it's going to reduce the wet lab work. I think what it can do is reduce the number of failed wet lab experiments, so it will be able to predict which ones need to be done and run in a wet lab and which experiments don't, because it can virtually iterate a lot of the processes and say, yes, this makes sense or this does not make sense. So I would say there's always going to be a combination of analytical and computational workflows as well as wet lab work. It's just that we will have a lower failure rate in the wet lab because you can predict it in advance.
David Brühlmann [00:11:52]:
You mentioned that AI enables us to go much faster, and this results in lower costs. I think this is a big driver and a big need in our industry, because unfortunately a lot of therapies are not yet accessible to a widespread population because of the cost. And one area where you can also save a lot of cost is in process development. So if you can speed that up, if you can reduce the number of experiments, you can save a lot of money. If we focus now on process development—the screening—a lot of companies run screening experiments, whether it's in 96-deep-well plates or in Ambr 15 or Ambr 250 systems. What do you think is the most powerful strategy to accelerate process development?
Ilya Burkov [00:12:39]:
Yeah, that's a great question. And machine learning is changing the mindset from “test everything” to “test the right things.” It learns by looking at historical process data across multiple parameters. So in bioprocessing, looking at temperature, pH, feed strategy, yield and so on, machine learning models can predict which combinations are most likely to succeed. Instead of running hundreds of experiments, the teams might only need to focus on a few dozen to reach the same level of process confidence. And that's how you compress months of trial and error into a few targeted iterations.
So even when you're looking at every bioreactor run that produces a huge amount of data—and historically most of it has just sat in notebooks and spreadsheets—machine learning models can now capture this data across these runs, learn from the patterns, link the process parameters, and understand how that affects the outcomes and the quality. So the more data that you feed into them, the smarter they get. Meaning that the new experiments add exponentially more insights, and that in itself results in a faster, more robust process with fewer total runs.
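A minimal version of moving from "test everything" to "test the right things": fit a cheap surrogate model on historical runs, score every candidate condition, and send only the top-ranked few to the wet lab. The k-nearest-neighbour predictor and all run data below are illustrative stand-ins for whatever model a team actually uses:

```python
import math

# Hypothetical historical runs: (temperature in deg C, pH) -> titer (g/L)
history = [((36.0, 6.9), 1.2), ((36.5, 7.0), 1.6), ((37.0, 7.1), 2.0),
           ((37.5, 7.2), 1.8), ((38.0, 7.0), 1.1)]

def predict(params, k=2):
    """k-nearest-neighbour titer estimate from historical runs."""
    nearest = sorted(history, key=lambda run: math.dist(run[0], params))[:k]
    return sum(titer for _, titer in nearest) / k

# Candidate conditions we *could* run; rank them and run only the best few
candidates = [(36.2, 6.95), (37.1, 7.1), (37.8, 7.15), (38.2, 6.9)]
ranked = sorted(candidates, key=predict, reverse=True)
shortlist = ranked[:2]  # send only the 2 most promising to the wet lab
print("run these first:", shortlist)
```

Instead of screening all four conditions, the team runs two; each completed run is then appended to `history`, so the next round of predictions is better informed, which is the feedback loop described above.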
David Brühlmann [00:13:53]:
And what would your advice be? Because I think what you're saying—that you have a lot of data sitting somewhere, even sometimes on a spreadsheet or in a notebook or somewhere—I think bigger pharma has done a lot of work in that to streamline it. But I think where it can be challenging is in the smaller company or a mid-sized company where you either don't have the expertise in-house or you just don't have the resources because you have to focus on your assets. What would be some simple strategies to store your data in the same place or in the same format, or whatever that might be, to leverage your data?
Ilya Burkov [00:14:30]:
I mean, it really depends on the workflow and what you're used to—how you handle the data, actually getting access to it. But cloud seems to be a very good answer to that because there are a number of locations that some of this information is coming from. You need to make sure that it's happening in real time. You need to have continuous understanding of what data you have access to. When you're looking at GPUs and using them as a powerful tool for a lot of this acceleration, again, doing that on-prem and in one site is a limiting factor. I'd say accelerating the typical workflows, adding the data into the cloud infrastructure that's safe and secure, would be the biggest starting point for that.
David Brühlmann [00:15:13]:
Speaking of cloud, I hear now some people probably saying, well, wait a minute, there is a security concern or we have highly sensitive data. How do you handle these kinds of objections?
Ilya Burkov [00:15:26]:
Absolutely. I mean, it's the same way that data is stored securely on-site. If you work with a company like Nebius, security is the starting block for everything we do. We have all of the ISO certifications necessary to adhere to the worldwide standards. We have SOC 2 Type II, we have HIPAA, we have all of the essentials for storing the data. But we also have specific locations which are very, very secure, in the sense that even for the locations themselves, when a data center is built, there are standards that need to be adhered to so that if there are any fires in the nearby area or if there are any conflicts in the geography, the data center is fully secure and fully protected.
Both from environmental factors, but also risk of hacking or physically actually getting in. Those are very high-level, highly secure facilities. You can visit some of our data centers that we have around, and you'll have to get your passport out just to enter.
David Brühlmann [00:16:26]:
Now, a lot of people are talking about using AI, machine learning models, digital twins in the development space. I think a lot of people have found powerful ways to significantly reduce experimental runs and leverage data to predict outcomes. I would like to have your perspective on what comes after process development, when you scale up to large scale, whether it's pilot scale or even commercial scale, because that's usually where a lot of things can go wrong if you have not done the homework. If you have characterized your bioreactors well, it's usually almost a routine operation, but you need to put a lot of effort into that to fully understand what's going on. How do you see it at the moment — what are some powerful technologies we can use to simplify and streamline scale-up?
Ilya Burkov [00:17:18]:
Yeah, absolutely. I mean, scale-up is where biology meets economics, and it's also where a lot of good ideas fail. I'd say that AI helps by identifying scale-up risks before they become expensive. Models trained on multiscale data from lab, pilot, or production runs can predict how parameters like mixing, oxygen transfer, or shear stress will change at larger volumes. That lets teams adjust the process early rather than discovering issues after investing millions into a new bioreactor suite or whatever workload they have.
Every process run leaves digital fingerprints — how the cells respond, what conditions caused drift, or what correlated with yield. AI can be used to connect those dots across scales and really understand what makes a process stable or fragile. And when you scale up, you're not guessing; you're building on a data-driven, very good understanding of what actually drives consistency. That is the difference between hoping a process scales and factually, with data, knowing that it will. I think that's key.
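As one concrete example of a parameter that shifts with scale, oxygen transfer is often estimated with a Van't Riet-type power-law correlation linking the transfer coefficient kLa to power input per volume and superficial gas velocity. The constants below are common textbook defaults used purely for illustration; a real scale-up model would be fit to a company's own multiscale data:

```python
def kla_vant_riet(power_per_volume, gas_velocity,
                  k=0.026, alpha=0.4, beta=0.5):
    """Van't Riet-type correlation: kLa = k * (P/V)**alpha * (v_s)**beta.
    P/V in W/m^3, v_s in m/s; returns kLa in 1/s.
    k, alpha, beta are illustrative defaults, not fitted values."""
    return k * power_per_volume ** alpha * gas_velocity ** beta

# Hypothetical lab-scale vs. production-scale operating points:
# at large scale, power per volume typically drops while gas velocity rises.
kla_lab  = kla_vant_riet(power_per_volume=1000.0, gas_velocity=0.005)
kla_prod = kla_vant_riet(power_per_volume=500.0,  gas_velocity=0.02)

print(f"lab kLa = {kla_lab:.4f} 1/s, production kLa = {kla_prod:.4f} 1/s")
```

Running the comparison before committing to a vessel geometry is exactly the "adjust the process early" step: if the predicted production kLa cannot keep up with the culture's oxygen demand, that is cheaper to learn in silico than in a new bioreactor suite.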
David Brühlmann [00:18:35]:
Have you seen case studies where, for instance, companies combine different kinds of data? Let's say people are doing CFD simulations, and then you have process data, you have more engineering data. How does AI facilitate that? Because I think AI is pretty good at connecting the dots between various data sets and very diverse things. What are you seeing there?
Ilya Burkov [00:18:59]:
Yeah, absolutely. AI is very good at doing very targeted workloads, and it's up to the human to connect those outputs from individual workloads. It's not going to do something for you unless it's trained to do that. So, as you said, if you're looking at one process, you find the results; you look at the next process, you find the results. The human is then in the loop to understand which direction we need to take, which information we feed back from one process to another, and how we can combine that. AI is not going to do it for you — there's no AI button. It's just going to help you do that workload faster.
You can have all of those records, you can have all those process deviations, everything in place. Generative AI is transforming how a lot of these companies make sense of the data. They can look through PDFs in real time, they can look at spreadsheets or handwritten logs. But it's the person, the scientist, who can qualify and quantify that just to make sure they're heading in the right direction. I think that's the best way to explain it in terms of efficiency and workflow. There's a lot of historical data that needs to be analyzed, and without having the person in the loop, there's no efficiency in that.
David Brühlmann [00:20:08]:
That's where we'll pause our conversation with Ilya Burkov. We've explored how AI is moving from buzzword to bioprocessing tool, helping you make sense of mountains of data, optimize upstream and downstream operations, and slash development timelines through autonomous experimentation. In Part Two, we'll tackle where you should store your data and what the factory of the future actually looks like. If this resonated with you, leave a review on Apple Podcasts or your favorite platform. It helps fellow scientists like you discover these conversations. See you next time.
If you could run an experiment in your computer instead of the lab—and it actually gave you answers worth acting on—would you try it?
Biologics formulation is often described as a high-stakes puzzle. Every recombinant drug is a chemical balancing act: choosing the right excipients, predicting stability, and sidestepping months of trial-and-error. But what if you could speed things up with a virtual test drive?
In this episode, David Brühlmann sits down with Giuseppe Licari from Merck Healthcare, whose expertise is quietly reshaping how proteins reach the clinic.
Giuseppe Licari brings a hands-on perspective to computational formulation development. With a track record in applying molecular dynamics simulations to real-world drug development, he’s not just theorizing about the future—he’s showing what’s possible now.
We’ve seen that in silico methods have been around for many years, and they are now a standard tool to support our work across several steps of drug discovery and development. I think they’re here to stay for the years to come.
So my message is: don’t be afraid to use them, explore them, and be curious. If these methods are helpful, why not embrace them?
David Brühlmann [00:00:36]:
Welcome back to Part Two with Giuseppe Licari from Merck Healthcare, where we’re tackling the toughest formulation challenges in biologics development. You can check out Part One of our conversation here.
How do you predict aggregation before manufacturing? What can simulations tell us about excipient-protein interactions? And when is it time to stop computing and start experimenting in the lab?
Giuseppe shares practical workflows, real success stories, and honest limitations of computational approaches. Plus, he’ll give one actionable step you can start using tomorrow.
David Brühlmann [00:01:14]:
Let’s jump back in. So we’ve done our homework: we’ve shown that our molecule is developable, we’ve assessed formulatability, and now we’re developing the proper formulation for the recombinant drug.
What are the specific in silico approaches you’re using? It feels to me like a very difficult puzzle — so many chemical components, different concentrations, and combinations. How do you find the needle in the haystack?
Giuseppe Licari [00:03:03]:
First of all, it’s important to remember that for formulation development, you need to look at how the protein behaves in its environment over time.
You can’t base your assessment on a static picture of the antibody — you need to watch the “movie.” In our field, this is generally done using molecular dynamics, a technique in computational chemistry that allows you to see how molecules move. You can literally see the protein dancing, if you want to imagine it like that, and observe how its conformation changes over time.
When you add excipients or buffers, you can see how those elements interact with the protein. From these interactions, you can extract conclusions about how the excipients might affect protein stability or alter its properties.
This is a critical point: looking at the protein “in motion.” In many ways, it’s like performing an experiment — but computationally. You simulate what happens in the lab: taking your drug substance, putting it in a specific environment, and observing its behavior over time.
Of course, it’s not exactly the same as the lab, but it’s a semi-realistic representation of reality. And it can still provide valuable, actionable insights that help guide your experimental work.
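To make the “watching the movie” idea concrete, here is a deliberately toy sketch in pure Python. It is not real molecular dynamics, which requires atomistic force fields and dedicated packages such as GROMACS or OpenMM; instead it simulates overdamped Langevin dynamics of a single coordinate in a double-well potential, a common minimal model for a molecule hopping between two conformational states. All parameters and the potential are illustrative choices.

```python
import math
import random

def langevin_double_well(steps=20000, dt=1e-3, kT=1.0, gamma=1.0, seed=42):
    """Overdamped Langevin dynamics in a 1D double-well potential
    U(x) = (x^2 - 1)^2, a toy stand-in for a protein hopping between
    two conformational states. Returns the position trajectory."""
    rng = random.Random(seed)
    x = -1.0  # start in the left well ("conformation A")
    noise = math.sqrt(2.0 * kT * dt / gamma)  # thermal kick per step
    traj = []
    for _ in range(steps):
        force = -4.0 * x * (x * x - 1.0)      # force = -dU/dx
        x += force * dt / gamma + noise * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

traj = langevin_double_well()
# Fraction of "frames" spent in the right well ("conformation B")
frac_right = sum(1 for x in traj if x > 0) / len(traj)
print(f"fraction of frames in right well: {frac_right:.2f}")
```

The trajectory is the “movie”: analyzing how much time the system spends in each state, and how often it switches, is the same kind of question real simulations answer about protein conformations in a given formulation environment.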
David Brühlmann [00:04:49]:
And how do you go about real stability studies? This is what takes time — you can’t compress it. Well, obviously, you can use stress conditions, but it still takes time to confirm outcomes. How do you combine in silico methods with real stability studies?
Giuseppe Licari [00:05:08]:
Of course. Real-time stability studies, like those used to assign shelf life, can’t be directly computed or simulated using in silico methods. One limitation is that simulations can only cover short periods of time — you can’t simulate six months of stability. So, that’s out of the scope of computational methods.
However, long-term protein stability is closely tied to the intrinsic properties of the molecule. That’s what we aim to study: which molecular properties correlate with long-term stability. Once you understand these connections, you can tweak formulations to adjust the protein’s behavior and get an estimate of long-term stability, even if the simulation only covers a short time. This is the strategy we typically follow.
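As a toy illustration of that strategy, correlating a short-timescale computed property with long-term stability, the sketch below fits a straight line between a hypothetical simulation-derived descriptor and measured aggregate levels, then uses the fit to estimate a new formulation. The data, the descriptor, and the linear model are invented for illustration; real workflows rely on much richer models and datasets.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Hypothetical data: a computed descriptor (e.g. a hydrophobic-patch
# score from a short simulation) vs measured % aggregate at 6 months
descriptor = [0.2, 0.5, 0.8, 1.1, 1.4]
aggregate_pct = [0.4, 1.1, 1.9, 2.4, 3.2]

a, b = fit_line(descriptor, aggregate_pct)
predicted = a * 1.0 + b  # estimate for a new formulation, descriptor = 1.0
print(f"slope={a:.2f}, intercept={b:.2f}, predicted %agg={predicted:.2f}")
```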
David Brühlmann [00:06:11]:
Looking ahead, our industry is evolving rapidly with all kinds of technologies — AI, machine learning, and so on. Where do you see formulation development going in the next few years? How will we develop formulations for recombinant proteins?
Giuseppe Licari [00:06:36]:
In the AI space — with machine learning — there are several efforts to predict protein formulation behavior. You need a significant amount of data to predict outcomes, like aggregate levels or low molecular weight species. One current limitation is that we don’t yet have enough data to build models that are robust across many proteins and systems.
That said, AI is already helping in discovery. Generative AI can design proteins with fewer chemical liabilities and improved developability. Improvements early in discovery will have significant downstream effects, including on formulation development and other steps. Optimizing proteins from the start can make the entire process faster and more efficient.
David Brühlmann [00:08:06]:
Yes, and with more advanced generative AI models and more powerful computational techniques, you might even be able to select the optimal sequence and predict its ideal formulation — if I’m thinking futuristically.
Giuseppe Licari [00:08:27]:
Absolutely. I’m looking forward to when in silico methods can predict the optimal formulation and experiments are only needed to confirm the predictions. Right now, we still need some screening and lab tests, but in a few years, we might be able to reduce lab work significantly. This will reduce timelines, lower costs, and allow us to develop more molecules for patients more efficiently.
David Brühlmann [00:09:10]:
And to make our conversation very actionable, I’d now like to look at how someone working in a smaller company could apply this. The challenge, especially as we look to the future, is that these new technologies are exciting and you could do a lot with them, but resources vary. In a larger company, and that’s also my experience, you’re pretty fortunate to have a lot of resources. When you’re working in a startup or a small-to-mid-sized company, resources are more limited. So what would be your advice for leveraging at least some of the potential of these in silico approaches, even with a smaller budget or without all these specialists in-house?
Giuseppe Licari [00:09:55]:
Yeah, sure. That could of course be a problem for small companies or startups. Maybe the solution would be to do a small feasibility study with an external provider. Nowadays, there are more and more companies providing software-as-a-service, for example, so you can test these approaches through a third party and see if they provide additional information or valuable outcomes for your project.
That’s something achievable even for smaller companies, because in silico methods are generally not very expensive computationally. You don’t need to invest too much to test a few things. So my advice would be to test with external companies and see if it works. Of course, the best solution is to hire a computational scientist internally to really build internal knowledge. It’s always better to have someone in-house, but we need to make compromises all the time.
David Brühlmann [00:11:02]:
Definitely. Absolutely. Before we wrap up, Giuseppe, what burning question haven’t I asked that you are eager to share with our biotech scientists?
Giuseppe Licari [00:11:12]:
Well, maybe “what comes next” in this field. That could be a burning question. My answer would be that with new machines, GPUs, and computational power increasing continuously, in the future we’ll be able to simulate bigger systems for longer periods.
I think simulations will eventually reproduce nearly any step in the development space, supporting more and more phases in drug development. With increasing computational power, we’ll be able to do more and more. I’m really looking forward to seeing how this field evolves in the years to come.
David Brühlmann [00:12:02]:
Giuseppe, what is the most important takeaway from our conversation?
Giuseppe Licari [00:12:08]:
I’d say that in silico methods have been around for many years and are now standard tools to support work across drug discovery and development. They’re here to stay. My message is: don’t be afraid to explore them, be curious, and use them when they’re helpful.
David Brühlmann [00:12:44]:
Yes, scientists, why not use these technologies? Giuseppe, where can people get a hold of you?
Giuseppe Licari [00:12:52]:
LinkedIn is the easiest way. People can search for my name, and I’m happy to exchange with anyone curious about these techniques.
David Brühlmann [00:13:03]:
Excellent. Smart Biotech scientists, please reach out to Giuseppe to exchange on in silico approaches. Once again, Giuseppe, it’s been fantastic. Thank you so much for being on the show today.
Giuseppe Licari [00:13:15]:
Thanks to you, David, for what you do and for the invitation. It was a pleasure for me as well. Thank you.
David Brühlmann [00:13:23]:
What a masterclass in computational formulation development. Giuseppe has given us a roadmap from theory to practice, showing how in silico approaches are becoming indispensable tools in the biotech scientist’s arsenal.
If these insights resonated with you, take 30 seconds to leave a review on Apple Podcasts or wherever you’re listening. Your feedback helps us bring more expert conversations to the biotech community.
Thank you for leaving a review, and thank you for tuning in. Until next time, keep making bioprocessing smarter, one innovation at a time.
Smart scientists, that’s all for today on the Smart Biotech Scientist Podcast. Thank you for joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite platform. By doing so, you help empower more scientists like you.
For additional bioprocessing tips, visit www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Giuseppe Licari
Giuseppe Licari has served as a Principal Scientist in the Computational Structural Biology group at Merck KGaA since 2022, where he helps design and implement digital tools to analyze biotherapeutic molecules. His work includes studying how various excipients contribute to protein stabilization, with the goal of informing and improving formulation development.
Before his time at Merck, Giuseppe worked at Boehringer Ingelheim, where he helped establish computational methodologies for assessing developability and forecasting protein behavior through in silico modeling.
He completed his PhD in Physical Chemistry at the University of Geneva in 2018, followed by a postdoctoral role in the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana–Champaign, focusing on molecular simulations of proteins interacting with biological membranes.
Connect with Giuseppe Licari on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
Imagine trimming years off biologics development—and catching problematic formulations long before the first pipette is even picked up. That’s the promise of computational approaches in protein drug development, shaking the dusty traditions of trial-and-error and ushering in a smarter, more collaborative era.
For this episode, David Brühlmann welcomes Giuseppe Licari, Principal Scientist in Computational Structural Biology at Merck KGaA. A chemist by training, Giuseppe Licari pivoted from hands-on wet lab science to the predictive power of quantum mechanics and in silico modeling.
Today, he stands at the intersection of computation and CMC development, pioneering digital tools to streamline candidate screening, de-risk formulation, and ultimately bring therapies to patients faster.
The change in perspective is that we are now going from having several sequences in developability to having only a single sequence. So that’s the big change in discovery. We have several sequences, and now we need to apply methods to select only one. Then, in development, we have only that one selected sequence — we cannot change it anymore. So that is a very big change.
Historically, there has been a lot of work in the literature on mutating the protein to improve the characteristics of the API. But once the sequence is fixed, there is not so much in the literature on how we can support formulation development under that constraint.
David Brühlmann [00:00:46]:
What if you could predict formulation failures before ever touching a pipette? Today we’re diving into the computational revolution transforming biologics development with Giuseppe Licari, who is a Principal Scientist in Computational Structural Biology at Merck KGaA.
From predicting aggregation hotspots to designing stable formulations in silico, Giuseppe reveals how computational approaches are slashing development timelines and catching problems that traditional methods miss.
Let’s explore how smart science is making formulation development faster, smarter, and more predictable.
Welcome Giuseppe — it’s great to have you on today.
Giuseppe Licari [00:02:42]:
Hi David, it’s my pleasure to be here with you, and thank you for the invitation to your podcast.
David Brühlmann [00:02:48]:
Giuseppe, share something you believe about bioprocess development that most people disagree with.
Giuseppe Licari [00:02:56]:
Well, in my field of drug product development, I believe we should set a “good enough” stability standard for our API to ensure we deliver the product safely and in a timely manner.
We don’t always need to maximize shelf-life stability — at least not for preclinical or Phase I studies. People might disagree and try to maximize shelf-life even early in the project, but I think that in Phase I we don’t need that.
Instead, we should aim to deliver the product as fast as possible, in a safe manner of course, to the patient and see if the project works.
David Brühlmann [00:03:46]:
Yeah, you’re making a great point — and I think it’s important to have a phase-appropriate approach, isn’t it?
Giuseppe Licari [00:03:52]:
Yes, because again, the problem is that we never know if a therapeutic concept will actually work, and we spend so much time and effort at the beginning of a project — and then the project may be stopped because there is no efficacy. So I think we need to target the right amount of effort according to the phase we’re in. And this is true for any function, for any step of the process. Of course, people have different views on this, but I think that as long as we deliver something safe for the patient, we are good.
David Brühlmann [00:04:26]:
I'm looking forward to diving further into today's topic — into developability and also formulatability. But before we do that, Giuseppe, let’s talk about you, because your path from physical chemistry to computational structural biology is fascinating. So take us back to the beginning and tell us what sparked your interest — and what were some interesting pit stops along the way.
Giuseppe Licari [00:04:54]:
Yes, I think it started during my undergraduate studies, when I first encountered quantum mechanics and theoretical chemistry. I'm a chemist by education, and in those courses I discovered the fascinating capability of these techniques to predict molecular properties without performing any experiment.
From the computer alone, we could calculate something “out of the blue.” That was incredibly fascinating to me and sparked my interest in in silico and computational methods.
At the same time, I had a genuine interest in developing new drugs to help patients. So I tried to combine these two passions, and I became more and more interested in computer-aided drug discovery.
Of course, I also worked in the lab during my undergraduate studies and during my PhD, but over time I leaned more and more toward computational work.
A very important pit stop in my career was my three-year postdoc in the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana–Champaign. I learned a lot there, gained extensive experience, and it opened up many perspectives for me. That was probably one of the most important parts of my career.
David Brühlmann [00:06:24]:
Can you paint us a picture? Because you're now at the intersection of computational biology and drug development, including formulation development. What does a typical day look like? Are you sitting in front of a computer all day doing modeling? Do you go into the lab? Is it a combination? What does that look like?
Giuseppe Licari [00:06:46]:
Yes — and importantly, what a computational chemist in a pharmaceutical company should not do is sit in front of the computer all day.
I truly believe it’s essential to constantly exchange with bench scientists, because we need to understand what is most valuable for them and where computational work can really make a difference.
So my daily work involves a lot of interaction with people in the lab, understanding their needs, and figuring out how computational approaches can support them.
Once we identify a need — for example, a specific screening or a particular question in a project — then I work on my side to carry out the in silico assessment. I provide my conclusions and recommendations, and then we discuss again and plan the corresponding lab activities together.
So it’s really a continuous exchange between the computational scientist and the lab scientist.
David Brühlmann [00:07:52]:
Let’s unpack this: developability, formulation development, in silico approaches. Starting from the very beginning — where do in silico approaches shine the brightest in drug development?
And when I say drug development, I include process development and the broader CMC landscape. You have seen many parts of biologics development, so where do you see the greatest benefit of these computational approaches?
Giuseppe Licari [00:08:23]:
First of all, in silico approaches are really vast, and there is a lot that can be done and applied in pharma. I’ll focus on the approaches that are most related to what I do. You mentioned this concept of developability. Maybe not everyone is familiar with it — it’s a relatively new way of thinking. We want to develop drugs that are safe, efficacious, and manufacturable, and the concept of developability was introduced a few years ago to help select a candidate with the highest overall developability profile.
From the experimental perspective, we can run many assays to understand how developable a drug is. However, we can also use in silico methods to screen properties of the API that can predict this developability profile.
So one major application is screening candidates in the final stages of discovery, when we might have, for example, 4 to 10 molecules. In silico methods can be very helpful in prioritizing the candidates — identifying the ones that might be more developable and more manufacturable.
Once the final candidate is selected and development officially starts, we can no longer change the sequence. But we can still apply several in silico approaches to help develop the best formulation. In this case, we don’t modify the sequence, but we can adjust what is around the API — the pH, the ionic strength, salts, surfactants, excipients.
So in silico methods can help filter out conditions that might not be favorable for your API.
David Brühlmann [00:10:25]:
And that’s such an important point — this concept of developability. For those who know me well, they know I’m really passionate about this topic because I strongly believe in starting CMC development early, already in discovery.
Doing this homework early and looking at the molecule’s properties ensures that it’s developable and, ultimately, manufacturable at larger scale.
Can you tell us what is typically evaluated in a developability assessment? What are the minimum protein characteristics you should analyze to make sure your molecule is developable?
Giuseppe Licari [00:11:06]:
Yes. There are several properties we can predict. For example, we can look at the hydrophobicity of the molecule and identify regions that are aggregation-prone; if necessary, we can mutate specific residues to remove these aggregation-prone motifs.
We can also predict the colloidal stability of the molecule — typically by looking at the charge distribution at different pH values, which gives us an idea of how stable the molecule might be in solution.
We can evaluate the chemical stability of the molecule, especially the residues in the CDRs — the complementarity-determining regions that interact with the antigen. These are crucial for antibody efficacy.
We can also assess immunogenicity, using several available computational techniques.
So yes, there is a wide set of properties we can predict, and these predictions can be very helpful in prioritizing the candidate. And you’re absolutely right that this must be done as early as possible — ideally with input from people in development.
This exchange between research and development is really critical, because development scientists can already provide insights related to the formulatability of the molecule. So it’s not only immunogenicity; it’s also whether the candidate can be formulated under the conditions required later in development.
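One of the properties mentioned above, the charge distribution at different pH values as a proxy for colloidal stability, can be sketched with a simple Henderson–Hasselbalch net-charge calculation. The pKa values below are generic textbook numbers (production tools use structure-corrected pKas), and the short sequence is invented purely for illustration.

```python
# Approximate side-chain and terminal pKa values (textbook numbers;
# real developability tools use structure-corrected pKas)
PKA = {"D": 3.9, "E": 4.1, "H": 6.0, "C": 8.3, "Y": 10.1,
       "K": 10.5, "R": 12.5, "N_term": 9.0, "C_term": 2.0}
BASIC = {"H", "K", "R", "N_term"}  # protonated form is positive

def net_charge(sequence, ph):
    """Henderson-Hasselbalch net charge of a peptide at a given pH."""
    groups = [r for r in sequence if r in PKA] + ["N_term", "C_term"]
    charge = 0.0
    for g in groups:
        pka = PKA[g]
        if g in BASIC:
            charge += 1.0 / (1.0 + 10 ** (ph - pka))   # fraction protonated
        else:
            charge -= 1.0 / (1.0 + 10 ** (pka - ph))   # fraction deprotonated
    return charge

seq = "DKERHK"  # toy sequence, for illustration only
for ph in (4.0, 6.0, 8.0):
    print(f"pH {ph}: net charge {net_charge(seq, ph):+.2f}")
```

Scanning such a curve shows where the molecule approaches zero net charge, a region often associated with reduced colloidal stability, which is one reason pH screening appears so early in formulation work.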
David Brühlmann [00:12:41]:
Oh yes, absolutely — you’re speaking my language here. It’s absolutely crucial. And I’ve unfortunately seen projects where this wasn’t done early enough, and the consequences were severe.
So for anyone listening: start early. If you’re in R&D, communicate with process development and manufacturing colleagues early on to get their input.
Now, I’m curious — let’s take aggregation as an example. When you do these in silico predictions, how accurate are they? And how much wet-lab work is still needed to confirm them?
Giuseppe Licari [00:13:23]:
Sure. The predictions can be quite accurate, but of course no prediction is ever 100% accurate. It depends on the methods you use.
Some approaches are sequence-based, meaning you don’t need the structure of the antibody to predict aggregation. But you can also use the 3D structure, because residues that are far apart in sequence may be close in space and form hydrophobic patches — something you cannot detect from sequence alone. That provides additional insight.
A good way to improve accuracy is to combine information from different methods — integrating sequence-based, structure-based, and other computational models into a holistic assessment.
From my experience, hydrophobicity can generally be predicted quite accurately. However, it’s also important to note that experimentally, hydrophobicity is difficult to measure directly, because aggregation isn’t driven by hydrophobicity alone — electrostatics and other factors play a role.
So when comparing predictions against experimental results, we need to keep in mind that the experimental measurement is itself a composite of multiple contributions.
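A minimal example of the sequence-based hydrophobicity scan described above is a sliding-window hydropathy profile using the published Kyte–Doolittle scale. The window size and flagging threshold below are illustrative choices, not validated cutoffs from any particular tool, and the test sequence is invented.

```python
# Kyte-Doolittle hydropathy scale (standard published values)
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_windows(sequence, window=5):
    """Mean hydropathy of each sliding window; higher = more hydrophobic."""
    return [(i, sum(KD[a] for a in sequence[i:i + window]) / window)
            for i in range(len(sequence) - window + 1)]

def flag_hydrophobic_patches(sequence, window=5, threshold=2.0):
    """Window start positions whose mean hydropathy exceeds the threshold
    (the threshold is an illustrative choice, not a validated cutoff)."""
    return [i for i, s in hydropathy_windows(sequence, window) if s > threshold]

seq = "MKTLLVVVILFAGSDEEDRK"  # invented test sequence
print(flag_hydrophobic_patches(seq))
```

Here the consecutive flagged windows pick out the Leu/Val/Ile-rich stretch, the kind of contiguous hydrophobic patch that sequence-based aggregation predictors report, while the structure-based methods Giuseppe mentions can additionally catch patches formed by residues that are distant in sequence but adjacent in 3D.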
David Brühlmann [00:15:00]:
And I imagine that the more experiments you run and the more data you generate across different molecules, the better the predictions become — especially as you incorporate hybrid models or even machine-learning approaches to improve accuracy further.
Giuseppe Licari [00:15:19]:
Exactly. If you use machine-learning models, then you really need a significant amount of data. You can associate many properties of antibodies to those data sets, including electrostatic contributions, and this may improve your predictions. This is already being done in several methods.
I think the biggest challenge is actually finding the data — and finding data that is representative of all the possible APIs we might have in development. Nowadays we don’t only have standard monoclonal antibodies; we also have many multispecific formats, ADCs, and other new modalities.
The issue with machine learning is that, once you train your model on certain categories, the predictions may not extrapolate well to new modalities. That’s why I really like physics-based methods — because you can extrapolate. You don’t need experimental data to train the model; you rely on the underlying physics, and you can still generalize to new molecule formats.
David Brühlmann [00:16:36]:
Your work has now evolved from developability into formulation development and formulatability. We’ll talk about formulatability in a moment. I’m curious — how different is your work now, and your in silico approaches, when the goal is to develop a formulation? Having the right formulation is such an important part of CMC development.
So let’s start there. How different is this compared to developability? And then I want to move on to the next question: What are the specific approaches used to come up with a formulation that will work for your biologic?
Giuseppe Licari [00:17:17]:
The change in perspective is that in developability we start with several sequences, whereas in formulation development we work with only one sequence — the selected drug candidate. That’s the big shift. In discovery, we apply methods to select one molecule among many. Once we enter development, we can no longer change the sequence.
Historically, there has been a lot of work on modifying or mutating proteins to improve API properties. But once the sequence is fixed, there is much less guidance in the literature on how to support formulation development.
That’s the space you’re asking about — how to support formulation development using in silico methods. Now the idea is not to change the protein, but to change whatever is around it. The protein is fixed, but in a formulation it “feels” a specific environment — a given pH, buffer species, salts, excipients, surfactants. All of these may influence its behavior.
I am really convinced, and I have plenty of evidence, that simulations and computational approaches can help us understand what happens to a protein in a given environment. That’s the shift when moving from developability to formulation development in silico.
David Brühlmann [00:19:02]:
Earlier you mentioned a phase-appropriate approach. So how early should formulation development start? For example, in Phase I, should you use something “off-the-shelf,” like a platform formulation? I imagine this is easier for antibodies — but what about more complex molecules?
Giuseppe Licari [00:19:25]:
A platform approach can work for standard molecules — for example, for typical monoclonal antibodies. But when you have complex multispecific molecules, as we increasingly see in the clinic, it becomes more challenging. The platform formulation may or may not work.
In silico methods can be very helpful for de-risking your strategy and adjusting your planning. You can start with a broad platform and then use computational tools to filter out conditions that might be less favorable for your specific molecule.
Even for Phase I, you can use in silico approaches to fine-tune your strategy. The advantage is that you can apply computational methods at any time — you don’t need material, and they are relatively fast.
For Phase II, Phase III, or later stages, you can intensify experimental screening and rely more on computational support as needed. But at any phase, you can always go to in silico methods to gather useful information.
David Brühlmann [00:20:46]:
For those not familiar with formulation development, can you explain the difference between formulation development and formulatability? And when should each be performed? Or are they done together?
Giuseppe Licari [00:21:03]:
Formulatability is a relatively recent term, introduced in parallel with developability. It aims to evaluate whether a molecule can be easily formulated during development. So formulatability is assessed together with developability when screening candidates before development starts.
It gives you a forward-looking perspective: Is this molecule feasible to formulate under standard conditions? Or will it be challenging? That’s what formulatability tries to address.
Formulation development, on the other hand, is a work package executed during development — typically within the drug product development group. It is the process of identifying the best suitable formulation for a specific API. Any prior knowledge, including formulatability assessments, is extremely helpful for planning these experiments.
David Brühlmann [00:22:14]:
That wraps up Part One of our conversation with Giuseppe Licari. We’ve explored how computational methods are revolutionizing developability assessments and identifying formulation risks early.
In Part Two, we’ll dive deeper into excipient selection and real-world implementation strategies. If you found value in these insights, please leave a review on Apple Podcasts or your favorite platform. It helps other scientists like you discover these conversations. See you next time in Part Two.
All right, smart scientists — that’s all for today on the Smart Biotech Scientist podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite platform.
By doing so, we can empower more scientists like you. For additional bioprocessing tips, visit www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in the next episode. Until then, let’s continue to smarten up biotech.
About Giuseppe Licari
Since 2022, Giuseppe Licari has been a Principal Scientist in Computational Structural Biology at Merck KGaA, where he leads efforts to build computational platforms for characterizing and screening biotherapeutic candidates. His work also explores how excipients influence protein stability, providing key insights that guide formulation development.
Before joining Merck, he contributed significantly to Boehringer Ingelheim by advancing in silico methods for developability assessment and predictive modeling of protein properties.
Giuseppe earned his PhD in Physical Chemistry from the University of Geneva in 2018 and later completed a postdoctoral fellowship with the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana–Champaign, where he focused on simulating protein behavior at biological membranes.
What happens when therapeutic innovation meets real patient urgency? In this conversation, the barriers between scientist and patient all but vanish, bringing clarity—and a new sense of mission—to some of the biggest problems facing advanced therapy manufacturing and delivery.
Meet Jesús Zurdo, a biotech leader whose three decades of experience in innovation took on a whole new perspective when he became a leukemia patient himself. Seamlessly straddling the worlds of industry and patient care, Jesús Zurdo brings a refreshingly honest, systems-level view to cellular therapies, manufacturing bottlenecks, and the realities of getting therapies from the lab to bedside.
For me, the key thing is we are dealing with complex realities and this requires complex solutions. And probably we need to be humble, all of us, I mean all stakeholders. I am a scientist or a professional as a patient about what we can contribute and what we can't. I think we need more people challenging the system, practices and views.
We need to be critical, but we need to be humble about what solutions we bring. I can try to identify holes, but it would be a bit naive of me to say, "Oh, I have this solution"; what I can bring is perspective. And I think by getting the different stakeholders, manufacturers, developers, clinicians, and patients, into the same room and simply looking at what is failing, what is working, and what the ideal solution would be, we will be able to develop much better therapeutics.
David Brühlmann [00:00:50]:
In part one, Jesús Zurdo shared how becoming a leukemia patient rewrote his professional mission after three decades in biotech innovation. Now, as both treatment recipient and industry insider, he is tackling the manufacturing and delivery challenges head on. Can point-of-care production work? Will allogeneic therapy solve scalability? What business models could actually democratize access? His patient urgency pushes these conversations beyond theory toward practical solutions that could transform advanced therapy delivery.
I would like to talk about a slightly different aspect. You asked, within this frame, how can we do it without bringing the patient to the clinic to measure all these metrics? That leads me to this point, because we often hear about point-of-care manufacturing, especially with stem cells, CAR-T, and so on. What is your perspective? How should this evolve, and how can it solve the affordability and, more importantly, the accessibility crisis?
Jesús Zurdo [00:03:15]:
I have some points of view, and I can share some experiences I've come across. I'll tell you one big realization. For me, it was seeing how stem cell registries have been operating very effectively for decades now, and how they provide cells to patients. Look at the quality assessment and the batch release: my dose of cells was a batch, and they had to do some testing before they released it. But you cannot do the classical sterility testing; you cannot do everything you would traditionally do in pharma. And talking to people, friends who are working in CAR-Ts, they have all this quality release.
And that adds a lot of time. It means you need to freeze the cells, do all this testing, and then release the batch, and you use a lot of sample in testing, and only then do you take it to the patient. That adds a tremendous amount of time, work, and cost. Now look at how some people are doing this at the point of care. There are several examples promoting this; the Hospital Clínic in Barcelona, for instance, has treated, I think, around 600 patients by now, which is pretty impressive out of a single hospital.
Galapagos was promoting something similar; unfortunately, they stopped. This is a different paradigm, because you do the apheresis at the hospital and then use the fresh cells: you modify them, purify them, and immediately put them back into the patient. By using aseptic technologies, it becomes a question of risk assessment, which is what you always do in medicine. If you've done your validation up front, what they see is that the risk of infection from bacteria or a virus finding its way into the sample is negligible.
Of course you need to validate this, but once you've done that, you cut time tremendously and you can do everything in the hospital. The only thing you need is the viral vector. That means you can centralize the viral vector as the key ingredient, make sure the quality is right, and then decentralize the final manufacturing step. It brings down costs, it brings down time, and importantly, the patient doesn't have to wait so long. There are horrible examples of hospitals where patients die because they don't get the treatment fast enough. So to me, it's not necessarily the solution for everything.
But clearly, for autologous cell therapy, it could be a game changer. Not for everything, not for everybody, but in some cases there is experience showing that this has promise, and it leverages what has been in use for many years. Don't reinvent the wheel: look at what people have been using. It works. Now let's look at the difficult stuff that is getting in the way.
David Brühlmann [00:05:52]:
And to what extent could we develop more allogeneic therapies? Obviously it's not possible for everything, but maybe in certain cases, instead of an autologous therapy, we could move over to allogeneic and then produce that centrally and ship it, for instance.
Jesús Zurdo [00:06:08]:
Well, we can, and there are examples. Unfortunately, it doesn't seem that allogeneic cell therapies are getting traction, and there are different problems. On one hand, not every allogeneic cell therapy is built the same way, and in some cases there is probably too much editing; I remember that HLA knockouts are not a great idea, because then your body thinks you're bringing in another cancer. But there is some promise there. It can help in some regards, but I don't think it's going to be a magic bullet.
What I like about allogeneic is that it is off-the-shelf. You have it ready, which means you can give it to the patient immediately and shorten the intervention time. And I do hope there will be something, whether it's gamma delta T cells, NK cells, an edited cell, or a combination of all of these, that makes it work. There is promise there.
However, I think probably in vivo cell therapy has more chances of succeeding. And my take on this is that I think it could revolutionize how cell therapy takes place.
Maybe not for the reason people cite. I've had this conversation about how it's going to bring cost down, and I disagree. Cost, yes; price, no, because, as we talked about before, the pricing of medicines is different. Look at the price of some viral therapies right now: the doses clearly do not represent the cost of manufacturing.
However, one thing that, to me as a patient, is transformational: you might not need conditioning or lymphodepletion, in cancer but also in autoimmune diseases. And this is huge. It's huge because it reduces risk to patients and reduces mortality linked to infections, which is really important. It also reduces the impact of some of this chemo on your brain and your body; generally you are stronger and able to deal with things in a better way.
I also like the flexibility it brings. You can be very creative: multiple CARs, multiple dosings. Now, the issue I think we are not considering enough is delivery. Delivery remains a problem, and no matter how much engineering we do on the vectors, whether LNPs or viral vectors, there is always going to be some off-target delivery. This is something we've seen with ADCs in the past, where there were issues with heart toxicity, liver toxicity, et cetera.
Now that we're dealing with genetic medicines, this is a different story. If your cargo is integrated, or has a genetic impact in the wrong cell type, that may not be desirable. Maybe I'm worrying unnecessarily, but the problem is that translating observations from a lab or animal model into a patient is not trivial.
However, there are options, like ex vivo at the bedside, that people are exploring. And I'm a firm believer that in vivo approaches, with the right delivery and the right vehicles, could completely transform how autoimmune and some oncology conditions are treated. I'm really hopeful, and I'm impressed by the results people are observing.
David Brühlmann [00:09:24]:
Yeah, it's amazing. And it's also amazing to see how fast it moves, how fast it evolves. When you look across the industry, Jesús, what are the trends you see with respect to new manufacturing technologies, with respect to new delivery methods, with respect to new ways to bring the drug to the patients? What is hot right now or where do you think the industry is moving to?
Jesús Zurdo [00:09:49]:
I think this is where I'm a bit disconnected, and I don't know if I'm just old-fashioned, but we were talking about overengineering: what is the purpose of the innovation you're introducing? And I need to be careful here. On automation, there are beautiful solutions out there, with more companies offering them, and it's very impressive what these platforms can do. I think automation has a place even in point-of-care manufacturing, because it reduces risks and reduces the human element. But as I was saying before, people put too much emphasis on it. Automation is not a solution to the cost of goods, but it is an important element for manufacturing consistency. The problem is that it has to be agnostic.
You should be able to use whatever automation for a given product, so you're not forced through a barrier, or into buying new equipment, in order to manage the different cell therapies you administer out of a single hospital. This is a problem the industry has to reckon with. We need standards, like in computing, where everybody can use a USB port. We need a shared understanding of the starting product, the apheresis material, if you will, so that people can then use whatever automation solution while the standards are maintained.
The other area, as we were saying, is in vivo. There is a lot of work being done, and it's fascinating what people are doing these days with these nanoparticles and how they're engineered. Again, I think they have a place; they are super cleverly designed, and with some of them it's fascinating how much science is put in there. My question, again, is: are we overengineering these things? What problems are we solving? I was talking before about delivery; these solutions sometimes retain significant delivery challenges, but also challenges in other aspects, such as durability of response. We are maybe trying to get the perfect solution before finding out what the real problem is. Maybe a hybrid between in vivo and ex vivo would have a bigger impact and produce much better outcomes for patients.
That goes back to patient urgency, rather than going for a super sophisticated technology that would require lots of testing and validation. And I don't want to demonize nanoparticles; the same goes for viral vectors, whatever platform you have. If I have a super innovative delivery platform, I need to show that it's safe, that it doesn't go to the wrong place, et cetera. And then the question is: which patients will agree to be treated? I'm talking about a patient who is not suffering, or a healthy volunteer. It becomes challenging; I would not volunteer for that, but it has to be tested. What are the limitations of these platforms?
At the same time, we have solutions that are already working. So why don't we combine some of these super cleverly designed vectors with simpler platforms that can ensure fast adoption in the clinic, and then see what we can do with in vivo cell therapy, ex vivo, bedside, or whatever? To me, the challenge is being pragmatic, recognizing the urgency, and going step by step. Let's make sure we can validate the physiological effect, and then refine the delivery in due time.
David Brühlmann [00:13:14]:
What do you think will have the biggest impact right away? Because my feeling is there's a lot to be done. It will take time. Is there something that stands out that you think will have an immediate effect and will move the needle significantly?
Jesús Zurdo [00:13:31]:
Two things. The first is healthcare provider capacity. You cannot make a significant improvement in the adoption of cell therapies if hospitals cannot administer them to patients. And this is something that is hidden; people assume it's just pricing, but it's not. You could price it whichever way you want, but if the hospital cannot give it to patients because they don't have the right infrastructure, training, or capabilities, forget it. It will never happen. We see it now in Europe, and it is clearly problematic in the UK: when our healthcare systems are limited in the money they receive and you tell them they need to invest now in building capabilities, for example for autologous therapies today and maybe for other types of cell therapies in the future, then who's going to pay for that? That is a big, big bottleneck.
The second, and here I'm very hopeful, is some of the bold clinical trials, unfortunately not many of them in Europe or the US. I see brilliant things being done in China; the innovation coming out of China is unbelievable. They are using some of these treatments as first line, and what they observe in some conditions is mind-blowing. You need to take things with a pinch of salt, but it's how they combine things. I'm most familiar with the CAR-T arena: how they combine multiple CARs, how they administer the treatment, how they combine it with other drugs. There are cases where they're using this as first line for multiple myeloma, and in some cases for ALL, without the need for chemo, and they see impressive remission and good survival without symptoms. This is really important to me. It's early days, but to me it shows the promise. This could be really revolutionary.
But I know we need to be prudent; we cannot go completely crazy. Still, I think there's a case, for some conditions, to move these treatments earlier. This is another thing I found out as a patient. Yes, it's good to have another weapon, if you will, in reserve if the first line of treatment fails. The problem is that the earlier treatments really hurt patients. By the time they are eligible, because they have maybe had a couple of relapses already, they have issues with their kidneys, they have liver problems, and that means in some cases they are too weak to receive these therapies.
And even if you say, we're going to try anyway, it's less likely they will survive. So if we used these treatments early on, maybe we would give them a better chance of surviving the disease. I think this is starting to change, and I know many clinicians are already promoting it, but they are a bit alone. A lot needs to happen from a regulatory perspective, a health economics perspective, and a payer's perspective to make it acceptable to provide these treatments early on. And I think this could be transformational.
I also believe we need to do better, or do more, to improve the cell therapies that are already in the clinic. They are brilliant, but they are not as fantastic as many people think. We now have new knowledge, and this is why I think it's important that patients are treated: because we learn. We learn when the therapies work and when they don't, and we learn about their limitations. That will help innovation, and it will help the second and third generations of therapies.
David Brühlmann [00:16:59]:
Yeah, I believe that if we figure out the economic side, and obviously the safety side, of using these powerful new modalities earlier, as first line rather than last, it will be a total game changer. I do hope we get there sooner rather than later. This has been great, Jesús. Before we wrap up, what burning question haven't I asked that you're eager to share with our biotech community?
Jesús Zurdo [00:17:31]:
I think you really touched the important stuff; the questions were very pertinent and to the point. For me, the key thing is that we are dealing with complex realities, and this requires complex solutions. And probably we need to be humble, all of us, I mean all stakeholders, I as a scientist or a professional, and as a patient, about what we can contribute and what we can't. I think we need more people challenging the system, its practices, and its views. We need to be critical, but we need to be humble about what solutions we bring.
I can try to identify holes, but it would be a bit naive of me to say, "Oh, I have this solution"; what I can bring is perspective. And I think by getting the different stakeholders, manufacturers, developers, clinicians, and patients, into the same room and simply looking at what is failing, what is working, and what the ideal solution would be, we will be able to develop much better therapeutics.
And in particular, I want to emphasize the patient side. For me, it has been enlightening to see how these therapies are used, why people are not receiving them, and, even when they are eligible and do receive them, what happens and why the efficacy can be lower, because of the reality of the experience in the clinic and at home. I think this perspective will increase the value of our efforts a hundredfold. No doubt about it.
David Brühlmann [00:18:45]:
Jesús, what is the most important takeaway from our conversation today?
Jesús Zurdo [00:18:51]:
I would say: remember, we all are, or will be, patients. This is important. It's not that I'm a scientist, a professional, or a clinician working for somebody else's benefit. No: at some point in my life, I will be a patient. And this, I think, brings an element of humanity and of urgency. It's not okay to hope or wait for years, going back to urgency, because patients matter and the need is now. Cutting corners is not the solution, but finding the big issue we are facing is. If we see that when we work with patients we are working with ourselves, whether that will be us in a few years, is us now, or was us in the past, I think that would change the conversation.
David Brühlmann [00:19:38]:
What a great way to conclude this fantastic conversation. Thank you, Jesús, for reminding us that patients matter, and that what we're doing as scientists is, at the end of the day, for the patients who desperately need these life-saving therapies. Thank you also for giving us a perspective that goes beyond the science, and for sharing your own personal experience. Very powerful.
Jesús Zurdo [00:20:06]:
Thank you, David.
David Brühlmann [00:20:07]:
Where can people get a hold of you, Jesús?
Jesús Zurdo [00:20:09]:
Well, I've shared my email address with you, but the easiest way is to find me on LinkedIn and message me. I'm easy to reach there, and I would encourage anybody who has ideas, interests, or initiatives, or who is willing to collaborate, to please reach out. We are all in this together, and I'm really happy to work with other people on finding better solutions.
David Brühlmann [00:20:31]:
Smart biotech scientists, please reach out to Jesús; you'll find the details in the show notes. Thank you once again, Jesús, for being on the show today.
Jesús Zurdo [00:20:40]:
Thank you David, it's been my pleasure and thanks a lot for hosting me.
David Brühlmann [00:20:45]:
Jesús Zurdo just gave us a masterclass in reimagining how we manufacture and deliver advanced therapies. His unique vantage point as both innovator and patient reminds us why solving these challenges matters beyond the lab. If this conversation sparked ideas for your own work, we'd love a review on Apple Podcasts or wherever you listen; your feedback helps us reach more scientists. And if you need support in the development or manufacturing of advanced therapies or biologics, please check out the links in the show notes. We are here to help you. Thank you so much for tuning in today, and I'll see you next time.
All right smart scientists, that's all for today on the Smart Biotech Scientist Podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite podcast platform. By doing so, we can empower more scientists like you. For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let's continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Jesús Zurdo
With more than two decades of experience in the biopharmaceutical industry, Jesús Zurdo plays an active role in advancing therapeutic development and improving patient access. His background spans cell and gene therapy, cancer immunotherapy, and executive coaching, complemented by the unique perspective he brings as a leukemia survivor.
He contributes to the field as a Non-Executive Director at Telomere Therapeutics and as an Expert Jury Member for the EIC Accelerator Program, collaborating with organizations to advance next-generation therapies. Committed to genuinely patient-centered healthcare, he combines scientific expertise with lived experience to help drive innovations that deliver real value to patients.
Connect with Jesús Zurdo on LinkedIn.