If you could run an experiment in your computer instead of the lab—and it actually gave you answers worth acting on—would you try it?
Biologics formulation is often described as a high-stakes puzzle. Every recombinant drug is a chemical balancing act: choosing the right excipients, predicting stability, and sidestepping months of trial-and-error. But what if you could speed things up with a virtual test drive?
In this episode, David Brühlmann sits down with Giuseppe Licari from Merck Healthcare, whose expertise is quietly reshaping how proteins reach the clinic.
Giuseppe Licari brings a hands-on perspective to computational formulation development. With a track record in applying molecular dynamics simulations to real-world drug development, he’s not just theorizing about the future—he’s showing what’s possible now.
We’ve seen that in silico methods have been around for many years, and they are now a standard tool to support our work across several steps of drug discovery and development. I think they’re here to stay for the years to come.
So my message is: don’t be afraid to use them, explore them, and be curious. If these methods are helpful, why not embrace them?
David Brühlmann [00:00:36]:
Welcome back to Part Two with Giuseppe Licari from Merck Healthcare, where we’re tackling the toughest formulation challenges in biologics development. If you missed it, check out Part One of our conversation first.
How do you predict aggregation before manufacturing? What can simulations tell us about excipient-protein interactions? And when is it time to stop computing and start experimenting in the lab?
Giuseppe shares practical workflows, real success stories, and honest limitations of computational approaches. Plus, he’ll give one actionable step you can start using tomorrow.
David Brühlmann [00:01:14]:
Let’s jump back in. So we’ve done our homework: we’ve shown that our molecule is developable, we’ve assessed formulatability, and now we’re developing the proper formulation for the recombinant drug.
What are the specific in silico approaches you’re using? It feels to me like a very difficult puzzle — so many chemical components, different concentrations, and combinations. How do you find the needle in the haystack?
Giuseppe Licari [00:03:03]:
First of all, it’s important to remember that for formulation development, you need to look at how the protein behaves in its environment over time.
You can’t base your assessment on a static picture of the antibody — you need to watch the “movie.” In our field, this is generally done using molecular dynamics, a technique in computational chemistry that allows you to see how molecules move. You can literally see the protein dancing, if you want to imagine it like that, and observe how its conformation changes over time.
When you add excipients or buffers, you can see how those elements interact with the protein. From these interactions, you can extract conclusions about how the excipients might affect protein stability or alter its properties.
This is a critical point: looking at the protein “in motion.” In many ways, it’s like performing an experiment — but computationally. You simulate what happens in the lab: taking your drug substance, putting it in a specific environment, and observing its behavior over time.
Of course, it’s not exactly the same as the lab, but it’s a semi-realistic representation of reality. And it can still provide valuable, actionable insights that help guide your experimental work.
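For readers new to the technique, the “movie” idea can be sketched in a few lines. The toy example below integrates a single particle in a one-dimensional harmonic energy well using the velocity-Verlet scheme that real MD engines also rely on. Everything about it (the potential, the parameters) is invented purely for illustration; production formulation work uses full-atom engines such as GROMACS or OpenMM with the whole protein, explicit water, and excipient molecules.

```python
def simulate_harmonic(steps=1000, dt=0.01, k=1.0, m=1.0, x0=1.0, v0=0.0):
    """Toy molecular dynamics: velocity-Verlet integration of one
    particle in a harmonic well U(x) = 0.5*k*x^2. Each step updates
    the position from the force, producing a trajectory, the 'movie'."""
    x, v = x0, v0
    traj = [x]
    for _ in range(steps):
        f = -k * x                         # force = -dU/dx
        v_half = v + 0.5 * dt * f / m      # first half-kick
        x = x + dt * v_half                # drift
        f_new = -k * x
        v = v_half + 0.5 * dt * f_new / m  # second half-kick
        traj.append(x)
    return traj

traj = simulate_harmonic()
# traj now holds 1001 positions oscillating between roughly -1 and +1.
```

A real simulation does exactly this, but for hundreds of thousands of atoms interacting through a force field, which is why the accessible timescales are so short.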
David Brühlmann [00:04:49]:
And how do you go about real stability studies? This is what takes time — you can’t compress it. Well, obviously, you can use stress conditions, but it still takes time to confirm outcomes. How do you combine in silico methods with real stability studies?
Giuseppe Licari [00:05:08]:
Of course. Real-time stability studies, like those used to assign shelf life, can’t be directly computed or simulated using in silico methods. One limitation is that simulations can only cover short periods of time — you can’t simulate six months of stability. So, that’s out of the scope of computational methods.
However, long-term protein stability is closely tied to the intrinsic properties of the molecule. That’s what we aim to study: which molecular properties correlate with long-term stability. Once you understand these connections, you can tweak formulations to adjust the protein’s behavior and get an estimate of long-term stability, even if the simulation only covers a short time. This is the strategy we typically follow.
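The strategy Giuseppe describes, correlating fast computed descriptors with slow experimental outcomes, can be illustrated with a minimal least-squares fit. All numbers below are invented for demonstration only; a real study would use measured stability data and validated molecular descriptors.

```python
# Hypothetical illustration: fit measured long-term aggregation against
# a descriptor computed from short simulations, then use the fit to
# rank new formulation conditions. Data points are invented.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# descriptor from short simulations  vs  % aggregates after 6 months
descriptor = [0.10, 0.25, 0.40, 0.55, 0.70]
aggregates = [0.8, 1.5, 2.1, 3.0, 3.6]

a, b = linear_fit(descriptor, aggregates)
print(f"predicted aggregation for descriptor 0.5: {a * 0.5 + b:.2f}%")
```

The point is not the fit itself but the workflow: a descriptor you can compute in days stands in for an experiment that takes months, once the correlation has been established.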
David Brühlmann [00:06:11]:
Looking ahead, our industry is evolving rapidly with all kinds of technologies — AI, machine learning, and so on. Where do you see formulation development going in the next few years? How will we develop formulations for recombinant proteins?
Giuseppe Licari [00:06:36]:
In the AI space — with machine learning — there are several efforts to predict protein formulation behavior. You need a significant amount of data to predict outcomes, like aggregate levels or low molecular weight species. One current limitation is that we don’t yet have enough data to build models that are robust across many proteins and systems.
That said, AI is already helping in discovery. Generative AI can design proteins with fewer chemical liabilities and improved developability. Improvements early in discovery will have significant downstream effects, including on formulation development and other steps. Optimizing proteins from the start can make the entire process faster and more efficient.
David Brühlmann [00:08:06]:
Yes, and with more advanced generative AI models and more powerful computational techniques, you might even be able to select the optimal sequence and predict its ideal formulation — if I’m thinking futuristically.
Giuseppe Licari [00:08:27]:
Absolutely. I’m looking forward to when in silico methods can predict the optimal formulation and experiments are only needed to confirm the predictions. Right now, we still need some screening and lab tests, but in a few years, we might be able to reduce lab work significantly. This will reduce timelines, lower costs, and allow us to develop more molecules for patients more efficiently.
David Brühlmann [00:09:10]:
And to make our conversation very actionable, I’d now like to look at how someone working in a smaller company could apply this. The challenge, especially as we look to the future, is that these new technologies are exciting and let you do a lot. In a larger company, and that’s also my experience, you’re pretty fortunate to have a lot of resources. But when you’re working in a startup or a small-to-mid-sized company, resources are more limited. So what would be your advice for leveraging at least some of the potential of these in silico approaches, even with a smaller budget or without all these specialists in-house?
Giuseppe Licari [00:09:55]:
Yeah, sure. That could of course be a problem for small companies or startups. Maybe the solution would be to do a small feasibility study with an external provider. Nowadays, there are more and more companies providing software-as-a-service, for example, so you can test these approaches through a third party and see if they provide additional information or valuable outcomes for your project.
That’s something achievable even for smaller companies, because in silico methods are generally not very expensive computationally. You don’t need to invest too much to test a few things. So my advice would be to test with external companies and see if it works. Of course, the best solution is to hire a computational scientist internally to really build internal knowledge. It’s always better to have someone in-house, but we need to make compromises all the time.
David Brühlmann [00:11:02]:
Definitely. Absolutely. Before we wrap up, Giuseppe, what burning question haven’t I asked that you are eager to share with our biotech scientists?
Giuseppe Licari [00:11:12]:
Well, maybe “what comes next” in this field. That could be a burning question. My answer would be that with new machines, GPUs, and computational power increasing continuously, in the future we’ll be able to simulate bigger systems for longer periods.
I think simulations will eventually reproduce nearly any step in the development space, supporting more and more phases in drug development. With increasing computational power, we’ll be able to do more and more. I’m really looking forward to seeing how this field evolves in the years to come.
David Brühlmann [00:12:02]:
Giuseppe, what is the most important takeaway from our conversation?
Giuseppe Licari [00:12:08]:
I’d say that in silico methods have been around for many years and are now standard tools to support work across drug discovery and development. They’re here to stay. My message is: don’t be afraid to explore them, be curious, and use them when they’re helpful.
David Brühlmann [00:12:44]:
Yes, scientists, why not use these technologies? Giuseppe, where can people get a hold of you?
Giuseppe Licari [00:12:52]:
LinkedIn is the easiest way. People can search for my name, and I’m happy to exchange with anyone curious about these techniques.
David Brühlmann [00:13:03]:
Excellent. Smart Biotech scientists, please reach out to Giuseppe to exchange on in silico approaches. Once again, Giuseppe, it’s been fantastic. Thank you so much for being on the show today.
Giuseppe Licari [00:13:15]:
Thanks to you, David, for what you do and for the invitation. It was a pleasure for me as well. Thank you.
David Brühlmann [00:13:23]:
What a masterclass in computational formulation development. Giuseppe has given us a roadmap from theory to practice, showing how in silico approaches are becoming indispensable tools in the biotech scientist’s arsenal.
If these insights resonated with you, take 30 seconds to leave a review on Apple Podcasts or wherever you’re listening. Your feedback helps us bring more expert conversations to the biotech community.
Thank you for leaving a review, and thank you for tuning in. Until next time, keep making bioprocessing smarter, one innovation at a time.
Smart scientists, that’s all for today on the Smart Biotech Scientist Podcast. Thank you for joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite platform. By doing so, you help empower more scientists like you.
For additional bioprocessing tips, visit www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Giuseppe Licari
Giuseppe Licari has served as a Principal Scientist in the Computational Structural Biology group at Merck KGaA since 2022, where he helps design and implement digital tools to analyze biotherapeutic molecules. His work includes studying how various excipients contribute to protein stabilization, with the goal of informing and improving formulation development.
Before his time at Merck, Giuseppe worked at Boehringer Ingelheim, where he helped establish computational methodologies for assessing developability and forecasting protein behavior through in silico modeling.
He completed his PhD in Physical Chemistry at the University of Geneva in 2018, followed by a postdoctoral role in the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana–Champaign, focusing on molecular simulations of proteins interacting with biological membranes.
Connect with Giuseppe Licari on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
Imagine trimming years off biologics development and catching problematic formulations long before the first pipette is even picked up. That’s the promise of computational approaches in protein drug development, shaking up the dusty traditions of trial and error and ushering in a smarter, more collaborative era.
For this episode, David Brühlmann welcomes Giuseppe Licari, Principal Scientist in Computational Structural Biology at Merck KGaA. A chemist by training, Giuseppe Licari pivoted from hands-on wet lab science to the predictive power of quantum mechanics and in silico modeling.
Today, he stands at the intersection of computation and CMC development, pioneering digital tools to streamline candidate screening, de-risk formulation, and ultimately bring therapies to patients faster.
The change in perspective is that we are going from having several sequences in developability to having only a single sequence. That’s the big change: in discovery we have several sequences and need to apply methods to select only one. Then, in development, we have only that one selected sequence, and we cannot change it anymore.
Historically, there has been a lot of work in the literature on mutating the protein to improve the characteristics of the API. But once the sequence is fixed, there is not so much in the literature on how we can support formulation development under that constraint.
David Brühlmann [00:00:46]:
What if you could predict formulation failures before ever touching a pipette? Today we’re diving into the computational revolution transforming biologics development with Giuseppe Licari, who is a Principal Scientist in Computational Structural Biology at Merck KGaA.
From predicting aggregation hotspots to designing stable formulations in silico, Giuseppe reveals how computational approaches are slashing development timelines and catching problems that traditional methods miss.
Let’s explore how smart science is making formulation development faster, smarter, and more predictable.
Welcome Giuseppe — it’s great to have you on today.
Giuseppe Licari [00:02:42]:
Hi David, it’s my pleasure to be here with you, and thank you for the invitation to your podcast.
David Brühlmann [00:02:48]:
Giuseppe, share something you believe about bioprocess development that most people disagree with.
Giuseppe Licari [00:02:56]:
Well, in my field of drug product development, I believe we should set a “good enough” stability standard for our API to ensure we deliver the product safely and in a timely manner.
We don’t always need to maximize shelf-life stability — at least not for preclinical or Phase I studies. People might disagree and try to maximize shelf-life even early in the project, but I think that in Phase I we don’t need that.
Instead, we should aim to deliver the product as fast as possible, in a safe manner of course, to the patient and see if the project works.
David Brühlmann [00:03:46]:
Yeah, you’re making a great point — and I think it’s important to have a phase-appropriate approach, isn’t it?
Giuseppe Licari [00:03:52]:
Yes, because again, the problem is that we never know if a therapeutic concept will actually work, and we spend so much time and effort at the beginning of a project — and then the project may be stopped because there is no efficacy. So I think we need to target the right amount of effort according to the phase we’re in. And this is true for any function, for any step of the process. Of course, people have different views on this, but I think that as long as we deliver something safe for the patient, we are good.
David Brühlmann [00:04:26]:
I'm looking forward to diving further into today's topic of developability and formulatability. But before we do that, Giuseppe, let’s talk about you, because your path from physical chemistry to computational structural biology is fascinating. So take us back to the beginning: what sparked your interest, and what were some interesting pit stops along the way?
Giuseppe Licari [00:04:54]:
Yes, I think it started during my undergraduate studies, when I first encountered quantum mechanics and theoretical chemistry. I'm a chemist by education, and in those courses I discovered the fascinating capability of these techniques to predict molecular properties without performing any experiment.
From the computer alone, we could calculate something “out of the blue.” That was incredibly fascinating to me and sparked my interest in in silico and computational methods.
At the same time, I had a genuine interest in developing new drugs to help patients. So I tried to combine these two passions, and I became more and more interested in computer-aided drug discovery.
Of course, I also worked in the lab during my undergraduate studies and during my PhD, but over time I leaned more and more toward computational work.
A very important pit stop in my career was my three-year postdoc in the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana–Champaign. I learned a lot there, gained extensive experience, and it opened up many perspectives for me. That was probably one of the most important parts of my career.
David Brühlmann [00:06:24]:
Can you paint us a picture? Because you're now at the intersection of computational biology and drug development, including formulation development. What does a typical day look like? Are you sitting in front of a computer all day doing modeling? Do you go into the lab? Is it a combination? What does that look like?
Giuseppe Licari [00:06:46]:
Yes — and importantly, what a computational chemist in a pharmaceutical company should not do is sit in front of the computer all day.
I truly believe it’s essential to constantly exchange with bench scientists, because we need to understand what is most valuable for them and where computational work can really make a difference.
So my daily work involves a lot of interaction with people in the lab, understanding their needs, and figuring out how computational approaches can support them.
Once we identify a need — for example, a specific screening or a particular question in a project — then I work on my side to carry out the in silico assessment. I provide my conclusions and recommendations, and then we discuss again and plan the corresponding lab activities together.
So it’s really a continuous exchange between the computational scientist and the lab scientist.
David Brühlmann [00:07:52]:
Let’s unpack this: developability, formulation development, in silico approaches. Starting from the very beginning — where do in silico approaches shine the brightest in drug development?
And when I say drug development, I include process development and the broader CMC landscape. You have seen many parts of biologics development, so where do you see the greatest benefit of these computational approaches?
Giuseppe Licari [00:08:23]:
First of all, in silico approaches are really vast, and there is a lot that can be done and applied in pharma. I’ll focus on the approaches that are most related to what I do. You mentioned this concept of developability. Maybe not everyone is familiar with it — it’s a relatively new way of thinking. We want to develop drugs that are safe, efficacious, and manufacturable, and the concept of developability was introduced a few years ago to help select a candidate with the highest overall developability profile.
From the experimental perspective, we can run many assays to understand how developable a drug is. However, we can also use in silico methods to screen properties of the API that can predict this developability profile.
So one major application is screening candidates in the final stages of discovery, when we might have, for example, 4 to 10 molecules. In silico methods can be very helpful in prioritizing the candidates — identifying the ones that might be more developable and more manufacturable.
Once the final candidate is selected and development officially starts, we can no longer change the sequence. But we can still apply several in silico approaches to help develop the best formulation. In this case, we don’t modify the sequence, but we can adjust what is around the API — the pH, the ionic strength, salts, surfactants, excipients.
So in silico methods can help filter out conditions that might not be favorable for your API.
David Brühlmann [00:10:25]:
And that’s such an important point — this concept of developability. For those who know me well, they know I’m really passionate about this topic because I strongly believe in starting CMC development early, already in discovery.
Doing this homework early and looking at the molecule’s properties ensures that it’s developable and, ultimately, manufacturable at larger scale.
Can you tell us what is typically evaluated in a developability assessment? What are the minimum protein characteristics you should analyze to make sure your molecule is developable?
Giuseppe Licari [00:11:06]:
Yes. There are several properties we can predict. For example, we can look at the hydrophobicity of the molecule and identify regions that are aggregation-prone; if necessary, we can mutate specific residues to remove these aggregation-prone motifs.
We can also predict the colloidal stability of the molecule — typically by looking at the charge distribution at different pH values, which gives us an idea of how stable the molecule might be in solution.
We can evaluate the chemical stability of the molecule, especially the residues in the CDRs — the complementarity-determining regions that interact with the antigen. These are crucial for antibody efficacy.
We can also assess immunogenicity, using several available computational techniques.
So yes, there is a wide set of properties we can predict, and these predictions can be very helpful in prioritizing the candidate. And you’re absolutely right that this must be done as early as possible — ideally with input from people in development.
This exchange between research and development is really critical, because development scientists can already provide insights related to the formulatability of the molecule. So it’s not only immunogenicity; it’s also whether the candidate can be formulated under the conditions required later in development.
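To make the charge-distribution point concrete, here is a minimal sketch of a sequence-based net-charge-versus-pH estimate using the Henderson-Hasselbalch relation with textbook side-chain pKa values. This is a simplification: the pKa values and the example peptide are illustrative assumptions, and production developability tools use structure-aware pKa predictions rather than fixed tables.

```python
# Approximate net charge of a protein vs. pH from its sequence alone,
# using textbook side-chain pKa values (an illustrative simplification).
PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0}            # protonated -> +1
PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1}   # deprotonated -> -1
PKA_NTERM, PKA_CTERM = 9.0, 2.3

def net_charge(seq, ph):
    def pos(pka):   # fraction protonated (contributes +)
        return 1.0 / (1.0 + 10 ** (ph - pka))
    def neg(pka):   # fraction deprotonated (contributes -)
        return -1.0 / (1.0 + 10 ** (pka - ph))
    q = pos(PKA_NTERM) + neg(PKA_CTERM)
    for aa in seq:
        if aa in PKA_POS:
            q += pos(PKA_POS[aa])
        elif aa in PKA_NEG:
            q += neg(PKA_NEG[aa])
    return q

# Sweep pH to see where the net charge crosses zero (the approximate pI).
seq = "MKDEHRK"  # hypothetical peptide
for ph in (4.0, 6.0, 8.0, 10.0):
    print(f"pH {ph}: charge {net_charge(seq, ph):+.2f}")
```

Low net charge near the formulation pH hints at weaker electrostatic repulsion between molecules, one reason the charge profile feeds into colloidal-stability assessments.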
David Brühlmann [00:12:41]:
Oh yes, absolutely — you’re speaking my language here. It’s absolutely crucial. And I’ve unfortunately seen projects where this wasn’t done early enough, and the consequences were severe.
So for anyone listening: start early. If you’re in R&D, communicate with process development and manufacturing colleagues early on to get their input.
Now, I’m curious — let’s take aggregation as an example. When you do these in silico predictions, how accurate are they? And how much wet-lab work is still needed to confirm them?
Giuseppe Licari [00:13:23]:
Sure. The predictions can be quite accurate, but of course no prediction is ever 100% accurate. It depends on the methods you use.
Some approaches are sequence-based, meaning you don’t need the structure of the antibody to predict aggregation. But you can also use the 3D structure, because residues that are far apart in sequence may be close in space and form hydrophobic patches — something you cannot detect from sequence alone. That provides additional insight.
A good way to improve accuracy is to combine information from different methods — integrating sequence-based, structure-based, and other computational models into a holistic assessment.
From my experience, hydrophobicity can generally be predicted quite accurately. However, it’s also important to note that experimentally, hydrophobicity is difficult to measure directly, because aggregation isn’t driven by hydrophobicity alone — electrostatics and other factors play a role.
So when comparing predictions against experimental results, we need to keep in mind that the experimental measurement is itself a composite of multiple contributions.
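As a concrete example of the sequence-based methods mentioned here, the sketch below computes a sliding-window Kyte-Doolittle hydropathy profile and flags windows above a threshold. The hydropathy scale itself is the published Kyte-Doolittle scale; the window size, threshold, and example sequence are arbitrary illustrative choices. As Giuseppe notes, structure-based methods would additionally catch spatial hydrophobic patches that a pure sequence profile cannot see.

```python
# Sequence-based hydrophobicity screen: sliding-window mean of the
# Kyte-Doolittle hydropathy scale. High windows are candidate
# hydrophobic (potentially aggregation-prone) stretches.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
      "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
      "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
      "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq, window=7):
    """Mean KD hydropathy over a sliding window centered on each residue."""
    half = window // 2
    return [sum(KD[aa] for aa in seq[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]

def flag_hotspots(seq, window=7, threshold=2.0):
    """Window-center indices whose mean hydropathy exceeds the
    (arbitrary, illustrative) threshold."""
    half = window // 2
    prof = hydropathy_profile(seq, window)
    return [i + half for i, s in enumerate(prof) if s > threshold]

seq = "MKTLLILAVVAAALA" + "DDEEKK" + "IVLIVLI"  # hypothetical sequence
print(flag_hotspots(seq))
```

A holistic assessment, as described above, would combine this kind of profile with structure-based patch analysis and other models rather than rely on any single score.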
David Brühlmann [00:15:00]:
And I imagine that the more experiments you run and the more data you generate across different molecules, the better the predictions become — especially as you incorporate hybrid models or even machine-learning approaches to improve accuracy further.
Giuseppe Licari [00:15:19]:
Exactly. If you use machine-learning models, then you really need a significant amount of data. You can associate many properties of antibodies to those data sets, including electrostatic contributions, and this may improve your predictions. This is already being done in several methods.
I think the biggest challenge is actually finding the data — and finding data that is representative of all the possible APIs we might have in development. Nowadays we don’t only have standard monoclonal antibodies; we also have many multispecific formats, ADCs, and other new modalities.
The issue with machine learning is that, once you train your model on certain categories, the predictions may not extrapolate well to new modalities. That’s why I really like physics-based methods — because you can extrapolate. You don’t need experimental data to train the model; you rely on the underlying physics, and you can still generalize to new molecule formats.
David Brühlmann [00:16:36]:
Your work has now evolved from developability into formulation development and formulatability. We’ll talk about formulatability in a moment. I’m curious — how different is your work now, and your in silico approaches, when the goal is to develop a formulation? Having the right formulation is such an important part of CMC development.
So let’s start there. How different is this compared to developability? And then I want to move on to the next question: What are the specific approaches used to come up with a formulation that will work for your biologic?
Giuseppe Licari [00:17:17]:
The change in perspective is that in developability we start with several sequences, whereas in formulation development we work with only one sequence — the selected drug candidate. That’s the big shift. In discovery, we apply methods to select one molecule among many. Once we enter development, we can no longer change the sequence.
Historically, there has been a lot of work on modifying or mutating proteins to improve API properties. But once the sequence is fixed, there is much less guidance in the literature on how to support formulation development.
That’s the space you’re asking about — how to support formulation development using in silico methods. Now the idea is not to change the protein, but to change whatever is around it. The protein is fixed, but in a formulation it “feels” a specific environment — a given pH, buffer species, salts, excipients, surfactants. All of these may influence its behavior.
I am really convinced, and I have plenty of evidence, that simulations and computational approaches can help us understand what happens to a protein in a given environment. That’s the shift when moving from developability to formulation development in silico.
David Brühlmann [00:19:02]:
Earlier you mentioned a phase-appropriate approach. So how early should formulation development start? For example, in Phase I, should you use something “off-the-shelf,” like a platform formulation? I imagine this is easier for antibodies — but what about more complex molecules?
Giuseppe Licari [00:19:25]:
A platform approach can work for standard molecules — for example, for typical monoclonal antibodies. But when you have complex multispecific molecules, as we increasingly see in the clinic, it becomes more challenging. The platform formulation may or may not work.
In silico methods can be very helpful for de-risking your strategy and adjusting your planning. You can start with a broad platform and then use computational tools to filter out conditions that might be less favorable for your specific molecule.
Even for Phase I, you can use in silico approaches to fine-tune your strategy. The advantage is that you can apply computational methods at any time — you don’t need material, and they are relatively fast.
For Phase II, Phase III, or later stages, you can intensify experimental screening and rely more on computational support as needed. But at any phase, you can always go to in silico methods to gather useful information.
David Brühlmann [00:20:46]:
For those not familiar with formulation development, can you explain the difference between formulation development and formulatability? And when should each be performed? Or are they done together?
Giuseppe Licari [00:21:03]:
Formulatability is a relatively recent term, introduced in parallel with developability. It aims to evaluate whether a molecule can be easily formulated during development. So formulatability is assessed together with developability when screening candidates before development starts.
It gives you a forward-looking perspective: Is this molecule feasible to formulate under standard conditions? Or will it be challenging? That’s what formulatability tries to address.
Formulation development, on the other hand, is a work package executed during development — typically within the drug product development group. It is the process of identifying the best suitable formulation for a specific API. Any prior knowledge, including formulatability assessments, is extremely helpful for planning these experiments.
David Brühlmann [00:22:14]:
That wraps up Part One of our conversation with Giuseppe Licari. We’ve explored how computational methods are revolutionizing developability assessments and identifying formulation risks early.
In Part Two, we’ll dive deeper into excipient selection and real-world implementation strategies. If you found value in these insights, please leave a review on Apple Podcasts or your favorite platform. It helps other scientists like you discover these conversations. See you next time in Part Two.
What happens when therapeutic innovation meets real patient urgency? In this conversation, the barriers between scientist and patient all but vanish, bringing clarity—and a new sense of mission—to some of the biggest problems facing advanced therapy manufacturing and delivery.
Meet Jesús Zurdo, a biotech leader whose three decades of experience in innovation took on a whole new perspective when he became a leukemia patient himself. Seamlessly straddling the worlds of industry and patient care, Jesús Zurdo brings a refreshingly honest, systems-level view to cellular therapies, manufacturing bottlenecks, and the realities of getting therapies from the lab to bedside.
For me, the key thing is we are dealing with complex realities, and this requires complex solutions. And probably we need to be humble, all of us, I mean all stakeholders, I as a scientist or a professional, as a patient, about what we can contribute and what we can't. I think we need more people challenging the system, practices, and views.
We need to be critical, but we need to be humble about what solutions we bring. I can try to identify holes, but it would be a bit naive for me to say, "oh, I have this solution"; what I can bring is perspective. And I think by getting different stakeholders, manufacturing developers, clinicians, and patients into the same room and just looking at what is failing, what is working, and what the ideal solution would be, we will be able to develop much better therapeutics.
David Brühlmann [00:00:50]:
In part one, Jesús Zurdo shared how becoming a leukemia patient rewrote his professional mission after three decades in biotech innovation. Now, as both treatment recipient and industry insider, he is tackling the manufacturing and delivery challenges head on. Can point-of-care production work? Will allogeneic therapy solve scalability? What business models could actually democratize access? His patient urgency pushes these conversations beyond theory into practical solutions that could transform advanced therapy delivery.
I would like to talk about a slightly different aspect. You said, well, in this frame, how can we do it without bringing the patient to the clinic to measure all these metrics? This leads me to this point, because we often hear about point-of-care manufacturing, especially with stem cells, CAR-T, and so on. What is your perspective? How should this evolve, and how can it solve the affordability and, more importantly, the accessibility crisis?
Jesús Zurdo [00:03:15]:
I have some points of view, and I can share some experience I have come across. I'll tell you one big realization, which is what we were talking about: for me, it was seeing how stem cell registries have been operating very effectively for decades now, and how they provide cells to patients. Look at the quality assessment and the batch release: my dose of cells was a batch, and they had to do some testing before they released it. But you cannot just do the classical sterility testing; you cannot do everything you would do traditionally in pharma. And talking to people, friends who are working in CAR-Ts, they had to do all this quality release testing.
And that adds a lot of time. It means you need to freeze the cells, do all this testing, and then release the batch, but you use up a lot of sample in testing, and only then do you take it to the patient. That adds a tremendous amount of time, work, and cost. Now look at how some people are doing this point-of-care. There are several examples here, Caring Cross is promoting this, and the Hospital Clínic in Barcelona, I think they have treated 600 patients or something like that by now, which is pretty impressive out of a single hospital.
Galapagos was promoting something similar; unfortunately, they stopped that. This is a different paradigm: you do the apheresis at the hospital, then you use the fresh cells, modify them, purify them, and immediately put them back into the patient. Because you are using aseptic technologies, it's a question of risk assessment, which is what you always do in medicine. If you've done your validation up front, then what they see is that the risk of infection from bacteria or a virus finding its way into the sample is negligible.
Of course you need to validate this, but once you've done that, it means you cut time tremendously and you can do everything in the hospital. The only thing you need is the viral vector. And this means that you can centralize the viral vector as the key ingredient, make sure the quality is right, and then decentralize the final manufacturing step. It brings down costs, it brings down time, and importantly, the patient doesn't have to wait so long. There are horrible examples of hospitals where patients die because they don't get the treatments fast enough. So to me, it's not necessarily the solution for everything.
But clearly, for autologous cell therapy, it could be a game changer. Not for everything, not for everybody. But in some cases there is experience showing that this has promise, and it leverages what has been in use for many years. Don't reinvent the wheel. Just look at what people have been using; it works. Now let's look at the difficult stuff that is getting in the way.
David Brühlmann [00:05:52]:
And to what extent could we develop more allogeneic therapies? Obviously it's not possible for everything, but maybe in certain cases, instead of an autologous therapy, we could move over to allogeneic and then produce that centrally and ship it, for instance.
Jesús Zurdo [00:06:08]:
Well, we can, and there are examples. Unfortunately, it doesn't seem that allogeneic cell therapies are getting traction, and there are different problems. On one hand, not every allogeneic cell therapy is built the same way. And there are some cases where too much editing, HLA knockouts for example, is probably not a great idea, because then your body thinks you're bringing in another cancer. But there is some promise there. It can help in some regards. But I think it's not going to be a magic bullet.
What I like about allogeneic is that it is off-the-shelf. You have it ready, which means you can give it to the patient immediately and shorten the intervention time. And I do hope there will be something, I don't know whether it's gamma delta T cells, or NK cells, or an edited cell, or a combination of all of these, that will make it work. There is promise there.
However, I think probably in vivo cell therapy has more chances of succeeding. And my take on this is that I think it could revolutionize how cell therapy takes place.
Maybe not in the way people say it, because I've had this conversation about how it's going to bring cost down. I disagree. Cost, yes, but not price, because pricing of medicines is different; we talked about this before. And if you look at the price of some viral therapies right now, whatever doses they require, it clearly does not represent the cost of manufacturing.
However, one thing that to me, as a patient, is transformational: you might not need to use conditioning or lymphodepletion, in cancer but also in autoimmune diseases. And this is huge. It's huge because it reduces risk to patients, it reduces mortality linked to infections, and this is really important. But it also reduces the impact of some of this chemo on your brain and your body; generally you are stronger, you're able to deal with things in a better way.
And I also like the flexibility it brings. You can be very creative: you can bring in multiple CARs, you can do multiple dosings. Now, the issue I think we are probably not considering enough is the delivery. Delivery remains a problem, and no matter how much engineering we do on the vectors, whether it's LNPs or viral vectors, there is always going to be some off-target delivery. This is something we've seen with ADCs in the past, where there were issues with heart toxicity and liver toxicity, et cetera.
Now, with genetic medicines, this is a different story. If your cargo is integrated or has a genetic impact in the wrong cell type, that may not be desirable. Maybe I'm worrying unnecessarily, but the problem is that translating observations from a lab or animal model into a patient is not trivial.
However, there are options, like ex vivo at the bedside, that people are exploring. And I'm a firm believer that in vivo, with the right delivery and the right vehicles, could completely transform how autoimmune conditions and some oncology conditions are treated. I'm really hopeful. I am impressed by the results people are observing.
David Brühlmann [00:09:24]:
Yeah, it's amazing. And it's also amazing to see how fast it moves, how fast it evolves. When you look across the industry, Jesús, what are the trends you see with respect to new manufacturing technologies, with respect to new delivery methods, with respect to new ways to bring the drug to the patients? What is hot right now or where do you think the industry is moving to?
Jesús Zurdo [00:09:49]:
I think this is where I'm a bit disconnected, and I don't know if it's that I'm old-fashioned, but we were talking about overengineering, or just: what is the purpose of the innovation you're introducing? And I need to be careful here. With automation, there are beautiful solutions out there, and more companies offering them. It's very impressive what these platforms can do. And I think automation has a place even in point-of-care manufacturing, because it means you reduce risks, you reduce the human element. But as I was saying before, people are putting too much emphasis on this. Automation is not a solution to cost of goods, but it's an important element for manufacturing consistency. The problem is that it has to be agnostic.
You should be able to use whatever automation for a given product, so you don't have to go through barriers or buy new equipment in order to manage the different cell therapies you are administering out of a single hospital. And this is a problem the industry has to reckon with. We need to have standards, like in computers, where everybody can use a USB port. So we have an understanding of what the starting products are, the apheresis, if you will, and then you can have whatever automation solution, as long as the standards are maintained.
The other thing, I mean, we were talking about in vivo. There is lots and lots of work being done, and it's fascinating what people are doing these days with these nanoparticles and how they're engineered. Again, I think they have a place. They are super cleverly designed; with some of them, it's fascinating how much science is put in there. My question is, again: are we overengineering these things? What problems are we solving? I was talking before about delivery. These solutions sometimes retain significant delivery challenges, but also other issues around durability of response, et cetera. We are maybe trying to get the perfect solution before finding out what the real problem is. And as I was saying, maybe a hybrid between in vivo and ex vivo is something that would have a bigger impact and produce much better outcomes for patients.
That goes back to patient urgency, rather than going for a super sophisticated technology that would require lots of testing and validation. And I don't want to demonize nanoparticles; the same goes for viral vectors. It doesn't matter which platform you have: if I have a super innovative delivery platform, I would need to show that it's safe, that it doesn't go to the wrong place, et cetera. And then the question is, which patient will accept to be treated? I'm talking about a patient who is not suffering, or a healthy volunteer. It becomes challenging. I would not volunteer for that, but it has to be tested. What are the limitations of these platforms?
At the same time, we have solutions that are already working. So why don't we combine some of these super cleverly designed vectors with simpler platforms that can ensure fast adoption in the clinic, and then see what we can do with in vivo cell therapy, or ex vivo, or bedside, or whatever? To me, the challenge is being pragmatic: recognizing the urgency and going step by step. Let's make sure we can validate the physiological effect, and then we refine the delivery in due time.
David Brühlmann [00:13:14]:
What do you think will have the biggest impact right away? Because my feeling is there's a lot to be done. It will take time. Is there something that stands out that you think will have an immediate effect and will move the needle significantly?
Jesús Zurdo [00:13:31]:
Two things. First, healthcare provider capacity. You cannot make a significant improvement in the adoption of cell therapies if hospitals cannot administer them to patients. And this is something that is hidden; people assume it's just pricing, but it's not. You could price it whichever way you want, but if the hospital cannot give it to patients because they don't have the right infrastructure, training, or capabilities, forget it. It will never happen. We see it now in Europe, and it's clearly problematic in the UK: when our healthcare systems are limited in the amount of money they receive, and you tell them they need to invest now in building capabilities, for example for autologous therapies, and maybe in the future for other types of cell therapies, then who's going to pay for that? So that is a big, big bottleneck.
The other thing, and this makes me hopeful, is some bold clinical trials. Unfortunately not many in Europe or in the US; I see brilliant things being done in China. The innovation coming out of China is unbelievable. They are using some of these treatments as first line, and it's mind-blowing what they observe in some conditions. You need to take things with a pinch of salt, but it's how they combine things. I'm more familiar with the CAR-T arena: how they combine multiple CARs, how they administer the treatment, how they combine it with other drugs. And there are cases where they're using this as first line for multiple myeloma, and for ALL in some cases, without the need for chemo. And they see some impressive remissions and good survival without symptoms. This is really important for me. Now, it's early days, but to me it shows the promise. This could be really revolutionary.
But I know we need to be prudent. We cannot just go completely crazy. But I think there's a case for some conditions to really move these treatments earlier. This is another thing I found out as a patient. Yes, it's good to have another weapon, if you will, in reserve if the first line of treatment fails. The problem is that these treatments are really, really hard on patients: by the time they are eligible, because they have had maybe a couple of relapses already, they have issues with their kidneys, they have liver problems, and that means they are in some cases too weak to receive these therapies.
And even if you say, you know, we're going to try anyway, it's less likely they will survive. So if we were using these treatments early on, maybe we would give them a better chance of surviving the disease. And I think this is maybe changing; I know many clinicians are already promoting this, but they are a bit alone. There are lots of things that need to happen from a regulatory perspective, from a health economics perspective, and from a payer's perspective to make it acceptable to provide these treatments early on. And I think this could be transformational.
I also believe that we need to do better, or we need to do more, to improve the cell therapies that are already in the clinic. They are brilliant, but they are not as fantastic as many people think. But we have new knowledge, and this is why I think it's important that patients are treated: because we learn. We learn when the therapies work and when they don't, we learn about the limitations, and that will help innovation, that will help with second- and third-generation therapies.
David Brühlmann [00:16:59]:
Yeah, I believe that if we figure out the economic side, and obviously the safety side, of using these powerful new modalities earlier on, and, as you said, as first line rather than last line, this will be a total game changer. I do hope that we get there sooner rather than later. This has been great, Jesús. Before we wrap up, what burning question haven't I asked that you're eager to share with our biotech community?
Jesús Zurdo [00:17:31]:
For me, I think you really touched on the important stuff; the questions were very pertinent and to the point. For me, the key thing is we are dealing with complex realities, and this requires complex solutions. And probably we need to be humble, all of us, I mean all stakeholders, I as a scientist or a professional, as a patient, about what we can contribute and what we can't. I think we need more people challenging the system, practices, and views. We need to be critical, but we need to be humble about what solutions we bring.
I can try to identify holes, but it would be a bit naive for me to say, "oh, I have this solution"; what I can bring is perspective. And I think by getting different stakeholders, manufacturing developers, clinicians, and patients into the same room and just looking at what is failing, what is working, and what the ideal solution would be, we will be able to develop much better therapeutics.
And particularly, I want to emphasize the patient side. For me, it has been enlightening to see how these therapies are used, why people are not receiving them even when they're eligible, and, when they do receive them, what happens and why the efficacy can be lower, because of the reality of the experience in the clinic and at home. And I think this will increase the value of our efforts a hundredfold. No doubt about it.
David Brühlmann [00:18:45]:
Jesús, what is the most important takeaway from our conversation today?
Jesús Zurdo [00:18:51]:
I would say: remember, we all are or will be patients. This is important. It's not that I'm a scientist or a professional or a clinician working for somebody else's benefit. No, no, no. At some point in my life, I will be a patient. And this, I think, brings an element of humanity, and of urgency as well. It's not okay to hope or wait for a number of years, going back to urgency, because patients matter and the need is now. Cutting corners is not the solution; it's finding the big issue we are facing. So if we see that when we work with patients, we're working with ourselves, because that will be us in a few years, or is us now, or was us in the past, I think that would change the conversation.
David Brühlmann [00:19:38]:
What a great way to conclude our fantastic conversation, Jesús. Patients do matter, and thank you for reminding us that what we're doing as scientists is, at the end of the day, for the patient. Thank you also for giving us a perspective that goes beyond the science: ultimately, we are serving patients who desperately need these life-saving therapies. And thank you for sharing your own personal experience. Very powerful.
Jesús Zurdo [00:20:06]:
Thank you, David.
David Brühlmann [00:20:07]:
Where can people get a hold of you, Jesús?
Jesús Zurdo [00:20:09]:
Well, I've shared my email address with you, but the easiest way is to find me on LinkedIn and message me. I'm easy to reach there, and I would encourage anybody who has ideas, interests, or initiatives, or is willing to collaborate, to please reach out. I think we are all in this together, and I'm really happy to work with other people on finding better solutions.
David Brühlmann [00:20:31]:
Smart biotech scientists, please reach out to Jesús. You'll find the info in the show notes. Thank you once again, Jesús, for being on the show today.
Jesús Zurdo [00:20:40]:
Thank you David, it's been my pleasure and thanks a lot for hosting me.
David Brühlmann [00:20:45]:
Jesús Zurdo just gave us a masterclass in reimagining how we manufacture and deliver advanced therapies. His unique vantage point as both innovator and patient reminds us why solving these challenges matters beyond the lab. If this conversation sparked ideas for your own work, we'd love a review on Apple Podcasts or wherever you listen. Your feedback helps us reach more scientists. And if you need support in the development or manufacturing of advanced therapies or biologics, please check out the links in the show notes. We are here to help you. Thank you so much for tuning in today, and I'll see you next time.
About Jesús Zurdo
With more than two decades of experience in the biopharmaceutical industry, Jesús Zurdo plays an active role in advancing therapeutic development and improving patient access. His background spans cell and gene therapy, cancer immunotherapy, and executive coaching, complemented by the unique perspective he brings as a leukemia survivor.
He contributes to the field as a Non-Executive Director at Telomere Therapeutics and as an Expert Jury Member for the EIC Accelerator Program, collaborating with organizations to progress in next-generation therapies. Committed to genuinely patient-centered healthcare, he combines scientific expertise with lived experience to help drive innovations that deliver real value to patients.
Connect with Jesús Zurdo on LinkedIn.
The biggest breakthroughs in cell and gene therapy may not come solely from innovation at the bench, but from successfully delivering those advances to the people who need them.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann sits down with Jesús Zurdo, a scientist who’s spent three decades engineering life-changing biotechnologies—but whose outlook on the field shifted dramatically after becoming a leukemia patient himself. With experience on both sides of the system, Jesús Zurdo brings a rare, unfiltered perspective to the persistent gap between scientific promise and real-world patient access.
For me, patient urgency does not mean cutting corners. Again, manufacturing is critical. But, and this is where I think I'm starting to disagree with some people, I think that particularly in advanced therapies we are putting far too much emphasis on cost of goods, on automation, on sexy technologies, but not enough on how we can scale up.
And scale-up is not just the number of liters or grams or kilograms, or the number of cells we can make. It's more about how many patients we can reach. How easy is it going to be for them to access this treatment? Do we need to bring them to a city where the treatment will happen, or will they be able to access it close to their homes? And how affordable could these treatments be? This is where things start to become more complex.
David Brühlmann [00:00:52]:
Welcome to the Smart Biotech Scientist. Today's guest brings a perspective few can match: 30 years driving innovation in cell and gene therapies, then becoming the patient on the receiving end. Jesús Zurdo isn't just a drug discovery and technology development veteran. He's a leukemia survivor who has experienced firsthand the gap between groundbreaking science and actual patient access. This dual lens, innovator and patient, gives him urgent insights into why promising therapies aren't reaching those who desperately need them. Let's dive in. Welcome, Jesús. It's good to have you on today.
Jesús Zurdo [00:02:48]:
Hello. Good morning. Thank you, David. It's a pleasure for me to be here.
David Brühlmann [00:02:52]:
Absolutely. I'm thrilled to have you on and to talk about an extremely important topic. Jesús, to start us off, share something that you believe about bioprocess development that most people disagree with.
Jesús Zurdo [00:03:07]:
A tricky one, which I've been saying for many, many years now, is that you cannot turn a dog into a star. By this I mean you could define or develop the perfect process, but if your product is poorly designed or conceived, then you won't be able to solve the problems you will encounter later. And this is something I lived myself: you had a difficult product, you managed to turn it around and produce it, but then it crashed in clinical development, or the patient was not able to use it eventually. Start with the end in mind. Designing a good product is always important.
David Brühlmann [00:03:40]:
This is such an important reminder. Start with the end in mind and the end is the patient. And to dive into today's topic, draw us first into your story, Jesús. Because you have 30 years in biotech as an innovator and you also have seen the other side. You have been a leukemia patient yourself and that's quite a profound plot twist. So take us to the very beginning. What got you started in biotech? What drew you in? And then how did that experience, this personal experience, change everything?
Jesús Zurdo [00:04:12]:
Oh gosh, I can go back in time. I'm in love with biology. Actually, as I'm saying now as a patient, I just find biology unbelievable. I'm completely in love with it. So from the start, being able to translate biological knowledge into benefits for people's lives was something that intrigued me. And the key thing for me in making the move from academia to the biotech industry, when I was at Oxford, was realizing that we could design products that would have much better performance.
At that point in time, everybody was discovering targets, and then we realized that we could modify the product so it would have an impact on efficacy, viability, half-life, route of administration. And that to me was like, well, this is something maybe I could help with. So that was the kick in the bum, if you will, to make the transition to industry.
The second part of your question: what was the impact of becoming a patient? I think for me it was a lesson in humility. First of all, realizing that what we've been told, and what we've believed as scientists, that we can develop a magic bullet that will cure this and that, is a chimera; the clinical reality is so complex. I admire physicians and the complexity of the issues they have to deal with. But it's also realizing that every patient is a completely different universe. We cannot generalize, not only in the manifestation of the disease, but also in how the patient lives with the disease.
The second realization is that there's a disconnect between the clinical metrics we use to develop therapeutics and the lived experience of patients: what patients are dealing with, what is important to them. And that, as I said, was quite humbling as a scientist, realizing, gosh, I know so little. But at the same time it's been a call to action, realizing this is something that people have to hear more. And I think I can bring my two hats; I can somehow, hopefully, help bridge the gap between these worlds.
David Brühlmann [00:06:15]:
Yeah, and it's so important to bring the two hats and to raise awareness, because what I have noticed in my own career, and also talking to a lot of brilliant scientists, is that especially when you're in CMC, the patient seems very far away. Yes, we always say we're developing for the patient, but in reality you're so focused on your technical bioprocessing problem that you actually forget the patient at the end of the day. So I'm curious: from your perspective now as both a developer and a patient, what does patient urgency really mean, and what does that look like? How do we involve a patient early on in this whole process?
Jesús Zurdo [00:07:00]:
I'm guilty of that, because I started more on the discovery side. I thought, you know, manufacturing probably is not that important. And I had to eat my words, because manufacturing is crucial, is central. If you cannot make your product, the patient will never be able to receive it; and if your product is not able to sustain the conditions in which you are going to use it, the patient will never be able to use it. So, to your question: for me, patient urgency does not mean cutting corners. And this is the thing, a misconception. When people are developing new therapeutics, and I'm guilty as charged, we think we can go quickly with this product.
In the end we want to test efficacy, and that is understandable. But you also need to keep in mind that if we have a response, then we need to be able to scale, and the product has to be safe. And this means you need to be able to have confidence in what you're making. So again, manufacturing is critical. But, and this is where I think I'm starting to disagree with some people, I think that particularly in advanced therapies we are putting far too much emphasis on cost of goods, on automation, on sexy technologies, but not enough on how we can scale up.
And scale-up is not just the number of liters or grams or kilograms, or the number of cells we can make. It's more about how many patients we can reach. How easy is it going to be for them to access this treatment? Do we need to bring them to a city where the treatment will happen, or will they be able to access it close to their homes? And how affordable could these treatments be? This is where things start to become more complex. I think for me, urgency also means how quickly this treatment can become accessible to the general population.
And this is where I think we probably place far too much emphasis on "I know better," or "this is slightly better or more optimal," rather than on "can we, as an industry, make an impact in treating this particular condition, and what needs to happen?" I can give you an example that for me has been quite enlightening. I worked in manufacturing for many years; I know how much weight we put on controlling the process, on robustness, et cetera. But seeing how point-of-care manufacturing has been rolled out in practice by stem cell registries for many years has been enlightening, and you know how complex some of these products can be. My donor had an apheresis in the US, and within 48 hours I had the product by my bedside. I was given my cells. These were living cells: not frozen, no preservation.
This happens every day, across the world, in many countries with limited infrastructure. And it works really well. To me, this is the key thing: urgency means we can deploy solutions at scale, not that we have the best-ever, perfect, super-sophisticated solution. It's a compromise between safety, efficacy, and the right technologies. That doesn't mean we can do it dirty; it cannot be dirty. It has to be fit for purpose.
David Brühlmann [00:10:15]:
It seems it's also a question of not overdoing it, of not building in too much quality. Obviously these products need to be safe, and they have to be efficacious. But beyond that, don't do too much, because in the end it's a risk–reward balance.
Jesús Zurdo [00:10:33]:
You touch on a very important point, and it's quality. Quality doesn't mean cleanest ever, measured to the last decimal; it means understanding what the product is. For me, another example is viral vectors. Over the last 20 or 30 years we have developed much better processes with much higher titers, et cetera. But we have missed essential quality attributes, and now we are realizing these things are probably behind some of the safety issues we experience in the clinic. We had to increase the doses because the active ingredient is such a small proportion of the viral vector we are giving to the patient, so the patient is actually receiving a sub-par product. The answer is to identify the real critical quality attributes and work on addressing those, rather than chasing the perfect solution, which will never happen.
David Brühlmann [00:11:23]:
Absolutely, I fully agree. It reminds me of conversations I've had with quality colleagues; it's typically this kind of discussion. But I want to touch on a different aspect, because you said we shouldn't focus too much on cost of goods, yet we do need to talk about cost, because there is a real tragedy there. You mentioned the word accessible, and I think we have an accessibility tragedy: we now have these amazing cell and gene therapies that can treat diseases we could never treat before, in a single treatment with lasting effect, but they cost north of one, two, even three million dollars. So how do we bring down the costs of these medicines? And is that the only solution, or are there other things we should work on?
Jesús Zurdo [00:12:09]:
I'm going to get in trouble here. I'm not a health economist, I want to say that, and I have a biased view that could be wrong in many respects; this is my point of view. I'm not saying that cost of goods is not important. Particularly with advanced therapies, it is super important, and we need to strive to improve our processes, our scale, and the robustness of how we manufacture. Anything we do there is going to help. But I think it has become an excuse, or perhaps simply the obvious target. Everybody goes there: "oh, we need to bring down cost of goods, and that will bring down the prices."
One big argument is that if we bring down cost of goods, prices will follow. I disagree, because there are products in the clinic that have a relatively low cost of goods but a very, very high price. That is simply how the pharma business model operates. The issue is that we are somehow treating healthcare, as somebody once told me, like a luxury item. And value-based pricing means that if we claim a treatment can cure a person, then why would you not pay for regaining your life?
And okay, it's a valid argument, but to me it has several flaws. One: almost no treatment out there cures. Very few things cure. You can cure an infection, but you don't cure cancer, and I could argue for hours about this. Many of these "curative" treatments are at best managing the disease, which is brilliant. But that's the thing: it's a very different prospect to say to patients or payers, okay, we manage the disease and the patient can sustain an acceptable quality of life, which is great, than to claim we simply deleted the disease, as if it had never happened.
The other thing is that not everybody is able to pay the price that somebody in a good economic situation would pay for regaining their life. If you go to different countries, or to people who don't have the right coverage, what happens to them? For me it's a question of humanity; it cannot be about economic power. Healthcare should be a universal right, not a status symbol. And this means we need to look at different business models. There are examples, even in the States, of cost-plus pricing, which is how many businesses operate. It's not so common in pharma, but in the end everybody has to make a living.
Every business has to be sustainable. This is where we should look at different ways of pricing these products: ways that capture the value to the patient but are based on the number of patients treated, rather than the margin you can make on a single patient. Then maybe we are on the right track. Of course there are many solutions and options people could explore. But there's something else you touched on: some of these treatments cost multiple millions, and it's not necessarily the price that is the obstacle. Take the example of Casgevy.
Several other treatments have been developed for sickle cell disease, but Casgevy is unique in that it involved patients in understanding the value and the impact on quality of life. This happened in the UK, by the way, and it proved to the payers that the treatment actually made financial sense for them: if the cost of managing the disease over a number of years is X, and patients can have a profound improvement in how they live, then it can work. The result has been a super-fast approval.
And compared to other similar treatments, I won't name names, the adoption has been far quicker. Is it perfect? No. Last time I checked, we were talking about 50 or 60 patients treated, and sickle cell disease is endemic, particularly in Africa. So can we treat the 2.5 million patients in Africa at whatever the current price tag is? No. We need to come up with new business models. For me it's important that both things are considered: the business model has to be adequate, but the value to the patient also has to be properly reflected.
David Brühlmann [00:16:05]:
Absolutely. And I think there are some creative, even co-creative, ways to manage that. What comes to mind: I came across a clinic in India, founded, if I remember well, by a heart surgeon, and they faced a similar problem. They agreed that, as you said, there shouldn't be a two-speed medicine; everybody needs access to state-of-the-art treatment. So they came up with a kind of cross-financing system. In the end, the surgeons were able to do more surgeries a day and became even better at them, which in turn attracted the best-paying clients. So there are ways to manage this. Since you mentioned sickle cell disease: we hear amazing stories in the news. Does this correspond to the clinical reality, or is there some hype? What is your impression?
Jesús Zurdo [00:16:50]:
Oh, I think it's too early to say. And this is another problem the industry faces: we want answers now. It's no coincidence that most of the products in development are for cancer, where you can look at remission in weeks and sometimes at survival in months. But are we looking at the right metrics? I would argue that even in cancer we are missing the target big time. I keep arguing this, and I'll speak now from my personal perspective as a cancer patient: I don't care about two, three, four more months of life if my quality of life is not adequate, if I have to be bedridden. When people say, "oh, we got this marginal survival benefit," to me that is minimal. If you're telling me I can keep working and keep doing the activities I find fulfilling, then we're talking.
But this is what's missing, and it's why the patient perspective is so important. It also links to the cell therapies that have been developed. Short-term remission? Yes, it's impressive. But when you look at long-term sustained effect, disease-free survival, and symptom-free survival, it's a different story. It's only after two or three years of that data that you start to understand the real impact of these treatments. And I think that is the problem: are we prepared to wait three years? We also need to treat quite a few patients in order to really compare these therapies with the standard of care.
And this is a challenge, and this is cancer, by the way. If we go into other diseases, ones that flare in crises or that are progressive, like degenerative diseases, autoimmune conditions, or organ degeneration, it's far more complex, because you need to keep monitoring patients for a long time. This also connects to pricing: if we price based on outcomes, we need to monitor patients, and that is expensive and a big effort. That is the challenge we need to face: how do we monitor the efficacy of a treatment? On the industry side, I think this is where many companies are struggling, because it's very difficult to do at scale. It's only by bringing patients in as partners, so that they can actually opt in, that it becomes feasible.
And again, Casgevy for me has been a good example. Patients started to say, actually, this could be transformational in my life, and then you can measure things the right way. There could be more emphasis these days on patient-reported efficacy of treatments. In the age of wearables, there are things you can obtain and measure without the patient having to do anything, while they're at home. There are conditions that involve fatigue, or the ability to move and walk, and these are things you can measure without the patient even noticing.
You don't need to bring the patient to the clinic and run all these complicated tests, where maybe they had a bad day or didn't sleep enough and you get lots of noise. Instead, you monitor real-life functioning every day, which can be objective and also very powerful. Now, wearables alone won't solve the problem, but there are other angles we should definitely explore as a society to really assess the value of a treatment.
David Brühlmann [00:20:03]:
That wraps up Part One, where Jesús Zurdo shared how his own patient journey reshaped his mission, and we explored the frustrating disconnect between clinical-trial promises and real-world adoption.
In Part Two, we’ll dive deeper into manufacturing paradigms, point-of-care models, and the innovations that could truly help bridge the access gap. If you found value in today’s episode, please consider leaving a review on Apple Podcasts or your favorite platform—it genuinely helps other scientists discover the show. I’ll see you next time.
All right, smart scientists—that’s all for today on the Smart Biotech Scientist podcast. Thank you for tuning in and joining us on your journey toward bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your preferred podcast platform. Your support helps us empower more scientists like you.
For additional bioprocessing insights, visit www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech conversations in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Jesús Zurdo
With over 20 years of industry experience, Jesús Zurdo is a Director and patient advocate dedicated to advancing therapeutic development and patient access. His expertise spans cell and gene therapy, cancer immunotherapy, and executive coaching, informed by his own experience as a leukemia survivor.
He serves as a Non-Executive Director at Telomere Therapeutics and as an Expert Jury Member for the EIC Accelerator Program, working with organizations to drive progress in advanced therapies. Guided by a strong commitment to meaningful, patient-centered healthcare, he applies both scientific and personal insight to support innovations that truly benefit patients.
Connect with Jesús Zurdo on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
The biotech community is in continuous transformation. The landscape we navigate today will be dramatically different a decade from now, just as it was a decade ago. At the forefront of this evolution are advances in continuous manufacturing, artificial intelligence (AI), and the drive toward personalized medicine.
The recent episode of the Smart Biotech Scientist Podcast with David Brühlmann and Irina Ramos dives deep into these game-changing trends, offering a blueprint for scientists and leaders aiming to thrive in this high-stakes, high-impact field.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann interviews Irina Ramos, a chemical engineer who has worked across the spectrum of biopharma—from early lab research to global regulatory submissions—and contributed significantly to AstraZeneca’s global COVID-19 vaccine deployment.
I think we need to understand that the biotech community of today will not look the same in the next decade. Just like 10 years ago, this biotech community did not look like it does today.
I would emphasize that we are always learning, always studying, and always seeking solutions that meet the needs of these extraordinary products — products that are bringing us closer to personalized medicine.
We now have cell and gene therapies that can be customized for you, for your genome, for your specific conditions. So how can we, as manufacturers, envision that complexity and utilize innovation today that will solve the problems of tomorrow?
David Brühlmann [00:00:50]:
Welcome back to part two with Irina Ramos. We're continuing our conversation on continuous manufacturing implementation and moving into the broader topic of bioprocessing strategy. How do you build a CMC roadmap that won’t come back to bite you later?
And here’s the big one: How should teams prepare for AI integration without losing the human expertise that makes great process development possible? We’ll cover all that and more. Let’s jump back in.
So, Irina, in the spirit of keeping things simple — what phase-appropriate approach would you suggest? What should absolutely be done in Phase I? What can wait until later? And what are the critical decisions, for instance, that a startup founder has to make very early on to remain compliant as they grow — especially if they’re considering a hybrid or intensified bioprocessing approach?
Irina Ramos [00:02:58]:
Here’s an interesting exercise from the Nimble Consortium looking at different scenarios. One scenario is: what if you take an already approved product and convert that fed-batch process into a continuous one? Another scenario is: let’s use new, early-stage programs to feed into a continuous platform — and if they succeed, then you have a commercial need to scale up.
Both approaches are valid. What I hear from the community, though, is that it’s much harder to change an already approved program. The drivers are different. Perhaps the cost of goods (COGs) or manufacturing efficiency becomes a critical factor — and if you can transition to continuous processing, you might have an opportunity to lower costs.
But what I observe is that if your portfolio is large and you start implementing at the early stage, you typically need to compare fed-batch against intensified or steady-state perfusion. Perfusion almost always wins. If you can achieve that higher productivity, it feeds directly into your upstream cost of goods, and then downstream follows like a domino effect.
If you integrate capture next — which I think is the smart thing to do because you’re decreasing volume and concentrating your feed — you might then connect the low pH viral inactivation step afterward. Maybe you use a detergent ahead of capture; there are many possible configurations. You start realizing, “Oh, we already have this part — why not add a little piece here?”
Then you need to decide how you’ll control the end concentration and the diafiltration. Because our drug product colleagues — and if it’s not obvious, I’m more on the drug substance side — already have end-to-end solutions in place for fill-finish, dilution, and compounding.
So when we talk about end-to-end biomanufacturing, we should truly think end-to-end: from cell line development all the way to the drug product vial or even the combination product device.
David Brühlmann [00:05:03]:
Yeah, that’s a great point — thinking truly end-to-end. And what I also hear in what you’re saying is that there are different approaches: you can start small and expand. It’s not an all-or-nothing or one-time approach — it’s a process.
Irina Ramos [00:05:18]:
Yep, that’s exactly right.
David Brühlmann [00:05:20]:
Let’s circle back to your story — specifically your time at AstraZeneca. You led the technology transfer of the AstraZeneca COVID-19 vaccine. I’m curious — what did you learn in that high-pressure, fast-paced environment? What were your biggest experiences and takeaways?
Irina Ramos [00:05:39]:
We worked in a wonderful community of experts who truly wore the mission on their sleeves. We didn’t care how many hours we worked — we cared that we were helping save lives.
At the end of the day, the technical part — while complex and guided by a sort of “playbook” — was only one piece. What truly made it possible were the relationships, the conversations, and the willingness to go the extra mile.
It allowed us to envision a world that was fully connected across continents, where we could manage raw material suppliers and maintain as much order as possible — even with all the constraints we faced at that time: single-use components, supply shortages, and logistical hurdles. We worked together to accelerate wherever we could.
That’s the overarching view, but when you look deeper into the incredibly complex network that AstraZeneca built — and there were some extraordinary colleagues who made it happen — you start to see the cultural and operational constraints that had to be overcome. We really had to appeal to the mission to unite everyone’s efforts.
Regulatory agencies were incredibly responsive — much more than in normal circumstances. Normally, communication takes weeks or even months. But during this time, it was almost like having them on speed dial. We worked together because we shared the same goal: to save lives.
This wasn’t about filing a product for a niche indication — this was about everyone.
We also learned that many technical activities that would normally be done sequentially could, in fact, be run in parallel — at risk. Normally, you would complete one unit operation, validate it, and then start the next. Instead, we ran multiple streams simultaneously — filtration development, unit operations, validation studies — and then consolidated the data later.
Of course, we can’t expect to do this in normal circumstances — it requires immense resources, time, and global alignment — and the world won’t stop again to serve us in the same way. But it showed what’s possible when the mission is clear and everyone is aligned.
David Brühlmann [00:08:05]:
Now that the pandemic is behind us, we’ve all learned a lot — and the industry has changed. There are new ways of seeing and doing things. From your perspective, what are the lessons we should keep today? And what are the things we still need to implement to make drug development faster, better, and more reliable?
Irina Ramos [00:08:24]:
One of the main lessons is about productivity — and it’s not a question of if, but when we’ll face another pandemic. I hope it’s not during my lifetime, but there will be one. And during COVID, we learned that manufacturing was the bottleneck to getting products out to patients.
We also learned how critical product stability and distribution logistics were — remember the extreme temperature requirements for some vaccines? That’s why they couldn’t reach certain parts of the world where cold chain infrastructure was limited.
So how do we ensure that next time we can do better? There are already some smart, scalable solutions emerging — for example, modular or portable manufacturing units, where you can produce a vaccine in a single-use system or closed device, purify it, and administer it safely, all in one contained setup.
We also learned that we can take calculated risks without compromising product quality or patient safety — provided those risks are well understood and mitigated using tools we already have.
Another big takeaway was that by moving fast, we created templates and new ways of working. Now we know what we can continue doing efficiently — and where we need to scale back, because we no longer have the same level of funding or resources. It’s like we stretched the balloon during the pandemic, and now that balloon gives us more space — more knowledge, more flexibility, more understanding.
We also learned from what didn’t work — but we learned fast. And just as importantly, we saw a real cultural shift with regulators. Many of us no longer view them as “the police.” We now recognize that they play a crucial role and share our desire to see new technologies implemented. They want fewer manufacturing bottlenecks and better process control and capability.
So today, we have a much better two-way communication with regulatory agencies. We leverage industry consortiums and regulatory innovation programs, such as the Emerging Technology Program (ETP) at the FDA. And in Europe and other regions, there are similar initiatives — programs where regulators actively want to learn about your technology and provide feedback early on, even before it’s linked to a specific product. That’s a huge shift in how we collaborate.
David Brühlmann [00:10:40]:
Speaking of learning and adapting fast — we can’t ignore AI. It’s no longer the future; it’s already here. And AI in manufacturing is becoming a central part of bioprocessing conversations. How should teams prepare for AI integration, and what’s the biggest mindset shift they need to make?
Irina Ramos [00:11:04]:
We cannot simply trust the computer. Let me emphasize that again — AI is not about pressing a button and accepting whatever answer it gives.
AI is not entirely new — we’ve been using it for years under different names. Think about machine learning, predictive modeling, or computational process simulation — all of that is AI. It’s just that the buzzword has caught up with the tools.
Now, we need to demystify AI in bioprocessing. Is it a process development tool, or is it something that can take us all the way into GMP manufacturing? That’s a crucial distinction. Is it meant to predict outcomes, or to provide real-time insights? Is it product-specific, or more of a platform-level tool?
At the end of the day, we still need human critical thinking — someone who can connect the math to the physics, the model to the process. Without that, we risk running faster than we can control.
There are already some incredibly advanced labs, both in academia and industry, using a mix of existing, new, and modified AI tools — often connected directly to continuous manufacturing systems.
Imagine this: you’re running a chromatography column for 200 cycles. You already know what those chromatograms should look like from your historical data. When your operator is monitoring cycle 80, the AI model predicts that its profile resembles what cycle 150 used to look like — indicating resin degradation. So the system recommends planning a column replacement in 48 hours. Instead of reacting to a failure, you act proactively.
That’s a simple example, but the same applies to bioreactors, contamination detection, or filtration system monitoring.
Ultimately, the value of AI is defined by the problem you’re trying to solve — or prevent. If it’s a predictive tool, it’s only as good as the people interpreting it. That’s why we need to ensure new scientists coming from universities still understand the fundamentals — why we run things the way we do. That foundational knowledge becomes the input that trains AI tools, and it’s what enables teams to interpret the output in a meaningful, safe, and effective way.
David Brühlmann [00:13:51]:
What are the most important skills a scientist listening today needs to develop?
Irina Ramos [00:13:56]:
They need to truly understand the fundamentals — the scientific and engineering principles behind what we do. We’re still teaching those in schools, and for good reason.
I also teach at the university level, and we’re now dealing with students using tools like ChatGPT, Copilot, and other AI assistants to complete assignments. I strongly recommend: don’t use these tools to pass a class — use them to learn faster and understand better.
Back in my day, we had to spend half an hour in the library just to find a single book. Today, you can use AI to contextualize information and find patterns much faster. That’s the right way to use it — as an accelerator for learning, not a shortcut.
Professors are now reshaping how they teach, so students can use these tools responsibly and enter the workforce at a higher level of understanding.
Remember: you’re building your personal brand from the very beginning. Earlier, I mentioned the importance of trust, competence, and commitment — those apply here too. If your understanding is shallow, even if your grades look great, it won’t take you far. You’ll end up disappointed and frustrated, because your foundation will be weak — and once that’s set, it’s very hard to rebuild.
So, focus on your fundamentals. Find good mentors. Surround yourself with colleagues who are curious and willing to go deep into the principles. This isn’t an easy field — biochemical engineering can be tough, and sometimes it feels overwhelming. But keep at it. Work hard, stay curious, and one day you’ll look back and realize you’re helping change the world — one step at a time.
David Brühlmann [00:15:38]:
Before we wrap up, Irina — what burning question haven’t I asked that you’d like to share with our biotech community?
Irina Ramos [00:15:45]:
That’s a great question. I think we need to understand that the biotech community of today will look very different in the next decade — just as it looks completely different from ten years ago.
We’re always learning, studying, and searching for new solutions to meet the needs of increasingly complex and personalized therapies — like cell and gene therapies tailored to an individual’s DNA or specific condition.
The question I would pose is this: How can we, in manufacturing, anticipate that complexity and use innovation today to solve the problems of tomorrow? And beyond that — how can we bring together the new workforce and the experienced workforce in a truly collaborative way?
We need a “happy marriage” between creativity and experience. Younger scientists should never feel there’s a ceiling limiting their innovation, while more seasoned experts shouldn’t feel threatened by new ideas or technologies. We’re a highly regulated industry, and yes, many of our systems already “work.” But that doesn’t mean we can’t evolve. It’s a continuous effort built on goodwill, mutual respect, and always keeping the patient at the center of what we do.
David Brühlmann [00:17:09]:
This has been great, Irina. What’s the single most important takeaway from our conversation?
Irina Ramos [00:17:15]:
That in continuous manufacturing, one size does not fit all.
If you think it’s too complex, too expensive, or too big of a transformation — start small. Take incremental steps. Get together with your colleagues, run the numbers, and envision the possibilities.
Also, recognize that the workforce of the future will look very different because of AI. Let’s focus on how to get the best out of these tools — using them to enhance, not replace, human expertise.
And above all, stay passionate. This work has to make sense to you personally. You need to be driven by something bigger than yourself. I often remind my teams — when we’re deep in the details of a process or experiment — to zoom out. Ask yourself, Why are we here?
We’re not just in a gray lab doing routine work. We’re part of something transformative. And every so often, someone needs to step back and make sure we’re still focused on what truly matters. I hope that message came through in our conversation today.
David Brühlmann [00:18:19]:
Excellent. Thank you so much, Irina, for joining us and sharing your insights and passion. Where can people find you?
Irina Ramos [00:18:28]:
The easiest way is on LinkedIn — feel free to reach out there.
David Brühlmann [00:18:31]:
Perfect. Smart Biotech Scientists, I’ll include Irina’s link in the show notes. Please do connect with her. And once again, Irina, thank you so much — it’s been a real pleasure.
Irina Ramos [00:18:41]:
Thank you so much, David, for the opportunity.
David Brühlmann [00:18:45]:
There you have it — from continuous processing to AI readiness, Irina Ramos just gave us a masterclass in forward-thinking CMC leadership.
If this episode helped you see your challenges differently, please leave a review on Apple Podcasts or wherever you listen. It really helps other biotech professionals find us. Thank you so much for tuning in today. And remember: science may give you headaches, but biologic drug development shouldn’t. See you next time — and let’s keep smartening up biotech together.
About Irina Ramos
Irina Ramos is a downstream bioprocessing expert with more than 15 years of experience advancing biologics from early development through regulatory milestones. She has led teams across process development, scalability, technology transfer, and validation, and has been a key contributor to innovations in platform technologies—especially in continuous manufacturing.
Irina also led the technology transfer of AstraZeneca’s COVID-19 vaccine process to an international manufacturing partner.
For over a decade, she has taught graduate-level biotechnology courses at UMBC. She holds a B.S. in Chemical Engineering from the University of Porto and a Ph.D. in Chemical & Biochemical Engineering from UMBC. She is deeply committed to mentorship and to creating tools that help scientists communicate more effectively.
Connect with Irina Ramos on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
What does it take to build an innovation culture in one of the world’s most careful—and consequential—industries? Biotech is often seen as conservative by necessity, with every process and product touching real lives.
This episode dives straight into the balancing act of pushing boundaries with new technologies while rigorously protecting quality and patient safety—a challenge every scientist, engineer, and leader knows firsthand.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Irina Ramos, a chemical engineer by training who has navigated everything from bench research to global regulatory filings and who played a key role in the worldwide rollout of AstraZeneca’s COVID-19 vaccine.
The conservative aspect of our industry is necessary. We shouldn’t fight it; we should embrace it. We work with products that have a direct impact on people’s lives, so there’s no way around it. Rather than pushing to make these discussions less conservative, we should leverage new ways of working and new technologies — maybe automation and maybe digitalization — with the appropriate checks and balances, following ICH Q guidance adapted to the new solutions that will solve these new problems.
David Brühlmann [00:00:39]:
Have you ever wondered how innovation leaders balance risk-taking with regulatory compliance? Or how to navigate the transition from batch to continuous processing without disrupting your existing operations? Welcome to the Smart Biotech Scientist Podcast. Today we are diving deep with Irina Ramos, who is a downstream processing powerhouse and has led CMC programs from bench to regulatory filing. And yes, she helped transfer the AstraZeneca COVID-19 vaccine globally. Irina’s bringing the unfiltered truth about building an innovation culture in conservative environments. So grab your coffee — this episode is packed with plenty of insights.
Welcome, Irina. It’s great to have you on today.
Irina Ramos [00:02:40]:
Thank you, David. Happy to be here.
David Brühlmann [00:02:42]:
It’s a pleasure. Irina, share something that you believe about bioprocess development that most people disagree with.
Irina Ramos [00:02:50]:
I think it’s the vision for the future. Some of us might disagree on when we’ll be ready to accomplish that vision. Some people might disagree that, in the next 10 to 20 years, we will have lights-out manufacturing — which means we wouldn’t need people to walk in, gown up, and perform the tasks we do today. I do believe we can achieve that. I think we can envision a manufacturing facility that is smaller and doesn’t need people inside except to solve problems. And if everything runs at steady state, we don’t need the lights on.
David Brühlmann [00:03:31]:
I love this vision. It’s a great vision — I like it. Before we dive a bit further into today’s topic — we’re going to cover continuous processing, obviously, and a lot more — I’d love to go into your story. Can you walk us through your journey and share what got you started in biotech, some of the interesting pit stops along the way, and what you’re doing today?
Irina Ramos [00:03:53]:
It’s a series of happy accidents. And I hope students and junior scientists are listening to this because sometimes, when they happen, we wonder if it’s a good thing or a bad thing — but it’s about how we embrace them.
My background is in Chemical Engineering. I’m originally from Portugal, and at the time, the biotech industry was not big there. There was no focus at the university to lead us into that industry. So the happy accident was an exchange student program that opened the door to the United States and exposed me to smaller-scale systems. I could look at cells, I could look at proteins, and I could interact with a microscope. I learned that I could apply chemical engineering principles to very small, microscopic things.
What’s interesting is that when we enjoy working with the people we encounter in life, that becomes the driver for our choices. For me, it’s really about the people. I absolutely loved working with the professor I met in the U.S., Dr. Theresa Good, who invited me to apply for the PhD program — and 22 years later, I’m still here.
After the program, I got a job in industry at AstraZeneca — at the time, it was MedImmune — and we learned to work in a small-company environment with the resources of a large company. For over a decade, we grew with the AstraZeneca portfolio, and I learned what a chemical engineer could do from bench scale all the way to large scale. It was a great demonstration of how we can apply our background in the biotech industry.
David Brühlmann [00:05:42]:
Yeah, that’s great. I love the way you see it — it’s about people, and I totally agree. Obviously, we’re passionate about science, but ultimately, we work with people. That’s also part of my story — I love working with different, interesting people and learning from them.
So, Irina, you’ve obviously seen a lot of different companies and settings. You’ve led teams in early-stage development all the way through regulatory filings. I’d like to start by looking at that part of your work — what are the biggest mindset shifts you’ve seen successful CMC leaders make when they transition from scientist to innovation leader?
Irina Ramos [00:06:29]:
You are working with individuals who come from very different functions. Each function has its own role, responsibility, and of course, accountability. So in order to bring them all together, we all need to be aware of the goals for that project, for that program, and for the team.
There are two things. First, the goal setting has to be very clear. We all need to agree on the timeline, we all need to understand where the resources come from, and we need to give and take.
The second one is the risks — from the unknowns to the absolutely known and high-risk items. How do we all embrace the same risks? Traditionally, we call it a risk register. A risk register should not be divided into slices per function; it’s really to compile the risks so that the team understands we all embrace the same risks.
So in many ways, the secret — that is not so secret anymore — is: how do we build a team where we all understand, through good communication tools, what it will take to meet that timeline?
So often, you find that the stronger teams are the ones that share resources and knowledge — from leadership approval at the functional level to the overarching governance of the portfolio. So I think the key question is: how do you then build that trust, that we are all together for the same purpose?
David Brühlmann [00:07:52]:
We work in a conservative industry. Obviously, there is a lot of innovation going on, but still, biotech is quite conservative. So how do you build an innovation culture despite these constraints we have in our industry?
Irina Ramos [00:08:07]:
Everything starts with a problem — with a need. If that problem and need are a common denominator, then it’s much easier to convince others. You need a solution; you need to change; you need to seek that solution.
Often, you need to collaborate with vendors or other partners. If you don’t have that common understanding that the problem is real — and it’s no longer just a functional interest to improve — it’s harder to convince other stakeholders, because they don’t want to change. Things are working — why would we change?
It’s easier to innovate when you’re working on novel modalities, because the current platform doesn’t fit anymore. It’s also easier to innovate when you explain to leadership that you need to invest — and that such investment brings resources. That could be budget, people, or space. Maybe you even need a different facility to get that product out the door.
Because let’s be honest — in our industry, it’s the product and the discovery of that product that drive everything else. Manufacturing is a very important piece of the puzzle, but we need to make it work for that product to help patients.
So, new modalities, new constructs — I like to call them these “Frankenstein maps,” right? We no longer have the simple maps. The traditional 20-year-old platform doesn’t always work. So how do you translate that into innovation?
Innovation comes in different shapes and forms. Often, people like to think it’s a shiny piece of equipment that makes things work. Sometimes, innovation is in how we work — maybe we need to change processes to make things faster. We might need to automate documentation and template writing. We might need to innovate in the way we interact with regulatory agencies.
The vision has to be: what is the next platform? What does that look like? And does it apply across the portfolio, or do you have a dedicated platform per modality?
Once you have people on board with your problem, you can bring those stakeholders together to discuss a technology roadmap — and how that plays a role in your product launch. It’s no longer just early-stage work that matters; you have to think all the way to the filing — for a BLA, for instance.
It doesn’t mean you have to have all the answers at the beginning, though. People are often skeptical about what that strategy looks like — there’s so much we don’t know. And that’s okay. What’s not okay is to not think about or not address what we don’t know.
So we need to build that technology roadmap — identifying the milestones when we absolutely need to know the answer, and which functions need to be involved by that time.
At the end of the day, it’s really about human psychology. We want to be heard. We want our function to be represented when it’s supposed to be. So how do we build that alignment and reporting across those milestones for each function?
David Brühlmann [00:11:05]:
You’re making an important point — it’s about the way we think. And this leads me to the next question. What mindsets would you suggest the scientists listening should adopt to thrive in this environment and also to drive innovation?
Irina Ramos [00:11:22]:
We always want to work with competent people. We want to trust them because we believe they know what they’re talking about — from their specific function or area.
We also want to work with positive people — but people who live in reality. So, I’m talking about being good at what you do. And to get there, it’s not only through school — you have to be smart in the way you interact with people, the way you listen to them, and really do so with the intention of the common good — not a personal or individual agenda.
Just because I saw or heard something at a conference or read a really nice paper — how do I translate that idea into solving the problem my organization actually needs solved? Because if they don’t see that need, they won’t support you.
So we need competent people; you build trust, and then you have conversations that build strategy.
The conservative aspect of our industry is necessary. We shouldn’t fight it — we should embrace it. We work with products that have a direct impact on people’s lives, so there’s no way around it. Rather than pushing to make these discussions less conservative, we should leverage new ways of working and new technologies — maybe automation, maybe digitalization — with the appropriate checks and balances, following ICH Q guidance adapted to the new solutions that will solve these new problems.
I want to emphasize that — and we’ll talk more about AI, I’m sure. We don’t have computers to replace us. We still need competent, valuable scientists and engineers to contextualize how we are going to apply those solutions.
David Brühlmann [00:13:05]:
Whether you’re leading innovation projects, working in CMC development, or preparing an IND or BLA filing, you have to interact with all kinds of stakeholders — especially in bigger companies.
One of our listeners mentioned that coordinating between various stakeholders across different departments and locations is their biggest challenge when implementing innovative solutions. But I think that applies to all kinds of settings.
I’d love to get your perspective, Irina. From your experience, what strategies have you found most effective for communication, collaboration, and ultimately achieving the results you’re aiming for?
Irina Ramos [00:13:52]:
I’ve had experiences like that too — absolutely. From time zones to geography to culture — even the way people think methodically about something, or how they go about giving updates and explaining what they’re doing.
I found different ways, depending on my colleagues. Sometimes, you might want to leverage a one-on-one. Maybe you have a one-on-one every other week with someone who has very limited English — think about that. In that more focused environment, you’re supporting that colleague.
And if you’re leading a team, you’re not alone anymore — that colleague is not alone anymore. You’re representing and filling the gaps as that colleague communicates to the rest of the team, if necessary.
You need strong project management skills in a team. That means when you present an agenda, sometimes it’s not enough — especially for teams from different cultures — to just have a bullet list of what we’re going to discuss. Maybe you need to be clear about expectations for those topics.
So I actually did that — I wrote not only what updates we needed from each function, but also the expectation: not only the timing, but the linkage of that topic to other functions. So now they come more prepared. The action items are very clear, and so are the timelines.
So: clear expectations, organized agendas, and meeting minutes that truly reflect what was discussed.
But we also need to be prepared for ad hoc discussions. In a recent example, I found that if we’re a bit more senior in the organization and have seen good ways of doing things, we are responsible for coaching. You might say, “Have you thought about this?” — and you need to build trust for your ideas to be accepted. Maybe you just ask them to trust you: “Let’s try it this way; let’s see if it works.”
It’s complex — and it applies to project teams, and to innovation as well. Sometimes innovation brings an extra layer of complexity: do they have the capability at that site? Do they have what it takes — from space, to bandwidth, to people, to know-how?
And we shouldn’t assume. Don’t assume anything. You should ask. Be curious about what those individuals or other teams do in their business-as-usual environment. So when you need to push them — to accelerate them, to think outside the box — you already understand what kind of tools you need to use to stretch a little bit more.
David Brühlmann [00:16:34]:
What I’m hearing comes down to, number one, clear communication; number two, trying to understand what their needs are and who they are; and then also having clear expectations and making sure that things happen according to the agreed timeline.
Now, let’s shift our conversation to a topic that’s dear to your heart. Let’s talk about continuous manufacturing. For those who are still in the fed-batch world, let’s start out in a slightly controversial way: what is one misconception about continuous manufacturing that you hear most often — and how do you address it?
Irina Ramos [00:17:09]:
Too complex. Too expensive to implement. It’s going to take longer timelines to develop a continuous process — even before you scale it up. A lot of uncertainty, right? A lot of ambiguity.
It’s all valid. All of these concerns are valid — and they actually matter. And why do I call them misconceptions? Because it’s really company by company, facility by facility, and situation by situation.
Continuous manufacturing is not the solution for all products. Nor is there a single, one-size-fits-all solution for continuous processing. That’s why the community is growing — vendors are providing different solutions. And we don’t always call it “continuous.” Sometimes we call it intensified, or even automated manufacturing.
So how do we make sure that, for our portfolio and our existing facilities — both internal and external — we’re making the right choices? Because if you have to work with CMOs (Contract Manufacturing Organizations), you need to understand their ability to adapt to your products.
Then you need to make a decision: is it worth it? Can we have a forecast that we can actually trust? Often, these forecasts — for example, blockbuster forecasts — are not really met. They just give us an indication. But can we expand and apply that to the capacity we will need for that product?
So it all comes down to productivity for us in the bioprocess world. My background is downstream, but if my upstream colleagues don’t develop a highly productive upstream process in the bioreactor, downstream intensification doesn’t make sense.
You can think about traditional perfusion, steady-state perfusion, or dynamic perfusion. You can think about a couple of weeks’ duration or longer — duration doesn’t really matter. Productivity is the measure — how much mass per unit time per unit volume you’re able to produce. How much do you actually need?
Only after that does downstream make sense to intensify — to make it continuous. Even though some intensification tools in downstream could also be applied to fed-batch.
If we find a way to have a flexible facility — more like pieces of a puzzle — we can identify a stage-wise approach to implementation that makes sense for the product. For example, if you’ve aligned your upstream feeding strategy to achieve really high productivity — 5x, 10x improvement — then it makes sense to start thinking about downstream integration.
Where I’d like to highlight is the PAT part — Process Analytical Technology — all the analytical tools, sensors, and real-time or near real-time tools. If they’re being developed to facilitate or even enable continuous processing, why wouldn’t we use them in a fed-batch process?
So I actually think that, from an analytical perspective, we might have an even stronger push — from our customers — to get these technologies out the door. And I know the regulators want that, because at the end of the day, they want us to control our processes better.
So how can we entertain this vision — of what’s out there — and then make our own internal strategy that fits our portfolio needs for the next five to ten years?
The worst thing that can happen is if we set a strategy and keep it at steady state — “This works for now” — and no one is really thinking about the next three, five, or ten years in terms of portfolio evolution.
And then leadership comes with these very intriguing molecules — that may not be very productive, or that degrade very fast. And that’s where continuous can actually be your solution.
So I talked about productivity, and now I’m talking about product quality. Product quality could be another driver to implement continuous manufacturing — because then your intermediate is not sitting around for too long.
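Irina’s productivity yardstick — mass per unit time per unit volume — can be made concrete with a quick back-of-the-envelope comparison. The sketch below is illustrative only; the titers, run lengths, and perfusion rate are hypothetical numbers, not figures from the episode.

```python
def volumetric_productivity(titer_g_per_l: float, run_days: float,
                            harvest_volumes_per_day: float = 0.0) -> float:
    """Return volumetric productivity in g/L/day.

    Fed-batch: titer is the final titer, all mass recovered at the end,
    so productivity = titer / run duration.
    Perfusion: titer is the (typically lower) harvest titer, collected
    continuously at `harvest_volumes_per_day` reactor volumes per day,
    so productivity = titer x perfusion rate.
    """
    if harvest_volumes_per_day > 0:
        return titer_g_per_l * harvest_volumes_per_day
    return titer_g_per_l / run_days


# Hypothetical numbers: a 14-day fed-batch reaching 5 g/L vs. a perfusion
# process harvesting 1.5 g/L at one reactor volume per day.
fed_batch = volumetric_productivity(titer_g_per_l=5.0, run_days=14)
perfusion = volumetric_productivity(titer_g_per_l=1.5, run_days=30,
                                    harvest_volumes_per_day=1.0)
print(f"fed-batch: {fed_batch:.2f} g/L/day, perfusion: {perfusion:.2f} g/L/day")
```

Under these made-up assumptions, the perfusion process delivers roughly four times the volumetric productivity despite a much lower titer — which is Irina’s point that duration and titer alone don’t tell the story.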
David Brühlmann [00:21:00]:
You’ve just come back from the Integrated Continuous Biomanufacturing (ICB) conference in Dubrovnik — I saw it on LinkedIn. What is the state of continuous manufacturing today? What’s hot in this area?
Irina Ramos [00:21:12]:
The most interesting thing about the conference was that we focused on having more continents represented — leveraging the location in Europe. And we did that.
We also wanted to bring new people and institutions on board, so we offered pre-conference tutorials with three big names in this field who decided not to retire — and we’re very thankful for that. They’re still around, and they’re still incredibly valuable in providing context and mentorship.
So now, David, you can imagine — you have this wave of new people coming in, interested and curious. You have more organizations represented, presenting posters and oral talks. And at the end of the day, what you get is more diverse solutions to intensify and make the bioprocess continuous. That, in turn, enables many more “what if” conversations.
So, what’s hot right now? Well, we’re being bombarded — in a positive way — with new modalities and constructs that no longer fit the traditional platform. You need to find automated and smart ways to move the needle — to get closer to utilizing predictive models that reflect what’s happening in the process.
These models are predictive not only of real-time activities, but also of scalability. There are many technologies used in other industries that vendors are now bringing into bioprocessing — a field that’s admittedly late to the game in some respects — but we also bring unique challenges: closed systems, single-use components that need to last, and stringent GMP expectations.
A very important topic is how to leverage expectations around documentation, data gathering, and monitoring — and do more real or near real-time analysis. Some people are skeptical, others are encouraged by the idea of real-time release.
So how do we bring all these ideas together? Eventually, they converge into what we’re all striving for: faster processing, higher productivity, smaller facilities, single-use or hybrid setups (stainless steel combined with single-use). Ultimately, your PAT framework supports better process control and drives digitalization across the entire manufacturing process.
David Brühlmann [00:23:28]:
How do you decide whether to go for a hybrid approach or a fully end-to-end continuous process? Because as you said, there are so many options now — and it can be quite overwhelming. Are there some simple guiding principles to help scientists choose the best approach?
Irina Ramos [00:23:46]:
I think there are. A stage-wise approach is always better — even if your final goal is fully end-to-end continuous.
Unless you’re building from scratch — a greenfield or even a brownfield facility — it’s important to leverage what you already have. But you also need a portfolio that can feed into that long-term plan, including scalability considerations.
Maybe you start with clinical manufacturing, and then, if those products are successful, they become the ones feeding your commercial facility. So you need that vision — that roadmap.
A hybrid approach, especially while vendors are still maturing their solutions, is extremely important. And we shouldn’t only rely on the big vendors — we should also engage with smaller companies that bring innovative solutions and prevent the industry from becoming monopolized. Competition is healthy in this field.
In the end, it’s really about what fits your organization — if you have a clear vision of the kind of portfolio you’re going to focus on.
David Brühlmann [00:24:49]:
To what extent do scientists need to prioritize the control strategy or real-time monitoring? That seems crucial for continuous manufacturing. Are there some easy solutions to start with?
Irina Ramos [00:25:01]:
Think about what you already do in fed-batch — how do you control the process? Look for parallels.
Maybe you don’t need everything under the sun — and that helps demystify the perceived complexity of these implementations. Just because something would be nice to have doesn’t mean it’s a must-have to control the process effectively today.
So, leave the nice-to-haves for later. Focus first on the must-haves, which are linked to regulatory expectations.
Look at what you already do in fed-batch — identify what you absolutely need to continue doing for continuous — and then maybe add sensors that provide feedback loop control.
For example, you might monitor concentrations or pH. If you’re titrating inline, you’ll want that sensor connected to a simple feedback loop. In fed-batch, you might not need that as much.
So, you’re not reinventing the wheel — these tools already exist. You’re just transferring that knowledge to equipment that’s now closed, enabling closed-system operations. There are many examples like that — where you can leverage existing sensors and technologies to provide the necessary control and insight.
David Brühlmann [00:26:11]:
That’s it for part one. We’ve explored leadership mindset shifts, innovation culture building, and the advantages of continuous manufacturing.
If these insights sparked something for you, please leave a review on Apple Podcasts or your favorite platform — it helps scientists like you discover these conversations.
Stay tuned for part two, where Irina reveals what the COVID vaccine taught us about what’s truly essential in process development — plus her take on AI in biomanufacturing.
See you next time.
All right, smart scientists — that’s all for today on the Smart Biotech Scientist Podcast.
Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite podcast platform. By doing so, you’ll help empower more scientists like you.
For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech insights in our next episode. Until then — let’s continue to smarten up biotech!
Imagine unlocking a biological sample and, instead of peering under the usual “streetlight” of targeted analysis, being able to illuminate the entire landscape of metabolites, glycans, and unknown impurities.
This episode is all about breaking the limits of current bioprocess analytics, featuring a practical journey into cryogenic infrared ion spectroscopy—technology designed to reveal what’s been invisible until now.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Tom Rizzo, Professor Emeritus at EPFL and Co-Founder and CSO at ISOSPEC Analytics, a life science company that aims to simplify molecular identification.
The streetlight effect goes as follows: you lose your keys in the dark, and the only place you look for them is under the streetlamp, because that’s the only place you can see. Now, you don’t know where you lost your keys, but you only look under the streetlamp because it’s dark everywhere else.
Well, you know, in the field of metabolomics, for example, one can identify only a small fraction of the total available metabolites. So what do you do? You do a targeted analysis — you look for certain metabolites, the ones that you can see. But there’s this whole 90% of them, perhaps, that you can’t see.
And so, by using this technology — where we can identify the large majority of metabolites, for example, and we believe that we can — it opens up a whole new world of investigation. Because you can look anywhere for your solution, and not only under the streetlamp.
David Brühlmann [00:00:53]:
Welcome to The Smart Biotech Scientist. I’m David Brühlmann, and this is part two of our conversation with Tom Rizzo, who is a Professor Emeritus at EPFL and now Chief Scientific Officer at ISOSPEC Analytics.
Last time, we covered the science behind cryogenic infrared ion spectroscopy, and today we’re getting practical — how this technology helps you discover biomarkers, identify mysterious degradation products, and accelerate process development.
We’ll also explore Tom’s transition from academia to entrepreneurship and what it takes to commercialize breakthrough technology. If you’re dealing with complex mixtures and unknown impurities — stay tuned. This is for you.
This new dimension — what will it enable scientists, companies, pharmaceutical organizations, or even biomarker researchers to do? Can we detect new molecules we weren’t able to detect before? Or will the workflow be faster?
I’ve done a lot of omics work in my career, which can be very complex. I imagine your technology could simplify that too.
Tom Rizzo [00:03:17]:
Okay, well, there are several parts to my answer on this one.
In the area of glycans, for example, there’s a real difficulty in analyzing different isomers. And we know — and there’s evidence in the scientific literature — that, for example, the glycosylation of a monoclonal antibody affects its efficacy, safety, and lifetime.
But if you can’t resolve the different isomers, you don’t know what’s responsible for that difference in efficacy. So being able to do isomer-specific glycan analysis sheds new light on the mechanisms or effects of glycosylation on efficacy.
That’s one side of things — being able to determine, down to the isomeric level, what species are adorning your protein is an important part of understanding its function and why it functions so well.
But there’s also a broader issue, particularly when you talk about different omics. Metabolomics, for example. And I like to use the analogy of the streetlight effect. Do you know what the streetlight effect is? Have you heard of this before?
David Brühlmann [00:04:18]:
No, I don’t know.
Tom Rizzo [00:04:19]:
Okay, so the streetlight effect goes as follows: you lose your keys in the dark, right? And the only place you look for them is under the streetlamp, because that’s the only place you can see. Now, you don’t know where you lost your keys, but you only look under the streetlamp because it’s dark everywhere else.
Well, you know, in the field of metabolomics, for example, one can identify only a small fraction of the total available metabolites. So what do you do? You do a targeted analysis — you look for certain metabolites, the ones that you can see. But there’s this whole 90% of them, perhaps, that you can’t see.
And so, by using this technology — where we can identify the large majority of metabolites, for example, and we believe that we can — it opens up a whole new world of investigation. Because you can look anywhere for your solution, and not only under the streetlamp.
So I think that’s an apt analogy to the situation, because current techniques can only identify such a small fraction of biological molecules — metabolites in particular.
David Brühlmann [00:05:21]:
Yeah, and I think it will open a lot of avenues in disease detection, because you’re making such a good point. If you go to the doctor, they have their program, and we’ve been measuring the same biomarkers for many years.
But now, I think with this expanded capability, we’ll be able to pick up a lot more metabolites. And on top of that, with AI, I imagine the possibilities will be endless.
So I just have a follow-up question about where you think AI will take this. And also, I think another very practical aspect is that with omics or with mass spectrometry, you usually need huge libraries to correctly identify your molecule. How does that work with your technology?
Tom Rizzo [00:06:05]:
You’re right. If you want to identify a molecule from its infrared spectrum, you need to have a library of infrared spectra — and, in fact, we have mechanisms for that.
Normally, you would think that you need a standard, and getting standards for different isomeric molecules can be extremely difficult, if not impossible. So this could potentially be a problem.
But we’ve developed techniques to interpret infrared spectra of molecules without having a standard. Here’s how it works: let me take the example of glycans, because that’s one we’ve done a lot of work on. You measure the infrared spectrum of a parent molecule, and you find that it’s not in your database of spectra. Well, what do you do?
What we do then is fragment the molecule. We can measure the infrared spectra of the different fragments. Usually, you can find fragments that are characteristic of one isomer or another. So if you’re looking for a substitution on one branch or another branch, you break it off and then you see just that branch — not the rest of the molecule — and you can determine which species is on there.
Then, for that smaller molecule, you ask: is its infrared spectrum in our database? If it is, then you’ve identified it. And from that, you can say, “Okay, now we’ve identified the parent molecule, because we know that substitution was on this branch.”
If, after the first fragmentation, you still can’t identify the molecule, you can fragment again and measure the infrared spectrum of that smaller piece. You can continue doing that until you reach small enough species that are in our database.
Once you’ve identified the infrared spectrum of the parent molecule, you don’t have to repeat this fragmentation process — you just do it once. It’s now in your database. If, later, you have a larger molecule that fragments back to that known one, you only need to go as far as finding that fragment in the database.
So, we have a mechanism by which we grow the database from smaller species to larger species — by taking large molecules, breaking them down until we reach fragments for which we already have data. That’s the general procedure for analyzing large, more complex molecules.
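The bottom-up, database-growing procedure Tom describes can be sketched as a short recursive loop. Everything below is a toy stand-in (strings for molecules, the string itself as its "spectrum," dropping the last residue as fragmentation); it illustrates the logic only, not ISOSPEC's actual workflow.

```python
# Toy sketch of the fragment-until-known identification loop. A molecule is a
# string of residue codes; its "spectrum" is the string itself (unique, like a
# cold IR fingerprint); fragmentation drops the last residue. All illustrative.

def identify(molecule, database, fragment, spectrum_of):
    spec = spectrum_of(molecule)
    if spec in database:
        return database[spec]              # known spectrum: one lookup, done
    # Unknown: fragment, and identify the smaller piece first (recursing
    # further down if that fragment is unknown too)
    child_id = identify(fragment(molecule), database, fragment, spectrum_of)
    # Infer the parent from its identified fragment, then grow the database
    parent_id = child_id + "+1 residue"
    database[spec] = parent_id
    return parent_id

db = {"G": "glucose"}                      # seed with small, known species
ident = identify("GGG", db, lambda m: m[:-1], lambda m: m)
# Every intermediate spectrum ("GG", "GGG") is now registered, so a second
# identification of "GGG" returns immediately from the database.
```

Note how the database only ever grows: each pass over an unknown registers every intermediate on the way back up, which is exactly the "grow the database from smaller species to larger species" mechanism described above.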
Now, in the case of smaller molecules, like metabolites, we rely to some degree on computed spectra — quantum chemistry calculations. For smaller species, these calculations are actually quite accurate. That’s what helps us on the smaller end of the spectrum. And there are now AI-based techniques that enhance quantum calculations by matching them with measured infrared spectra.
So that’s one way we’ll be using AI — in combination with quantum chemistry calculations to improve the matching of computed and experimental infrared spectra.
But we also use AI in other ways. As you asked — once we get all this information about metabolites, what do we do with it? How do we use those measurements to increase our understanding of disease mechanisms?
There, we can use AI together with all this analytical information — alongside more traditional omics data — to integrate everything and help medical researchers better understand mechanisms of disease. We really think AI will help bridge the gap between analytical measurements and the understanding of disease mechanisms.
David Brühlmann [00:09:23]:
Yeah, that’s where I see both the opportunity and the challenge. I’ve had several conversations with my former boss about omics, and the question was always the same: It’s great — we’re able to generate tons of data — but it’s very difficult to draw meaningful conclusions from it.
What does it actually mean? For instance, at that time we were optimizing bioprocesses — we could see that there were changes, but what did those changes really mean? What should we change to actually make the process better?
So I think AI will definitely help us make much better decisions. My question now is: what is your vision for the technology? And as you’re combining this with AI and other technologies, where do you see this going in the next few years?
Tom Rizzo [00:10:09]:
We see it going in a couple of different directions.
One is that we’re establishing a platform that we can provide as a service — a platform to help medical researchers and pharmaceutical companies analyze biological samples with the goal of learning more about disease and understanding disease mechanisms.
So on one hand, we provide a service: people can send us samples, and we can not only give them data — not just an Excel sheet with “what’s in their sample” — but, using AI, we can also help them interpret those results. We can give them the tools to start analyzing and uncovering disease mechanisms. In other words, we would provide not just data, but a data and analytics platform equipped with AI tools to help interpret and visualize the findings.
On the other hand, we also want to put these tools directly into the hands of researchers. As I mentioned before, we’re working together with Agilent to produce a commercial instrument, allowing people to access this technology themselves and apply it in ways we might not even imagine. So we see it going in that direction.
Additionally, we have the ability to produce databases of infrared spectra using our current instruments. That means we can also offer a specialized service — developing custom spectral databases in different domains, because we have the expertise and infrastructure to do so.
David Brühlmann [00:11:28]:
When do you think this device will be available? Are we talking months, years — what’s your current timeline?
Tom Rizzo [00:11:38]:
So, we currently have a service that—
David Brühlmann [00:11:40]:
—you already offer as a service, yes.
Tom Rizzo [00:11:42]:
Exactly. We’re already working with hospitals, and we have collaborations here in Switzerland, Germany, and Belgium. But for people to actually get their hands on the instruments, it usually takes a couple of years to go from a prototype to a commercial product. An add-on module to existing instruments might become available relatively soon, but for a fully integrated instrument, I’d say we’re still a couple of years away.
David Brühlmann [00:12:05]:
Now I’d like to bridge your academic career with your current entrepreneurial career. A lot of our listeners are either in academia or they’re startup founders, so I’d love to get your take on this. You’ve seen both worlds — and the transition between them — and you’ve worked with many scientists throughout your career at EPFL.
What advice would you give to a scientist, a PhD student, or a postdoc who would now like to start his or her own company?
Tom Rizzo [00:12:37]:
As I said before, it’s a completely different world, right? And there’s a lot to learn — I’m still learning quite a bit. You really have to be passionate about what you’re doing. It’s not for the faint-hearted. Many startups just don’t make it, and one has to go in realizing that.
But from a very practical standpoint — something we’ve experienced personally — we only had about one year of overlap between my academic laboratory and the company, because of my retirement. Once you’re out on your own and no longer have that academic lab, you lose the ability to do R&D in the same way you did before.
That’s a real drawback. R&D in a startup is tough, because investors want to see how you can make money. Just doing R&D to improve your technique doesn’t generate income. If a professor spins off a startup while continuing to run an academic lab, and can do more fundamental R&D there while the company focuses on commercialization — that’s a huge advantage. Because the weight of doing R&D within a startup can be heavy.
There are also competing interests — someone might want to use the machine to understand the technology better, but at the same time, you have to deliver results for your customers. So that’s one very practical piece of advice I can offer.
David Brühlmann [00:13:52]:
Excellent. Tom, at the beginning of our conversation, you talked about legacy — about the purpose behind this new season as an entrepreneur. I’m really curious about that. You decided to start this new adventure with the company — what is giving you your sense of purpose now? And what legacy do you hope to build through this entrepreneurial chapter?
Tom Rizzo [00:14:11]:
As I look at things now, what drives me to some degree is life experience. You know, my father died of colon cancer. I’ve had friends with prostate cancer. My wife had leukemia. As you go through your career and get older, you see people — friends, colleagues — who suffer from these diseases, many of which, if diagnosed early enough, could have been treated effectively.
Fortunately, my wife’s leukemia was treated with a stem cell transplant, which was completely successful. She’s 100% in remission, and we’re deeply grateful for that. So the whole idea of early diagnostics is something that drives me. Can we find biomarkers that allow us to detect disease years before it develops?
If our technology helps in the early diagnosis of even one of these diseases, I would consider that a tremendous success. That would be a legacy worth leaving.
David Brühlmann [00:15:09]:
Wow. I love that. Very powerful. This has been great, Tom. Before we wrap up, what burning question haven’t I asked — something you’re eager to share with our biotech community?
Tom Rizzo [00:15:21]:
I think you’ve covered it pretty well, David. Offhand, I can’t think of any burning question you haven’t asked.
Actually, maybe not a burning question — but a technical one that you didn’t ask: how do we actually measure the infrared spectrum?
David Brühlmann [00:15:38]:
Oh yes, good point — tell me!
Tom Rizzo [00:15:41]:
Now, if you think about it — how can you make a technique that sensitive? Let’s consider the physics for a moment. You have a light source — maybe it emits 10¹⁷ photons per second. Let’s say you have a thousand molecules in your sample. If you try to detect absorption directly, you’re comparing 10¹⁷ photons to a change of only a thousand. That difference — 10¹⁷ minus 10³ — is effectively still 10¹⁷. So, you can’t detect the absorption of just a few molecules that way.
So how do we measure the infrared spectrum of samples inside a mass spectrometer when there are so few ions — sometimes as few as a thousand? The answer is: we use a special technique called messenger tagging. We attach a weakly bound tag — typically nitrogen (N₂) — to the molecule, because nitrogen binds weakly to ions. We then look at the mass of the molecule plus this tag.
When the molecule absorbs infrared light (at a specific vibrational frequency), that energy redistributes and causes the messenger tag to detach — or “pop off” — the molecule. We can detect that change in mass: the molecule without the nitrogen tag is lighter by 28 daltons. By measuring the ratio of tagged to untagged ions as a function of laser frequency, we can build an infrared spectrum. That’s how we achieve such high sensitivity — we’re not directly measuring photon absorption, but rather a consequence of it, which we can detect with exquisite precision. So, maybe not a burning question — but definitely one worth asking!
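As a rough numeric illustration of the messenger-tagging measurement: scan a laser across frequencies, and wherever the molecule absorbs, tagged ions (parent plus N₂, 28 Da heavier) lose their tag, so the tagged-ion count dips. The Gaussian line shapes, band positions, and widths below are invented for illustration, not real spectroscopic data.

```python
import math

# Illustrative messenger-tagging scan: tagged ions survive unless the laser
# hits a vibrational resonance, where the N2 tag "pops off." Resonance
# positions and line widths are made-up numbers.

def tagged_survival(freq, resonances, width=5.0):
    """Fraction of tagged ions surviving at a given laser frequency (cm^-1)."""
    depletion = sum(math.exp(-((freq - r) / width) ** 2) for r in resonances)
    return max(0.0, 1.0 - depletion)

def ir_spectrum(freqs, resonances):
    """Depletion (1 - survival) vs. frequency: the IR-absorption proxy."""
    return [1.0 - tagged_survival(f, resonances) for f in freqs]

freqs = [3000 + step for step in range(0, 800, 4)]   # scan 3000-3796 cm^-1
spectrum = ir_spectrum(freqs, resonances=[3250.0, 3400.0, 3650.0])
peak = freqs[max(range(len(spectrum)), key=spectrum.__getitem__)]  # deepest dip
```

The construction makes Tom's point concrete: nothing here counts absorbed photons. Only the tagged-to-untagged ion ratio is measured, and a mass spectrometer measures ion counts with single-ion sensitivity.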
David Brühlmann [00:17:42]:
Wow, that’s actually really important — and fascinating! I’m glad you explained that. So, as we wrap up, Tom — what’s the most important takeaway you want our listeners to walk away with?
Tom Rizzo [00:17:55]:
I think the most important takeaway is that soon, you will no longer be restricted to analyzing only a small fraction of molecules in a biological sample.
By adding this new data dimension, we’ll be able to gain a much more comprehensive view of all the chemical changes occurring — for example, when a drug is metabolized, we’ll be able to observe all the metabolites, not just a selected few. This extra data dimension will fundamentally change how we do drug development and disease diagnostics.
David Brühlmann [00:18:33]:
Fantastic. Tom, where can people connect with you, learn more about your company, or explore your services — and maybe even get their hands on your instrument once it’s available?
Tom Rizzo [00:18:46]:
We have a website: www.isospecanalytics.com, or you can send me an email at tom@isospec.ch — that’s the simplest way to reach me.
We’re also on LinkedIn — just search for ISOSPEC Analytics. Through our website or LinkedIn, you’ll find updates and announcements — for example, when new instruments become available. We already announced our collaboration with Agilent, and we hope to share more about our next steps soon.
David Brühlmann [00:19:14]:
Well, thank you so much, Tom, for sharing your passion and purpose — and for explaining how you’re transforming the analytical space. It’s been a huge pleasure talking to you today and having you on the show.
Tom Rizzo [00:19:29]:
Well, it’s been my pleasure, David. I look forward to listening to more of your podcasts as well.
David Brühlmann [00:19:34]:
Thank you for joining me for this two-part conversation with Tom Rizzo. I hope you’re walking away with fresh insights on how advanced analytical techniques can solve real problems in bioprocess development and therapeutic characterization.
If this episode sparked ideas or answered questions you’ve been wrestling with, I’d love to hear about it. Please leave us a review on Apple Podcasts or your favorite platform.
Until next time — keep doing biotech the smart way.
Alright, smart scientists — that’s all for today on The Smart Biotech Scientist Podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery. If you enjoyed this episode, please leave a review on your favorite podcast platform — it helps us reach and empower more scientists like you. For additional bioprocessing insights, visit www.bruehlmann-consulting.com. Stay tuned for more inspiring biotech discussions in our next episode. Until then — let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Tom Rizzo
Tom Rizzo received his PhD in Physical Chemistry from the University of Wisconsin–Madison in 1983, following his undergraduate studies at Rensselaer Polytechnic Institute. After postdoctoral research at the University of Chicago, he joined the University of Rochester before moving to the École Polytechnique Fédérale de Lausanne (EPFL), where he became Professor of Chemistry and later served as Dean of the School of Basic Sciences.
His work focuses on integrating laser spectroscopy, ion mobility, and mass spectrometry to advance biomolecular analysis. Upon retiring from EPFL in 2023, he assumed the role of Chief Scientific Officer at ISOSPEC Analytics, a company applying his research to biomarker discovery and molecular diagnostics. His achievements have been recognized with the Bourke Award, the Ron Hites Award, and an ERC Advanced Grant, and he is a Fellow of both the American Physical Society (APS) and the American Association for the Advancement of Science (AAAS).
Connect with Tom Rizzo on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
The identification and characterization of biological molecules is central to the biotech industry, particularly in drug development, bioprocess optimization, and advanced analytics. Yet, despite revolutionary advances, many scientists still struggle with the limitations of traditional tools like mass spectrometry (MS) and nuclear magnetic resonance (NMR).
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Tom Rizzo, Professor Emeritus at EPFL and Co-Founder and CSO at ISOSPEC Analytics, a start-up spun off from his laboratory that aims to simplify molecular identification.
Mass spectrometry is an incredibly sensitive tool. NMR is a fantastic method for analyzing and identifying molecules, but you need a lot of material to acquire an NMR spectrum. A mass spectrometer, on the other hand, detects ions—and because you can detect ions, it’s incredibly sensitive.
You can essentially detect individual ions by doing spectroscopy inside a mass spectrometer. We were able to measure well-resolved infrared spectra with the sensitivity of mass spectrometry. And that’s what makes it really unique.
David Brühlmann [00:00:34]:
Welcome to The Smart Biotech Scientist. I’m your host, David Brühlmann. Today’s guest is Professor Tom Rizzo, former Dean of the School of Basic Sciences at EPFL in Lausanne, Switzerland, and now Chief Scientific Officer at ISOSPEC Analytics.
We’re diving into cryogenic infrared ion spectroscopy—a game-changing analytical technique that could revolutionize how you identify unknowns, characterize glycans, and solve those frustrating structural puzzles in your bioprocess development.
If you’ve ever struggled with molecular identification using traditional mass spec alone, this conversation will open your eyes to what’s possible today. Let’s get started.
Welcome, Tom—it’s great to have you on today.
Tom Rizzo [00:02:35]:
Thanks, David, for having me. I’ve listened to a few of your podcasts and really enjoyed them, so it’s an honor to be here.
David Brühlmann [00:02:42]:
The pleasure’s mine, Tom. To start, share something you believe about biomolecular analysis that most people would disagree with.
Tom Rizzo [00:02:51]:
I don’t like to be too controversial, but I’d say there hasn’t been a fundamentally new commercially available technology for the analysis of biological molecules in the past 10 years—probably not since the introduction of ion mobility into mass spectrometry.
David Brühlmann [00:03:08]:
All right—now we’re getting into the nitty-gritty of analytics! I’m excited to unpack that. But first, I’d love to hear your story, Tom—what got you started in science, what sparked your passion for chemistry, and what were some pivotal moments throughout your long and distinguished academic career?
Tom Rizzo [00:03:31]:
This will date me a little bit, David, but I grew up in the ’60s, and I grew up in New York. In 1964–65, there was a New York World’s Fair not too far from where we lived. My parents would take us almost every other weekend to this World’s Fair. All of the big technology companies had buildings with presentations and demonstrations—there was IBM, there was Bell Labs, there was DuPont—and I was just fascinated by it all.
I remember very, very distinctly the DuPont exhibit, where they did chemistry demonstrations. I was fascinated by this. In fact, many years later, I performed those same demonstrations in front of my chemistry class at university, and even in my kids’ elementary school classes. That really sparked my interest in science.
I knew I wanted to be a scientist, but I wasn’t sure whether it would be chemistry or not. In fact, I thought I would be a physicist—I thought my mathematics background was pretty good—so I was planning to go into physics. But I took an Advanced Placement course in chemistry in my last year of high school. Part of the course involved being given small vials with unknown samples, and we had to determine what was in them. This was really the start of my interest in analytical chemistry, if you will.
The way we were graded was that the teacher would give you a piece of paper with tabs on it, and each tab had a different answer. If you pulled the first tab and got it right, you received the highest grade. If it took two pulls, your grade went successively lower. I set up a little laboratory in my room at home to run these chemical tests. That really grabbed and sparked my interest in chemistry.
So I decided to study chemistry at university, but I was oriented more toward the physical side of chemistry. When I went on for a PhD, I pursued physical chemistry, but I also did a minor in physics. I’ve always kept that interest in physics. But deep down, my interest in analytical chemistry was there from the early days.
When I started my PhD, I was doing something very “physics-y,” in a sense. I was really looking at molecules by how they interacted with light. The whole field of spectroscopy fascinated me. I really liked the tools of physics. I thought lasers were cool—that’s what I pursued.
By the time I finished my PhD, I asked myself: how large a molecule could we get interesting information from using spectroscopy? This led me to a postdoc at the University of Chicago with Don Levy, a distinguished professor who developed techniques to put large molecules into the gas phase and measure their spectra. We measured the first spectrum of an isolated amino acid in the gas phase at low temperatures, and that was kind of my start.
When I took my first faculty position at the University of Rochester, my goal was to combine mass spectrometry and laser spectroscopy as a new way to study biological molecules. But I had no reputation in mass spectrometry at all. I was trying to get my grants funded, and no one wanted to fund this kind of work. I got close at the National Institutes of Health and close at the National Science Foundation, but no one would fund it.
I had to step back, since I needed to get tenure as an assistant professor. I returned to studying the physics of smaller molecules and managed to do well enough to earn tenure. Shortly afterward, I was approached about taking a position at the École Polytechnique Fédérale de Lausanne (EPFL). I applied, got the position, and we packed up our family and moved to Switzerland. There, I continued my work looking at the chemical physics of laser-excited molecules.
But I always had in the back of my mind the experiments I wanted to do as an assistant professor—combining laser spectroscopy and mass spectrometry—I just never had the funding.
An opportunity came at EPFL, where I had spent two years as a department head. I said I would never do that job again—it was really uninteresting. But the president of the institution changed and asked me to take over as department head in chemistry again. I agreed, on one condition: that I would finally be able to do the experiment using laser spectroscopy and mass spectrometry, which I had never been able to fund.
This was in the year 2000. I told him, if you give me the money to build this machine, I’ll serve another term as department head. He agreed. I wrote up a project and obtained 400,000 Swiss francs to build a new machine.
That was the beginning of the end for me, in the sense that ever since, I’ve been involved in the spectroscopy of biological molecules in combination with mass spectrometry.
David Brühlmann [00:08:26]:
Wow, what a fascinating story—and I love your persistence throughout your journey. And just a little fun fact: Tom was actually my professor at EPFL! I’m really happy to reconnect after all these years.
I still remember your physical chemistry class, which I really enjoyed. There’s one thing that has stayed with me to this day—the “umbrella flip” of the ammonia molecule in quantum mechanics. That was such a great class! Hard to believe it’s been about 20 years—I still remember that example vividly.
Now, throughout your career—you recently wrapped up your time at EPFL. What led you to become the Chief Scientific Officer of ISOSPEC Analytics instead of enjoying a well-deserved retirement? How did this transition come about?
Tom Rizzo [00:09:17]:
Good question. As you approach retirement, you start looking back on your career and asking, What has it all been good for?
Most of my work over the years has been curiosity-driven research. I was fascinated by what we could learn about large molecules through spectroscopy at low temperatures. Early on, I wasn’t really thinking about practical applications or commercial products. But toward the end of my academic career, I began reflecting on my scientific legacy.
I developed a real passion for seeing the techniques we’d developed in the lab make their way into practical use—to become part of a commercial instrument that could help people analyze biomolecules or even diagnose disease. That became very motivating for me.
I had a couple of postdocs in my group who shared that same passion. So, about a year before my formal retirement from EPFL, we founded ISOSPEC Analytics, and I took on the role of Chief Scientific Officer. When I officially retired a year later, I maintained my affiliation with ISOSPEC and left EPFL.
Honestly, it turned out to be a great way to retire. When you leave an institution after 30 years, from one day to the next you go from being “somebody” to being “nobody,” right? But at the company, I felt my expertise and experience were still highly valued. It was a wonderful transition from academia to industry.
And the world of startups is completely different from academia—there’s so much new to learn, and I found that very stimulating.
David Brühlmann [00:10:48]:
That’s inspiring, Tom. So let’s unpack the technology itself and dive into the details of what you’re developing. You’re working on a novel infrared spectroscopy method—but infrared spectroscopy has been a cornerstone analytical technique for decades.
Can you explain the fundamental breakthroughs you and your team have achieved? How is your approach different from traditional IR methods?
Tom Rizzo [00:11:14]:
There are several key differences. You’re right—infrared spectroscopy has been around for decades and has been a cornerstone of analytical chemistry. But traditional infrared spectroscopy, especially in the condensed phase, has some major limitations.
In the gas phase, infrared spectroscopy can be extremely detailed and precise because molecules are isolated. But for biological molecules—typically studied in solution or in complex matrices—the spectra are often broadened and distorted by environmental interactions. The vibrational modes are influenced by their surroundings, so the spectral bands become broad and less uniquely characteristic of a specific molecule.
If you can instead bring large molecules—like peptides or sugars—into the gas phase and remove them from that solution-phase environment, the spectra become much simpler. The lines sharpen significantly.
Then, if you cool those gas-phase molecules to cryogenic temperatures, the spectra become even simpler and sharper, because thermal motion is minimized.
Cryogenic spectroscopy can also be done in rare-gas matrices, where you can achieve very sharp spectra—but that’s a labor-intensive process and not practical for analytical applications. So our goal was to make this simplification accessible in a practical, analytical way: by cooling isolated ions to cryogenic temperatures in the gas phase.
And perhaps the most important innovation is that we can measure infrared spectra with the sensitivity of mass spectrometry.
Mass spectrometry is incredibly sensitive. NMR is also a fantastic tool for analyzing and identifying molecules, but it requires much larger sample amounts. In contrast, mass spectrometry can detect individual ions.
By doing spectroscopy inside a mass spectrometer, we can essentially perform spectroscopy on single ions—and obtain well-resolved infrared spectra with the unmatched sensitivity of mass spectrometry. That’s what makes our approach truly unique.
David Brühlmann [00:13:33]:
Just to make sure I fully understand—your method—is it using mass spectrometry, or is it something separate?
Tom Rizzo [00:13:41]:
It’s actually performed inside a mass spectrometer. We have a cryogenic cell integrated into the instrument. You can separate ions using liquid chromatography, ion mobility, or any technique that couples to mass spectrometry. Once the ions are separated, we send them into the cryogenic cell and measure their infrared spectra. The molecules are mass-selected—they could also be mobility-selected or LC-selected—but it’s all done inside the mass spectrometer.
David Brühlmann [00:14:12]:
Ah, I see. Could you recap the main advantages over traditional MS-based methods? Are the differences at the sample, matrix, or quantity level?
Tom Rizzo:
MS/MS is used very widely for the identification of molecules, and it’s a very effective technique. However, it has some drawbacks. For example, when you have different isomers—molecules that have the same mass but differ only very slightly—MS/MS can struggle. The position of a hydroxyl group on a molecule, for instance, or in the case of sugars, which have enormous isomeric complexity, distinguishing just the linkage between two sugars can be extremely difficult. Molecules that are so similar can fragment in similar ways, making it hard to distinguish isomers by their MS/MS spectra.
There was an interesting paper within the last year by a group in Nijmegen. Often, to interpret MS/MS spectra, you use calculations to predict what the fragmentation products would be. They compared these predictions with actual measurements of fragments made using spectroscopy, and found that, in a large majority of cases, the calculations were simply wrong. It’s just too difficult to calculate fragmentation patterns accurately enough to be certain about assignments using MS/MS techniques.
By contrast, the infrared spectrum of a molecule cooled to very low temperatures provides an absolute, distinguishing fingerprint. No two molecules will have exactly the same spectrum if measured at high enough resolution. A cryogenic infrared spectrum, using the technique we call CIRIS—cryogenic infrared ion spectroscopy—can even distinguish subtle differences, like whether a hydroxyl group is pointing up or down. That small difference is enough to shift the vibrational frequencies of a molecule and differentiate one isomer from another.
This makes the technique extremely sensitive, and it also gives a high degree of certainty in identification. It introduces a completely new data dimension. Earlier, you asked me about my controversial statement regarding biomolecular analysis, when I said there are no new technologies. What I meant is that there hasn’t been a new data dimension for analyzing molecules—until now.
With a cryogenic infrared spectrum, you add that new dimension. In addition to retention time, mass, and fragmentation patterns, you now gain an independent, orthogonal measurement. For every mass, you get a cryogenic infrared spectrum—a barcode or fingerprint of that molecule. It’s truly a new dimension in biomolecular analysis.
David Brühlmann [00:17:01]:
So, in layman’s terms, it’s almost an orthogonal method—another dimension of data that complements traditional mass spec.
Tom Rizzo [00:17:12]:
Exactly. You get all the information from traditional methods, plus this new dimension. And we’ve optimized the technique so that it doesn’t significantly increase measurement time—you get this extra layer of data essentially for free.
David Brühlmann [00:17:29]:
How well does this integrate into traditional workflows? Many biotech companies use LC-MS setups—would this method be compatible or disrupt their processes?
Tom Rizzo [00:17:43]:
If I can digress for a moment: one lesson I’ve learned moving into industry is that having the best technique isn’t enough. People have established workflows, and changing those is very difficult. Going into a pharmaceutical company and telling scientists to do things differently can be a tough sell.
So our goal is to integrate as seamlessly as possible. We don’t yet have a commercial instrument, but we’re collaborating with Agilent to bring our technology to the market. In the meantime, we have a purpose-built instrument in our lab.
The idea is that you can continue your existing workflow—run LC-MS as usual—but inside the mass spectrometer, a laser scans as the peaks elute. You analyze the mass and measure the infrared spectrum simultaneously. The perturbation to the workflow is almost invisible. On your screen, you’ll see the new infrared spectrum plotted alongside your regular data.
Modern lasers make this feasible. When I was a PhD student, lasers were huge, complicated, and required a physics degree to operate. Now, the laser is fully integrated inside the instrument—there’s nothing to adjust. It’s completely transparent and doesn’t impede the workflow. It simply adds this new dimension of data.
David Brühlmann [00:19:31]:
That wraps up part one with Tom Rizzo. We explored the fundamental science behind cryogenic infrared ion spectroscopy and how it addresses real analytical challenges in the lab.
In part two, we’ll dive into practical applications for biomarker discovery, therapeutic development, and how this technology is becoming accessible for biotech companies like yours.
If you found this episode valuable, please leave us a review on Apple Podcasts or wherever you listen. Thank you for joining us on your journey to bioprocess mastery. For additional bioprocessing tips, visit us at www.bruehlmann-consulting.com.
Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Tom Rizzo
Tom Rizzo earned his PhD in Physical Chemistry from the University of Wisconsin–Madison in 1983 after completing his undergraduate studies at Rensselaer Polytechnic Institute. Following postdoctoral work at the University of Chicago, he joined the University of Rochester and later became a Professor of Chemistry at the École Polytechnique Fédérale de Lausanne (EPFL), where he also served as Dean of the School of Basic Sciences.
His research combines laser spectroscopy, ion mobility, and mass spectrometry for biomolecular analysis. After retiring from EPFL in 2023, he became Chief Scientific Officer at ISOSPEC Analytics, a start-up focused on biomarker discovery. His honors include the Bourke Award, Ron Hites Award, and an ERC Advanced Grant, and he is a Fellow of the APS and AAAS.
Connect with Tom Rizzo on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us
Carbon neutrality pledges echo across the biopharma industry, but the question lingers: how do you actually measure and shrink your true environmental impact, when most data is missing and every facility operates on a different baseline?
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann welcomes Niklas Jungnelius, a veteran in process modeling and sustainability at Cytiva, who’s spent years uncovering what really drives emissions—and how small process changes can have outsized effects.
Typically, when you do a life cycle assessment (LCA), you look at different damage categories. There are many aspects of environmental sustainability—perhaps the most important one at this point in time being carbon emissions.
But if you’re producing your product in an area where there’s a water shortage, that may be your focus area, because our industry is highly water-intensive. Whereas if you have plenty of water available, you may focus less on that.
Then there are other aspects, such as resource depletion. How many resources are you consuming in your process? It’s quite obvious, with all the single-use plastics we generate, that consumables are a major focus area.
David Brühlmann [00:00:45]:
Welcome back. In part one, Niklas Jungnelius opened our eyes to the hidden economics of bioprocessing. Now, in part two, we’re shifting gears to tackle the elephant in the room—sustainability.
How do you actually quantify environmental impact? What’s stopping biotech from hitting those ambitious carbon-neutral targets? And with AI and continuous manufacturing changing the game, how do you model technologies that barely have historical data? Niklas brings his process modeling expertise to answer these burning questions. Ready to future-proof your bioprocessing strategy? Let’s go.
I’d also like to focus on another part. Obviously, process economics is important. The other key area is the environment. Many leading biopharma companies have set ambitious sustainability goals—being carbon-neutral by 2030 or 2040, depending on the context. What are the biggest technical and economic hurdles preventing biotech companies from reaching these goals?
Niklas Jungnelius [00:03:06]:
That’s an interesting question. I think if we focus on the relative targets that many companies have set—such as an 80% to 90% reduction in CO₂ emissions compared to a given baseline year—it really depends on what your baseline is.
And that’s not as easy to determine as it sounds, because there’s still a lot of missing data. When we try to assess true carbon emissions, we often have to make assumptions, and over time we’re getting more and more accurate data. So things may shift slightly—it’s somewhat of a moving target.
But the baseline largely determines what your viable reduction options are. For example, if your baseline situation involved heavy use of fossil energy in your operations, that will have a very high impact on your carbon footprint. In that case, the most obvious way to reduce emissions would be to switch to renewable energy sources, which can make a huge difference in your CO₂ footprint and substantially cut emissions.
On the other hand, if your manufacturing facility was already using renewable energy in the baseline year—in Switzerland, for example, where I know, David, that a large portion of the electricity grid is renewable—you already have a cleaner baseline profile. But that also means it’s much harder to achieve those 80–90% reductions.
So you’ll need to focus more on other factors—typically consumable-related emissions. Those are your Scope 3 emissions—the emissions you’re not directly responsible for, but that come from your supply chain and vendors.
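Niklas’s baseline argument can be made concrete with a toy calculation (not from the episode; all figures are invented). The function splits a site’s footprint into on-site energy and Scope 3 shares, then applies the same intervention, switching site energy to renewables, to both profiles:

```python
# Toy illustration (invented numbers): the same intervention, switching
# site energy to renewables, yields very different *relative* CO2
# reductions depending on what dominated the baseline footprint.

def relative_reduction(energy_t, scope3_t, energy_cut=0.95):
    """Fraction of total emissions removed by cutting on-site energy emissions."""
    baseline = energy_t + scope3_t
    after = energy_t * (1 - energy_cut) + scope3_t
    return 1 - after / baseline

# Fossil-heavy site: energy dominates, so the switch nearly hits the target.
fossil_site = relative_reduction(energy_t=8_000, scope3_t=2_000)

# Clean-grid site: Scope 3 (consumables, supply chain) dominates,
# so the same switch barely moves the total.
clean_site = relative_reduction(energy_t=1_000, scope3_t=9_000)

print(f"Fossil-heavy baseline: {fossil_site:.1%} total reduction")
print(f"Clean-grid baseline:   {clean_site:.1%} total reduction")
```

Under these invented shares, the fossil-heavy site gets most of the way to an 80–90% target from the energy switch alone, while the clean-grid site barely moves, which is why its reductions must come from Scope 3.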
As a supplier to the biopharma industry, Cytiva has to work extensively not only to introduce new, more sustainable products with lower carbon footprints, but also to reduce the emissions of existing products. Our customers can’t simply replace every product in their process to make it more environmentally friendly.
So if we, on our end, can reduce emissions from existing products—by, for example, switching to renewable energy, working closely with our suppliers, or identifying alternative raw materials—we can significantly help our customers reach their sustainability targets.
David Brühlmann [00:05:33]:
If we look at the consumables or raw materials we’re using, what comes to mind is obviously single-use plastics—that’s where a lot of the CO₂ footprint lies. But biopharma also uses a lot of water. Are these really the main drivers, or not? I’m curious—what are the real drivers?
Niklas Jungnelius [00:05:54]:
I think it depends on which damage category you’re looking at, because when you perform a life cycle assessment (LCA), you evaluate several different impact categories.
There are many aspects of environmental sustainability — perhaps the most important one at this point in time being carbon emissions. But if you’re producing your product in a region facing water scarcity, then water conservation becomes a key focus area, since our industry is highly water-intensive.
Whereas if you have plenty of water available, you might focus less on that. Then you also have factors like resource depletion — how many resources are you consuming in your process?
If we stay with carbon footprint for a moment, I think that’s a great point — it’s top of mind for most people. It’s very obvious, with all the single-use plastics generated in our processes, that consumables are a major focus area.
However, I would argue that based on the data we have today, our main focus shouldn’t necessarily be on recycling. Even though it’s tempting — and very visual — to focus on those piles of plastics, the relative benefit from recycling, compared to emissions generated during production, is actually quite small.
Also, because biopharma is a relatively small global industry, we face logistical challenges — such as transport emissions — in the recycling chain for these materials.
In contrast, during the production phase, we have large waste streams, and manufacturing in cleanrooms is highly energy-intensive. I think that’s where we’ll find our true hotspots.
And one thing I should add about environmental sustainability is that it often surprises people when they conduct an LCA and discover what the real impact drivers are. Sometimes it turns out to be a single, obscure chemical you’d never expect — yet it has a disproportionate environmental impact. If we can eliminate or replace that chemical, it can make a huge difference to the overall sustainability of a product.
David Brühlmann [00:08:00]:
Let’s get tactical here — some of our listeners working in process development or manufacturing might be thinking, “That’s great, I want to monitor this, but where do I start?” Can you give some advice on how to begin and how to measure these key parameters?
Niklas Jungnelius [00:08:18]:
I think there’s a lot of potential to work more with process modeling. We’re seeing many large biopharma companies now integrating process modeling early in development — to design smarter, more efficient processes.
One key principle is to think about manufacturing from the start. In process development, the time horizon is often limited — especially if you’re a startup without the resources or expertise to fully map your future manufacturing process. But starting with the end in mind — including commercial-scale production — leads to better decisions along the way.
You also need to define the scope and ambition level of your modeling. What benefits do you want to achieve? How much effort is it worth investing at different stages?
Should you model individual unit operations or entire processes? That may depend on whether you already have a platform process.
The level of detail in your models determines how you structure your organization. If you want broad adoption — for example, having people in every PD or MSAT lab trained on process modeling — then the models need to be relatively simple.
But if you want high accuracy and detailed parameter tracking, you’ll likely need dedicated experts who work with process modeling daily. It’s difficult for someone to stay up-to-date with all inputs and assumptions if it’s just an add-on to another role. So for higher precision and more robust results, it’s worth having a few specialists focus on process modeling full time.
David Brühlmann [00:10:05]:
Niklas, what burning question haven’t I asked that you’re eager to share with our biotech community?
Niklas Jungnelius [00:10:13]:
I’d say that there’s no single manufacturing technology that’s superior in every situation. It really depends on your inputs and assumptions.
For example, if you compare fed-batch with continuous manufacturing, the outcome changes drastically depending on the starting titer. If you double the titer, that could make or break the business case entirely.
So asking, “What’s the best manufacturing technology?” is a bit like asking, “What’s the best way to get to work?” The answer is always — it depends. It depends on your traffic, how far you live, what transportation options are available, and how much you’re willing to spend on commuting.
Similarly, in manufacturing, understanding the implications of each option helps you make the best decision for your specific needs. Process economic modeling can guide you toward the manufacturing strategy that best aligns with your individual objectives.
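To put numbers behind this “it depends” point (all figures invented, not from the episode): give each mode a fixed batch cost spread over titer-dependent output plus a per-gram consumables term, and the winner flips as the titer assumption changes.

```python
# Invented numbers: cost per gram = fixed batch cost spread over
# titer-dependent output, plus a per-gram consumables term.

def fed_batch(titer_g_per_l):
    # Large vessel: heavier fixed (capital/labor) cost, cheap consumables.
    return 200_000 / (2_000 * titer_g_per_l) + 3

def continuous(titer_g_per_l):
    # Smaller vessel: lighter fixed cost, but media dominates per gram.
    return 100_000 / (2_000 * titer_g_per_l) + 9

for titer in (2, 4, 10):
    fb, cont = fed_batch(titer), continuous(titer)
    winner = "fed-batch" if fb < cont else "continuous"
    print(f"{titer:>2} g/L: fed-batch ${fb:.2f}/g, continuous ${cont:.2f}/g -> {winner}")
```

Under these invented assumptions, the low-fixed-cost mode wins at low titers, but fed-batch wins at 10 g/L, echoing Niklas’s point that a change in titer alone can make or break the business case.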
David Brühlmann [00:11:12]:
And if we look ahead, Niklas, is there any emerging trend in bioprocessing that could completely change our models or assumptions?
Niklas Jungnelius [00:11:22]:
I think what’s shifting now is how we work with technologies like intensified fed-batch, where productivity in batch bioreactors increases by front-loading the cell expansion phase.
As I mentioned, titer is a key parameter here — and these improvements can actually shift the balance back toward batch manufacturing in certain cases. We’re also seeing hybrid processes — for example, perfusion bioreactors combined with batch downstream processing.
Looking further ahead, we might see new manufacturing technologies such as cell-free expression systems. If we’re talking about transformational change, that’s where it could happen — but those technologies are still quite uncertain and varied, each requiring significant optimization for specific manufacturing contexts.
David Brühlmann [00:12:13]:
We’ve covered a lot of ground today. From everything we’ve discussed, what’s the most important takeaway you’d like listeners to remember?
Niklas Jungnelius [00:12:22]:
From my perspective, it’s about understanding that depending on your focus areas, prerequisites, and ambitions, different solutions will suit your manufacturing process differently. And perhaps, David, I’ll turn the question back to you — have you learned anything today that you think could be useful in your work as a strategic advisor to the biopharma industry?
David Brühlmann [00:12:48]:
Absolutely — that’s a great question, and I like it. One thing I’ve learned is that there’s really no one-size-fits-all solution. It depends on your scale and on the phase you’re in — whether you’re still in the clinical stage or moving closer to commercial manufacturing. The choices you make will differ greatly depending on that context — for instance, whether you opt for single-use systems versus stainless steel, or for smaller versus larger-scale operations.
Another important factor is your modality — ultimately, the volume required and the process productivity determine what’s most suitable. As you said, Niklas, it’s critical to consider these parameters right from the start. Ideally, you aim for as high productivity as possible in your process, because that’s where you can gain a lot — in both time and cost savings.
Niklas Jungnelius [00:13:40]:
Thanks a lot, David. That’s a very good answer — I’m very happy with that.
David Brühlmann [00:13:46]:
That’s great — I passed the test!
Niklas Jungnelius [00:13:48]:
Yes, you did!
David Brühlmann [00:13:51]:
This has been fantastic, Niklas. Thank you for your insights — and for that thought-provoking question at the end. I loved it. Where can people reach you if they’d like to connect or learn more?
Niklas Jungnelius [00:14:03]:
The easiest way is to connect with me on LinkedIn. Perhaps you can include the link in the podcast notes.
David Brühlmann [00:14:09]:
Sure — we’ll do that. Just check out the Smart Biotech Scientist show notes for this episode. You’ll find Niklas’s contact information there. And please, take the opportunity to reach out to Niklas with your process modeling or bioprocess optimization questions. Once again, Niklas, thank you so much for being on the show today.
Niklas Jungnelius [00:14:26]:
Thank you so much, David. It's been a pleasure, and I'm happy to connect with anyone who wants to discuss this very interesting topic further.
David Brühlmann [00:14:34]:
This wraps up our conversation with Niklas Jungnelius from Cytiva. From quantifying environmental footprints to navigating emerging technologies, Niklas has given us a true masterclass in thinking beyond just cost of goods. Remember — every process decision you make today shapes both your economics and your environmental impact tomorrow.
If this episode sparked new ideas, do us a favor and leave a review on Apple Podcasts or your favorite podcast platform. Your feedback helps us serve you better — and thank you for tuning in! Until next time, keep doing biotech the smart way. All right, smart scientists — that’s all for today on the Smart Biotech Scientist Podcast.
Thank you for joining us on your journey toward bioprocess mastery. If you enjoyed this episode, please leave a review on Apple Podcasts or your preferred podcast platform. By doing so, you help us empower more scientists like you. For additional bioprocessing tips and insights, visit us at www.bruehlmann-consulting.com
Stay tuned for more inspiring biotech conversations in our next episode. Until then — let’s continue to smarten up biotech!
About Niklas Jungnelius
Niklas Jungnelius serves as Process Modeling Leader at Cytiva, guiding biopharmaceutical manufacturers, industry groups, and internal stakeholders in evaluating the impact of various process technology options. His work supports organizations in making strategic choices that enhance process efficiency, productivity, and environmental performance.
Niklas earned his master’s degree in Chemical Engineering from Chalmers University of Technology and has more than 25 years of experience in the life sciences sector, including over a decade in strategic roles at Cytiva and GE Healthcare.
Connect with Niklas Jungnelius on LinkedIn.
Is your manufacturing strategy bleeding money in ways you can’t see? The production floor is full of hidden cost traps—capital investments, labor, resin lifetime, and facility flexibility—that often dictate the business fate of biologics and biosimilars.
This episode of the Smart Biotech Scientist Podcast turns the spotlight on process economic modeling: the tool that’s reshaping how manufacturers understand and control cost drivers behind monoclonal antibody and biologics production.
In this episode from the Smart Biotech Scientist Podcast, David Brühlmann meets Niklas Jungnelius, Process Modeling Leader at Cytiva, a global biotechnology leader dedicated to helping customers discover and commercialize the next generation of therapeutics.
The key cost drivers — two of them would be the scale of manufacturing and process productivity. The production volume is critical, as we see a very strong benefit from economies of scale when looking at the manufacturing cost per gram of product or per dose produced.
With a larger facility and larger bioreactors, you can reduce factors such as labor cost and capital cost investment per gram of product produced.
Similarly, if you have higher productivity in bioreactors — meaning higher titers — that’s also very helpful in reducing overall cost.
David Brühlmann [00:00:41]:
Ever wonder if that expensive new technology will actually save you money in the long run — or which process parameters are secretly eating your budget alive? Today, we're diving into the world of process modeling with Niklas Jungnelius, who is a Process Modeling Leader at Cytiva.
With over 25 years in the life sciences industry, Niklas helps biotech companies make smarter decisions about manufacturing economics. Whether you're comparing fed-batch vs. perfusion or trying to justify a capital investment, this episode will change how you think about process costs. Let's jump in. Niklas, welcome — it’s great to have you on today.
Niklas Jungnelius [00:02:37]:
Thank you so much, David. Glad to be here.
David Brühlmann [00:02:40]:
Niklas, share something you believe about bioprocess development that most people might disagree with.
Niklas Jungnelius [00:02:47]:
Sure, David. I would say that I personally do not believe that fully continuous bioprocessing will become the dominant manufacturing mode for new mammalian cell culture processes within the next 15 years.
That may go against what many people in the industry expect, but my main reason comes down to the operational complexity that fully continuous processing adds.
For that reason, I believe that intensified fed-batch processes — offering many of the performance advantages of continuous without the same level of complexity — will remain a very attractive and practical option for most companies moving forward.
David Brühlmann [00:04:02]:
I’m curious, Niklas — what originally drew you into life sciences, and ultimately to process modeling? What were some interesting steps along your career path?
Niklas Jungnelius [00:04:15]:
Yeah, I think it really started back in high school. During my final year, we had a new biology and biochemistry teacher — a man named Anders — who had previously worked at a university doing both research and teaching. He told us all these fascinating things about new methodologies and tools emerging within biopharma and life sciences, and that really caught my attention.
Originally, I had planned to study computer science, but I changed my mind and instead applied to university for chemical engineering and biochemistry. After completing my degree, I explored a few different areas — I worked in marketing, customer service and support, and commercial operations. But about 14 years ago, I applied for a position within what was then GE Healthcare’s Strategic Marketing organization. I was hired and immediately felt at home, initially working with portfolio strategies for chromatography products.
Over time, I expanded my scope to include all types of downstream operations, and as part of that, I became involved in process modeling. I found it fascinating to understand how different technological and operational choices affect performance and economics, and to assess the value and impact of those choices. That analytical aspect really intrigued me, and I wanted to work more closely with it — and also more closely with our end users.
When the company transitioned to Cytiva, I was asked to lead our work in process economic modeling. That sounded very appealing, and in that role, I now collaborate with both internal stakeholders — helping assess the impact and benefits of new technologies, supporting product portfolio roadmapping and technology evaluations — and with external customers, helping them make the best process technology choices for their specific needs. That’s what I do today.
David Brühlmann [00:06:23]:
For those listening who might be new to process modeling, can you explain what it actually involves? What does it do, what’s the purpose, and what are you essentially building when you create these models?
Niklas Jungnelius [00:06:40]:
Yes, absolutely. First of all, I like to clarify that what I do is process economic modeling, not mechanistic modeling. Mechanistic modeling focuses on simulating physicochemical properties — for example, how a chromatography separation behaves under different conditions. In contrast, process economic modeling looks at the manufacturing process from an economic and capacity perspective — things like cost structure, equipment utilization, and increasingly, environmental sustainability.
In these models, we account for all key variables that influence process efficiency and output. The questions can be quite diverse. Sometimes we focus on individual unit operations, which we might model in Excel — that gives us full control over the parameters, calculations, and outputs.
Other times, we model entire manufacturing processes end-to-end. For that, we typically use third-party software, most often BioSolve (by Biopharm Services). BioSolve includes a lot of predefined parameters — which can be customized — and allows us to evaluate both the performance of individual steps and the downstream impact when one operation changes.
In all cases, these models rely on mass balance calculations — we track how much product flows through each step, the duration of each step, and from that, determine equipment sizing, consumable requirements, labor needs, and so on.
We also include assumptions about labor costs, capital costs, and facility investments, which together allow us to estimate the total cost of goods (CoG) with reasonable precision. When modeling environmental sustainability, we extend this by analyzing consumable usage, material composition, and facility requirements — such as cleanroom areas and air handling needs. For instance, larger cleanrooms require more air changes per hour, which in turn drives energy consumption. So all these factors come together to give a holistic view of both economic and environmental performance.
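The mass-balance backbone Niklas describes can be sketched in a few lines. Step names, yields, and per-batch costs below are hypothetical placeholders, not BioSolve data; the point is only how product mass and cost flow through the steps to a cost-of-goods figure.

```python
# Minimal sketch of a process economic model's mass-balance backbone:
# product mass passes through each step with a yield, costs accumulate
# per batch, and cost of goods falls out at the end. Numbers are invented.

steps = [
    # (step name, step yield, cost per batch in USD)
    ("Bioreactor (2,000 L at 4 g/L)", 1.00, 250_000),
    ("Harvest / clarification",       0.95,  30_000),
    ("Protein A capture",             0.92,  90_000),
    ("Polishing chromatography",      0.90,  40_000),
    ("UF/DF and fill",                0.97,  25_000),
]

start_g = 2_000 * 4.0          # grams of product leaving the bioreactor
mass_g, total_cost = start_g, 0.0
for name, step_yield, cost in steps:
    mass_g *= step_yield       # mass balance: yield losses at each step
    total_cost += cost
    print(f"{name:32s} -> {mass_g:7.0f} g")

print(f"Overall yield {mass_g / start_g:.0%}, "
      f"cost of goods ~ ${total_cost / mass_g:.0f}/g")
```

A real model would also size equipment, buffers, and labor from the same mass balance; this sketch keeps only the product-flow and cost-accumulation skeleton.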
David Brühlmann [00:09:21]:
The cost of manufacturing — or more precisely, the cost of goods (CoG) — has gained a lot of attention in recent years, especially as cost pressures on biologics have increased.
Before diving into the details, can you give us a high-level picture of the main cost drivers and how they influence biologic production — or even the price patients ultimately pay for life-saving therapies?
Niklas Jungnelius [00:09:55]:
Yeah, that’s a good question. To answer it properly, I need to take a step back and not just look at modeling the manufacturing cost, because as you know, there are many other costs involved in biopharmaceutical development.
For innovator molecules, the price per dose is not primarily determined by the manufacturing cost — the margins are typically high. That’s because companies need to recover the significant investments made in R&D, clinical trials, regulatory approvals, and commercialization. So traditionally, in the case of innovative biologics, the main cost pain point for biopharma companies has not been at the manufacturing stage.
However, we are now transitioning into a market with an increasing number of biosimilars being introduced. And perhaps I’m a bit mAb-focused here — since monoclonal antibodies are where we do most of our work — but in the biosimilar segment, the landscape is much more competitive. Margins are lower, and the cost to bring these products to market is significantly less than for new, innovative therapies.
As a result, manufacturing cost becomes a much more critical factor for biosimilar producers. Being able to reduce production costs directly improves competitiveness against other biosimilar manufacturers.
That said, whether we’re talking about innovator biologics or biosimilars, the top priority remains having a robust, high-quality process with sufficient production capacity to ensure uninterrupted product supply. Any supply interruption or manufacturing downtime can be extremely costly and damaging to both the business and patients.
David Brühlmann [00:11:33]:
From your modeling work, Niklas — especially as you mentioned for mAbs (monoclonal antibodies) — what parameters have the biggest impact on mAb production costs? And have you found any surprising insights there? Let’s start with fed-batch processes, and then we can move on to other process formats.
Niklas Jungnelius [00:11:51]:
I’d say the key cost drivers — two of the most important — are manufacturing scale and bioreactor productivity. The production volume has a huge impact because we see very strong economies of scale when we look at the manufacturing cost per gram of product or per dose. With a larger facility and larger bioreactors, you can reduce costs like labor and capital investment per gram of product produced.
Similarly, if you achieve higher productivity in your bioreactors — meaning higher titers — that’s extremely helpful in lowering overall costs. This is true for fed-batch manufacturing, but it’s even more critical for perfusion processes, where media consumption represents a much larger portion of the total cost. If you can make that operation more efficient — for example, producing more product per gram of media consumed — it significantly improves the process economics.
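These two drivers can be sketched with a toy cost-per-gram model (all figures invented): spread a fixed batch cost plus volume-dependent variable costs over the grams each batch produces, then vary scale and titer.

```python
# Toy cost-of-goods model (invented numbers) for the two key drivers
# Niklas names: manufacturing scale and bioreactor titer.

def cost_per_gram(volume_l, titer_g_per_l, fixed_per_batch, variable_per_l):
    """Batch cost divided by grams of product per batch."""
    grams = volume_l * titer_g_per_l
    batch_cost = fixed_per_batch + variable_per_l * volume_l
    return batch_cost / grams

# Same titer, 5x the scale: fixed costs (labor, capital) spread over more grams.
small = cost_per_gram(2_000, 3.0, fixed_per_batch=200_000, variable_per_l=30)
large = cost_per_gram(10_000, 3.0, fixed_per_batch=400_000, variable_per_l=30)

# Same scale, doubled titer: cost per gram halves in this simple model.
high_titer = cost_per_gram(2_000, 6.0, fixed_per_batch=200_000, variable_per_l=30)

print(f"2,000 L at 3 g/L:  ${small:.2f}/g")
print(f"10,000 L at 3 g/L: ${large:.2f}/g")
print(f"2,000 L at 6 g/L:  ${high_titer:.2f}/g")
```

Even this crude model shows why economies of scale and titer dominate: both simply enlarge the denominator that every fixed cost is divided by.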
We should also mention the difference between stainless steel and single-use facilities. In stainless steel setups, you have substantial capital investment costs, as well as higher labor costs due to all the cleaning and line clearance activities required.
In contrast, single-use processes require significant spending on plastic consumables — such as bags, tubing, and connectors — but they bring major advantages. They reduce labor needs, shorten turnaround times between batches, and increase facility throughput, which helps improve overall productivity and agility.
David Brühlmann [00:13:33]:
Right, and I imagine that while cost is one driver, single-use systems also give you a lot of flexibility. So it’s always a trade-off between running and maintenance costs versus the flexibility you gain — and also the scale at which you operate.
Niklas Jungnelius [00:13:52]:
Yes, definitely. Over the past couple of decades, we’ve seen a major shift toward single-use technologies precisely because of that increased flexibility.
As you said, you have lower facility construction costs, shorter lead times to get capacity online, and generally more agile operations. At the same time, I’d like to point out that we still see many large stainless steel facilities being built — especially in regions like Korea, the U.S., and Europe — where companies are investing in 10,000-liter or larger stainless steel bioreactors. That’s because the economies of scale at that size are so favorable; for very high-volume products, large stainless steel production can be extremely competitive.
David Brühlmann [00:14:46]:
What about the intermediate scale — say, up to 2,000 liters — where you could use either stainless steel or single-use systems? Are there situations where one is clearly preferable, or is it more of a case-by-case decision?
Niklas Jungnelius [00:15:07]:
It’s very much case by case, but generally speaking, the smaller the scale, the more advantageous single-use becomes.
That’s because stainless steel facilities involve very large fixed investments, almost regardless of scale. And just to clarify, I’m mainly referring here to mammalian cell culture capacity — the picture might look quite different for microbial fermentation.
The flexibility of single-use systems has tremendous value, even if it’s not always easy to quantify. Having the freedom to make late-stage decisions or bring new capacity online faster can reduce the safety margins you need in your supply chain. That agility can make a big economic difference over time.
David Brühlmann [00:15:48]:
Now let’s talk about continuous manufacturing. Even though you’ve expressed some reservations — that not everyone will adopt it because of the added complexity — it does offer economic advantages, especially when moving toward intensified or even end-to-end continuous processes.
From a purely economic standpoint, what are the main differences between these process types? What are the advantages and drawbacks?
Niklas Jungnelius [00:16:20]:
If we start with fed-batch, in the classical setup, your bioreactor productivity isn’t that high because much of the run time is spent on cell growth — expanding the cells to a suitable density before significant product formation begins.
Once you reach downstream processing, you typically have large batch volumes that you can process efficiently — so your downstream utilization is high. However, if you only have a single bioreactor feeding downstream, you’ll still have idle time between runs, which lowers your overall facility utilization and increases your capital cost per gram.
In a perfusion or continuous process, your bioreactor operates at high productivity, delivering product continuously to downstream. But here, the downstream line becomes the limiting factor — it can only operate at the same throughput as upstream. You don’t gain much benefit from trying to speed up individual steps, because everything is connected in real time.
For example, in a continuous mAb process, you might have an upstream productivity of 1 g/L/day, feeding into the capture step. After Protein A capture, you concentrate the product — say, from 1 g/L to 10–20 g/L — which reduces the downstream process volume dramatically. So, if your bioreactor produces 1,000 liters per day, that might translate to only about 2 liters per hour through the later purification steps. It’s a very slow, “dripping” flow, which isn’t the most efficient way to run those unit operations.
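The volume arithmetic in that example can be sketched in a few lines. The numbers (1 g/L titer, 1,000 L/day harvest, concentration to 10–20 g/L) come straight from the discussion above; the function itself is just bookkeeping, not a process model.

```python
# Volume arithmetic for a continuous mAb process, using the numbers from
# the example above: harvest rate, titer, and post-capture concentration.

def downstream_flow_l_per_hour(harvest_l_per_day: float,
                               titer_g_per_l: float,
                               concentrated_g_per_l: float) -> float:
    """Volumetric flow into the post-capture polishing steps, after the
    capture/concentration step raises the product concentration."""
    mass_g_per_day = harvest_l_per_day * titer_g_per_l        # product made per day
    volume_l_per_day = mass_g_per_day / concentrated_g_per_l  # volume after concentration
    return volume_l_per_day / 24.0                            # spread over 24 hours

# 1,000 L/day of harvest at 1 g/L, concentrated to 20 g/L after Protein A:
flow = downstream_flow_l_per_hour(1000, 1.0, 20.0)
print(round(flow, 1))  # ~2.1 L/h -- the slow "dripping" flow mentioned above
```

At the lower end of the concentration range (10 g/L) the same calculation gives about 4 L/h, so the polishing steps see only a trickle either way, which is why speeding up individual downstream units buys little in a fully connected continuous train.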
You could add multiple perfusion bioreactors to the same downstream line to improve utilization, but that brings its own technical and operational challenges. So, in summary, the key question is: Where do you want your productivity gains? In fed-batch, you can keep your downstream line fully utilized by staggering multiple bioreactors. In continuous, you gain upstream efficiency but lose some flexibility in downstream.
One additional point — in clinical or low-frequency manufacturing, resin cost becomes a major factor. Chromatography resins are a sort of hybrid cost item — part consumable, part capital investment — with long lifetimes, often up to 100–200 cycles.
If you only run a few batches and can’t fully use that lifetime, the effective resin cost per batch becomes very high. In continuous or perfusion processes, you can use smaller columns and cycle them repeatedly over longer runs, which reduces the total resin requirement and spreads out the cost more efficiently. So resin cost tends to be a major driver in fed-batch processes, but less so in continuous ones.
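The resin-amortization point lends itself to a quick back-of-the-envelope calculation. The lifetime of 200 cycles is from the discussion above; the resin pack price is a hypothetical placeholder, used only to show how sharply the effective cost per cycle rises when the lifetime goes unused.

```python
# Effective resin cost per cycle when a chromatography resin's lifetime
# (e.g. 100-200 cycles) cannot be fully exploited. The purchase price
# below is hypothetical, for illustration only.

def resin_cost_per_cycle(resin_purchase_cost: float,
                         lifetime_cycles: int,
                         cycles_actually_run: int) -> float:
    """If the resin is retired before its lifetime is exhausted, the whole
    purchase is amortized over only the cycles actually run."""
    cycles_used = min(lifetime_cycles, cycles_actually_run)
    return resin_purchase_cost / cycles_used

# Hypothetical $500k Protein A resin pack rated for 200 cycles:
print(resin_cost_per_cycle(500_000, 200, 200))  # fully used over its lifetime
print(resin_cost_per_cycle(500_000, 200, 10))   # short clinical campaign, 10 cycles
```

Fully used, the resin costs $2,500 per cycle; retired after a 10-cycle clinical campaign, the same pack costs $50,000 per cycle. Cycling a smaller column many times over a long continuous run, as Niklas describes, pushes the economics back toward the first case.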
David Brühlmann [00:20:04]:
That’s a really good point about resin lifetime, especially for clinical-scale manufacturing. Beyond that high resin cost, are there other hidden costs that companies might overlook before doing proper process economic modeling?
Niklas Jungnelius [00:20:20]:
Yes, absolutely — there are several. Some are hard to capture directly in a model, like risk of process failure or the value of operational flexibility. We can include proxy parameters for those, but there’s always a subjective element.
Another common oversight is the cost of underutilized capacity. When launching a new product, companies often build facilities designed to meet future demand projections — say, five years down the line. But if actual sales don’t reach those forecasts, you end up with idle capacity, which is expensive unless you can repurpose it for other products.
This is where flexible, single-use facilities really shine — with shorter build times and modular capacity, you can align production more closely to real demand, avoiding excessive safety margins on capacity.
Also, from a modeling perspective, it’s easy to over-optimize. As a process modeler, I often work in an idealized world — assuming everything performs exactly as expected. But in reality, you need to build in safety margins to account for process variability and unforeseen issues.
So my advice is: make realistic assumptions, don’t over-optimize, and always include reasonable safety factors in your calculations. That gives you a more credible and practical cost estimate.
David Brühlmann [00:22:32]:
Thank you for tuning in. Today we covered the fundamentals of process modeling and the critical cost drivers that can make or break your manufacturing strategy. In Part Two, we’ll dive into sustainability modeling and emerging technologies that are reshaping bioprocess economics.
If you found value in this conversation, please leave us a review on Apple Podcasts or your preferred platform — it helps other biotech professionals discover these insights.
Alright, smart scientists — that’s all for today on the Smart Biotech Scientist Podcast. Thanks for joining us on your journey toward bioprocess mastery. If you enjoyed this episode, please leave a review and visit smartbiotechscientist.com for additional resources and tips. Stay tuned for more inspiring biotech insights in our next episode — and until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Niklas Jungnelius
Niklas Jungnelius is the Process Modeling Leader at Cytiva, where he advises drug manufacturers, industry associations, and internal teams on the implications of different process technology choices. He helps organizations make informed decisions to achieve goals in process economy, productivity, and environmental sustainability.
Niklas holds a master’s degree in Chemical Engineering from Chalmers University of Technology and brings over 25 years of experience in the life sciences industry—half of which has been spent in strategic roles at Cytiva and GE Healthcare.
Connect with Niklas Jungnelius on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Visit the podcast page and check out other episodes.
Want to simplify your biologics drug development project? Contact Us