Complex automation often arrives wrapped in hype, but the reality is more nuanced. Biotech teams wrestle with CMC development, data validation, and the balancing act between risk and innovation. Nobody wants to drown in complexity or bankroll the latest tech trend that solves nothing. So: what matters, what’s just noise, and how do you build systems that actually elevate the process?
This episode features Anthony Catacchio, CEO of Product Insight and a veteran of new product development for medical devices, warehouse logistics, and bioprocess automation. He brings a practical, systems-minded lens, grounded not in technology for its own sake but in designing solutions that fit real-world lab and manufacturing workflows.
Key Topics Discussed
- The difficulty of bioprocess hardware development is often underestimated; unclear problem definitions lead to overengineered solutions.
- Phased robotics and AI development reduces risk, validates assumptions early, and prevents overwhelming teams.
- Industrial robotics principles are moving into biotech labs, where adaptation beats building from scratch.
- Cross-industry experience shows the importance of process-first thinking in system design.
- Bioprocess automation optimizes material flow while protecting and amplifying expert roles.
- Advances in vision systems and AGVs expand automation in high-value lab environments.
- Effective automation starts with rigorous problem definition and clear performance goals.
- Avoiding hype and unnecessary complexity ensures stronger ROI and long-term client trust.
Episode Highlights
- When custom robotics development is genuinely justified — and the conditions that determine whether a large-scale automation investment makes sense for your organization [02:59].
- Tech demos and usability demos: how to test the hardest parts of your system concept in isolation before committing to full development [06:37].
- Minimum testable product vs. minimum viable product: why rushing to viable in hardware development is a costly mistake, and how controlled pilot deployments generate the learning that actually accelerates your program [07:37].
- Why testing in the real operating environment — not a simulated lab setting — is the only way to surface the hidden requirements that will determine whether your automation succeeds or fails [08:29].
- The "go fever" trap: why problems discovered late in development get buried rather than fixed, and how front-loading validation protects both your timeline and your budget [10:16].
- The single most practical question a biotech scientist can ask to determine whether a process is a genuine automation candidate: how much are you thinking while you do it? [16:02].
- Where AI and machine learning deliver real value in bioprocess research — and why the more urgent question is not how to automate a process, but how to redesign it to produce better data [17:59].
- Why capital equipment in biotech labs will need to change fundamentally to collect the volume and quality of data required to make AI-driven insights meaningful [19:01].
In Their Words
You really need to find your problems when you're still at a whiteboard. Once you've developed all the software and done all this work, if your iterations are too slow, you just don’t learn these lessons until it’s too late. And the later it is in the process, the longer it takes to fix and the more financially painful it becomes.
By front-loading as much validation as possible and really pushing to create data: where that data comes from doesn't really matter. We'll always design the most appropriate experiment for the project. But you have to have that data. You have to be willing to try and fail.
Why Most Bioprocess Automation Projects Fail Before the Robot Is Even Ordered - Part 2
David Brühlmann [00:00:38]:
Welcome back to our conversation about robotics and automation. In Part 1, Anthony Catacchio from Product Insight explained his phased approach to hardware development and why a clear problem definition prevents over-engineering solutions.
Now we confront the hard questions. Where does AI genuinely transform bioprocess automation beyond buzzwords? How do you validate functionality through minimal testable products without premature scaling? What does the discovery phase actually uncover about variable bioprocess conditions? And critically, when should early-stage biotech companies automate versus staying manual?
Let’s separate automation wisdom from expensive mistakes. Let’s assume you now have a problem that’s worth solving, one that could cost several hundred thousand dollars or even millions. What is your strategy to develop that solution? Do you follow a minimum viable product approach? Do you focus on prototyping? Do you leverage existing technologies? How do you approach it?
Anthony Catacchio [00:02:59]:
It depends massively. Custom robotics makes sense when you have a really high-value problem and there's just nothing on the market that fundamentally works, so you need to make custom mechanical assemblies or custom software or whatever it is. We have a fair amount of experience doing those kinds of first-party tools. The logic is: I have 500 locations that do this operation all day, every day, so it makes sense for me to invest the money to build the right thing and to build it myself.
There's a fine line in terms of how big your organization has to be, and how much work you need to be doing, to justify that kind of project. But if you are doing that, and you're running a larger-scale automation initiative across multiple sites, then generally speaking, the way we work and the way we try to run those projects and develop those technologies is to do the upfront work with system concept development and then some kind of requirements validation. This varies depending on what the product is. But like I said about system concept development, in a lot of ways it's really just about making sure we truly understand the requirements and that we have them all. We show those concepts to a bunch of different users and stakeholders, because their feedback feeds those requirements.
And when you show somebody a solution, you get much better information than by asking them a generic question. So we go through that, and then we'll often build what we call a tech demo and a usability demo; that's how we think about them. The goal is to take how we plan to solve the hardest parts of the technology problem and essentially just make sure those solutions are feasible. Again, it varies massively depending on the program, because one of the ways we leverage our expertise and experience is to identify which parts of the problem are actually hard. Which technology, mechanism, or operation, if it doesn't work the way we think it will, means none of this works? Go find those 3 or 4 things where, yeah, this is a little weird.
We're pretty sure it's going to work, but if we're wrong, we can't come back from that. Those 3 or 4 things that underpin your system concept and your architecture: go build and test those for real. But only that little piece. We don't need to build a sheet metal box; we know we can make a sheet metal box. But do we know this mechanism will work? We're trying to detach and attach hoses that were actually designed for humans to handle. Can we actually do that? And does the way we intend to do it actually work repeatedly? We might need to prove that up front.
And then on the user side, usually we'll build a system that's completely functional from the user's perspective but a complete lie inside. There might even be a person standing behind a wall pretending to do the things the automation will do. It's sort of like the caricature of the tech demo that a lot of startups do to raise money, except we're doing it very transparently. We tell the client: look, we haven't developed any of this stuff yet. We just want to make sure that when we do, it's going to work for users and accomplish the process. And you can do that in a couple of different ways. Say you're automating a bioprocess workflow and you're planning to put a robot here, and it's going to move like this, and it's going to be this kind of robot.
I can make a person behave like a robot, right? I can give a person the limitations that that robot will have and I can test the system with a bunch of humans because I don't have to develop anything. I can just tell them what their capabilities are and maybe I'll 3D print some stuff to make their hands less useful. Yeah, you gotta pick it up with something that looks like an end effector. We'll find a way to fake it so that we can actually validate that our solution will do what we think it's gonna do and that we've found all the requirements. And then from there, really our next goal is to move to what we call minimum testable product.
Minimum viable product works if you're developing software. If you're developing hardware and robotics, your product isn't going to be viable at first, and you shouldn't go all the way to viable before you get to testable. What we mean by that, particularly if you're doing something that's going to go to higher-volume production (hundreds or thousands of units), is that you want to build a version that very much isn't an engineered product but works, and will work in the real environment. You're not going to test every requirement that way, but you want to develop something that can be tested and deployed. We draw a lot of parallels to agile software development in our process, where you want continuous deployment and continuous improvement.
With hardware, you really can't just like launch a product and then revise it 7 times. That just doesn't work because you can't over-the-air update hardware. So you really don't want to rush to the end. You don't want to rush to minimum viable. You want to get to minimum testable and then do controlled pilot deployments and learn and iterate. And then from there, usually your next deployment is minimum viable, but again, still controlled in pilot settings.
So we do a lot of work in more controlled pilots, because you really want the information, the learning, the iteration; you want your development team to get that feedback. But you can't just launch a product in these spaces. It just doesn't work. Labs don't want to be your beta tester, so you'd better have real data before you deploy into production. And the only way to generate that, and to really validate that your product works, is to build those kinds of hand-held deployments and real testing into your development pipeline. You can't do it on the back end. Software loves to beta test with sellable product, but you can't do that in this space.
David Brühlmann [00:08:21]:
That’s a big challenge in bioprocessing. You need to enter the lab with a well-established product.
Anthony Catacchio [00:08:29]:
Right. And that's hard. Part of how we do it is by simulating the environment (building our robot and running it through simulated environments) and then doing the inverse, like I described with people: simulating the robot and the product in the real environment.
You really need to do both. It's very easy to get trapped in a "well, it works in the lab" mindset. Always having a real product and only ever simulating the environment is a real trap in product development that we see all the time: you get all the way to the end, but you never actually went out to where these things get used and tested there, either testing your robotics for real or bringing the capabilities your robotics provide through people or some other means. You didn't test your process in the real world. You didn't test your system concepts or your requirements in the real world. So when you go to launch or scale or whatever it is you're doing, you've often missed the core requirements that make these things actually work, because there's just nothing like being in the real environment. You're always going to find things the first time you put a product into an environment. You just are. And you have to try to do that early, and you kind of have to eat it on the development side. It's not something you can push onto your customers; it's just not a risk anyone wants to take.
David Brühlmann [00:09:47]:
How do you identify these, quote-unquote, hidden requirements or underlying mechanisms? Because developing the equipment is one thing, and that's already quite a challenge. But in bioprocessing, especially in the upstream process, we work with living cells. So we have a lot of variability, and a lot of things going on that are independent of the technology; as we combine the two, it can get very messy very quickly.
Anthony Catacchio [00:10:16]:
Yeah, I mean, in a lot of ways that just comes down to your test planning. You have to do a lot of testing. And again, this comes back to pushing for a minimum testable product rather than rushing to a minimum viable product. How can we start producing data about what this looks like? Because particularly when you have that much variability, the only real way out is statistics. I need to know your yield, or your success rate, or your failure rate, however you want to look at it.
I need to know what it is today, without this product or these process changes or this automation system applied. Then I need to build a way to simulate this automation system in a world that has all that variability. That's where you would take people, put them in a lab, and have them work the way the automated system will eventually work, because that way you start to tease things out: we can create some data here, and we can see that when we do this the way we're talking about doing it in our automated workflow, oh, this kills yield. We're missing something here. And you learn that in a way that lets you iterate very quickly, before you're overinvested. If you go all the way to everything works, everything's perfect, I've got a shippable product, it's really hard to make those big changes once you find those problems.
And a lot of the time what ends up happening is you get what they call "go fever" in aerospace: you're so far down the line when you find a problem towards the end that no one really wants to fix it, because the investors don't want to hear it. So part of our strategy is political, too. You've got to find these problems up front, or they're just going to get buried, because no one wants to say you've got an architectural issue or a process issue 2 months before you launch, after you've done all the hard engineering. You've got to find that stuff up front, because otherwise you just don't get a chance to fix it. You're going to be wrong. You're always going to be wrong in this world.
Once you put a process into a real environment and stop simulating the environment, you're going to find stuff you didn't fully understand, or variability that no one actually understands. That's always the fun one: the operators on the line just compensate for that variability, but no one ever documents it, right? No one ever sees it. You've got people in the loop who just kind of make it work. Often, when you put automation in, those people go away, and you're like, hey, wait a minute, you were doing stuff no one knew you were doing. So you really have to take out whatever expertise you think you're going to take out of the system and make sure the process still works before you go all the way down the path of fully developing and deploying a product.
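A concrete way to picture the statistical comparison Anthony describes is sketched below: measure today's yield, re-measure it under a human-simulated version of the automated workflow, and test whether the difference is real. All counts are hypothetical, and the two-proportion z-test is our illustrative choice, not a method named in the conversation.

```python
# Hedged sketch: compare baseline yield against yield from a human-simulated
# automated workflow. All counts below are hypothetical.
from math import sqrt, erfc

def two_proportion_ztest(ok_a: int, n_a: int, ok_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = ok_a / n_a, ok_b / n_b
    pooled = (ok_a + ok_b) / (n_a + n_b)          # pooled success rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return z, erfc(abs(z) / sqrt(2))              # two-sided p from the normal tail

# Hypothetical pilot data: current manual process vs. people constrained to
# behave like the planned robot (reach, grip, and motion limits).
baseline_ok, baseline_n = 188, 200     # 94% yield, manual process
simulated_ok, simulated_n = 164, 200   # 82% yield, simulated automation

z, p = two_proportion_ztest(baseline_ok, baseline_n, simulated_ok, simulated_n)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p says the workflow change itself hurts yield
```

A significant drop caught at this stage is cheap to act on; the same discovery after all the hard engineering is done is exactly the "go fever" scenario Anthony warns about.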
David Brühlmann [00:12:40]:
If we zoom out, the ultimate goal is to accelerate bioprocess development and make manufacturing more robust. From your perspective, how does this data-driven development approach accelerate development?
Anthony Catacchio [00:12:58]:
Yeah, it's a lot about what I was just talking about: the idea that you can make changes very quickly if you test early. You really need to find your problems while you're still at a whiteboard. Once you've developed all the software and done all this work, if your iterations are too slow, you just don't learn these lessons until it's too late. And the later it is in the process, the longer it takes to fix and the more financially painful it becomes. So by front-loading as much of that validation as you can and really pushing to create data (where that data comes from doesn't really matter), we'll always design whatever experiment is most appropriate for the project. But you have to have that data. You have to be willing to try and fail.
The “fail fast” term has gotten very polluted and broken because there's so much in the engineering culture that's sort of grown out of software development, and organizations just sort of expect everything to work the way that software works. And it just doesn't. I mean, the way you fail fast is by putting concepts in front of real users, by running trials where you have people instead of robots. Those are the places where you fail. You want to find those big glaring requirements that you missed. And the earlier you find them, the faster you can fix them.
That's again why we lean so hard on validating our requirements early: in a lot of ways that's the hard part, and it's the only part that matters. If you have the wrong requirements up front, you can engineer the world's most beautiful solution, but it does the wrong thing. It doesn't solve the problem, so no one cares.
And so that's really our focus: making sure we're building the right thing and that we understand the broader sensitivity analysis. You want to make the right thing. You also don't want to try too hard. Those are the two things. Why do you want to know the requirements? You want to know which parts are really important, but you also want to know which parts really don't matter because you need to focus on the right aspects of a technology, on the right aspects of a process.
If you don't need a ton of precision somewhere, then don't build that precision. Don't go to the end of the earth refining exactly the placement of something or temperature control of something if it doesn't actually matter. And so that's really it. That's the key in hardware development as we see it: validating that you really understand the problem and that you understand the requirements of what an optimal solution looks like before you engineer that whole thing.
Sort of just assume that you're wrong upfront and continuously work to prove yourself right with real, statistically driven data. It's kind of a “go slow to go fast” approach. And people bristle at it sometimes — like, what do you mean you're going to spend two or three months just drawing pictures and having people pretend to be robots? It's like, yeah, those two or three months are incredibly valuable. Don't skip those. Don't pretend like you know the answers and just skip right to engineering. You will always regret it. And so that's really how we go fast.
David Brühlmann [00:15:47]:
Let's make this very practical. Is there perhaps one or two questions a biotech scientist could ask to quickly determine whether it's worthwhile doing a more in-depth study about a certain problem — whether to automate or not?
Anthony Catacchio [00:16:02]:
I mean, the biggest question is just how consistent of a process is it and how much of the work — you're a scientist, right? You're highly educated. You know what you're doing. How much of what you're doing on a day-to-day basis is you understanding and solving problems versus doing rote, repetitive tasks?
I am not an AI booster, as a general statement. There are a lot of really interesting technologies in machine learning and deep learning right now, and a lot of them have really great applications. But if you know what you're doing, and you feel like you're using your skills and your brain to do a task correctly, and it doesn't work unless you understand what you're doing, chances are that's not a good candidate for automation.
If it's something that — yeah, I do this and when I do it, I'm thinking about what I'm going to make for dinner and what my plans for the weekend are going to be because my brain is off and I'm just moving my body in a way that gets the job done — those things are much, much better for automation.
So in a lot of ways, that's a good way to think about it: how hard do you have to try to do this? How much are you thinking while you're doing this? If you're thinking a lot, chances are automating it is not going to be great because it's a highly variable process and you probably can't ever figure out all the requirements of it.
If it's something that's just rote and repetitive and, man, if I didn't have to do this every day, I'd have another hour to work on problems that actually require my expertise — that's where we want to be from an automation perspective. We want to get that stuff off the plate of these highly skilled researchers because the goal is to get to some sort of treatment that works as fast as possible. That's essentially the goal of biologics development. You want to do as many experiments as you can, as fast as you can. And the more that we can enable researchers to do that, that's really where our value is.
David Brühlmann [00:17:51]:
Before we wrap up, Anthony, what burning question haven't I asked that you're eager to share with our biotech community?
Anthony Catacchio [00:17:59]:
I think one of the biggest things that is sort of on everybody's mind — and I alluded to it a little bit — is this idea of AI in research and how that will develop and where that fits.
I think it's really interesting. Again, there are a lot of really cool applications in robotics, and most of them, like I said, are really around machine vision more than anything else. From a pure research perspective, again, if you have to turn your brain on a lot and problem-solve, AI won't ever really do that — not in the form we have today.
But there are a lot of opportunities for things like data analysis, for things like predicting the outcome of a process, or those sorts of things. One of the things I think a lot of people don't understand about leveraging AI — particularly building first-party models — is the amount of data that you need to produce in order to be able to make those things meaningful.
And I think this will be one of the things we see in capital equipment development and in lab equipment development: building in methods of collecting much, much more data about the processes than we do today.
So I think that's going to be one of the real opportunities. It's not necessarily, “Oh, how do I automate this process?” It's, “How could I change this process so it produces more and better data?” I think that's going to be one of the big questions that biotech labs ask themselves more and more moving forward as everybody leans harder and harder in that direction.
It's going to be about producing data. And again, I am not an AI doomer. I don't think large language models are coming for scientists or for highly skilled labor in that way. But I do think that in order to get value out of those types of services and technologies, capital equipment in particular is going to need to change in a lot of ways to collect far more data than we do today in order to drive that value.
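One way to read Anthony's point about capital equipment is that every process step should emit structured records a future model could learn from. The sketch below is purely illustrative: the step name, fields, and values are hypothetical, not drawn from any real instrument or from the conversation.

```python
# Hedged sketch: log each process step as a structured JSON Lines record so the
# data exists later for analysis or model building. All names are hypothetical.
import json
import time
import uuid
from datetime import datetime, timezone

def log_step(step: str, parameters: dict, outcome: dict,
             path: str = "process_log.jsonl") -> None:
    """Append one record per executed step; one JSON object per line."""
    record = {
        "run_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "parameters": parameters,  # setpoints, reagent lots, operator or robot ID
        "outcome": outcome,        # measurements, durations, pass/fail flags
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a single media-exchange step:
start = time.monotonic()
# ... perform the step, manually or via automation ...
log_step(
    "media_exchange",
    parameters={"volume_ml": 50, "temperature_c": 37.0, "media_lot": "M-2031"},
    outcome={"duration_s": round(time.monotonic() - start, 1), "visual_check": "pass"},
)
```

Accumulating records like these across hundreds of runs is the kind of data volume Anthony argues first-party models will need.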
David Brühlmann [00:19:41]:
Excellent. This has been fantastic, Anthony. What is the most important takeaway from our conversation?
Anthony Catacchio [00:19:48]:
In my mind, the most important thing we think about in this world is that if you want to automate something, you need to look at the whole picture. That’s the biggest thing — any kind of automation is really about understanding and codifying a process. So you need to deeply understand the process and the environment to do an effective job. It’s not actually a technology problem. It’s a systems development and requirements development problem.
David Brühlmann [00:20:16]:
Excellent. Thank you so much, Anthony, for coming on the podcast. Where can people connect with you?
Anthony Catacchio [00:20:23]:
LinkedIn, or through our website. You can always submit an inquiry at www.productinsight.com.
David Brühlmann [00:20:29]:
There you have it, Smart Biotech Scientists. You’ll find the links in the show notes. Please reach out to Anthony. And Anthony, once again, thank you so much for being on the show.
Anthony Catacchio [00:20:38]:
Thanks, David. It was great.
David Brühlmann [00:20:39]:
Anthony’s framework reveals a fundamental truth about bioprocess automation. Success isn’t about deploying the most advanced technology. It’s about disciplined discovery, phased validation, and knowing when innovation beats invention.
Teams that skip these principles waste resources on systems that cannot handle real manufacturing complexity. Get the approach right and automation accelerates your program. Get it wrong and you may build expensive failures.
All right, Smart Scientists — that’s all for today on the Smart Biotech Scientist podcast. Thank you for tuning in and joining us on your journey to bioprocess mastery.
If you enjoyed this episode, please leave a review on Apple Podcasts or your favorite podcast platform. By doing so, we can empower more scientists like you.
For additional bioprocessing tips, visit smartbiotechscientist.com. Stay tuned for more inspiring biotech insights in our next episode. Until then, let’s continue to smarten up biotech.
Disclaimer: This transcript was generated with the assistance of artificial intelligence. While efforts have been made to ensure accuracy, it may contain errors, omissions, or misinterpretations. The text has been lightly edited and optimized for readability and flow. Please do not rely on it as a verbatim record.
Next Step
Book a free consultation to help you get started on any questions you may have about bioprocess development: https://bruehlmann-consulting.com/call
About Anthony Catacchio
As Owner & CEO of Product Insight, Anthony Catacchio helps companies translate complex automation challenges into scalable, real-world hardware solutions. With a background spanning engineering leadership and product development, he focuses on structured, phased execution that validates core assumptions before full-scale buildout.
By combining robotics, AI, and disciplined systems engineering, he enables organizations to build and commercialize hardware products efficiently—while minimizing early-stage complexity, cost, and risk.
Connect with Anthony Catacchio on LinkedIn.
David Brühlmann is a strategic advisor who helps C-level biotech leaders reduce development and manufacturing costs to make life-saving therapies accessible to more patients worldwide.
He is also a biotech technology innovation coach, technology transfer leader, and host of the Smart Biotech Scientist podcast—the go-to podcast for biotech scientists who want to master biopharma CMC development and biomanufacturing.
Hear It From The Horse’s Mouth
Want to listen to the full interview? Go to Smart Biotech Scientist Podcast.
Want to hear more? Do visit the podcast page and check out other episodes.
Do you wish to simplify your biologics drug development project? Contact Us