Behind the Scenes with PPO: How AI is Transforming Foreign Material Detection in Meat Processing

In this episode of Behind the Scenes with PPO, Chief Customer Officer Heather Galt sits down with Kesha Bodawala, PPO’s Machine Learning Lead, to explore the critical role of data and artificial intelligence in the meat industry.

Kesha explains how AI models are developed, tested, and validated to detect foreign materials on the processing line—highlighting why comprehensive training data and customer collaboration are essential to success. Together, they break down complex concepts like the “black box” of AI, the importance of simulating real-world conditions in training, and how plants can prepare for AI-driven inspection.

The conversation also looks ahead at future innovations, including faster GPUs, multi-tasking AI systems, and the potential for AI models to train themselves—opening the door to even greater accuracy and efficiency in meat processing.

Whether you’re curious about AI in food safety, or you want to understand how PPO builds smarter detection systems, this episode offers an accessible inside look at the technology shaping the future of meat processing.

 

Video Highlights:

The Role of Data in the Meat Industry – 1:04

“In a plant, let’s assume you’re collecting data in different parts of the plant. What you can do with this data is some kind of trend analysis or pattern recognition, to understand where your plant is not doing so well and where it’s doing pretty well. So for example, you can understand the quality of the products flowing through the plant at various points.”

Customer Considerations for AI Implementation – 12:29

“What do I need to know to make sure that my plant is successful with detection of foreign materials using artificial intelligence? So the first thing is, it is very important to understand that the model can only be as good as the data. So if the data is missing something, that model will also miss the understanding of that part. That defines how you would talk with a customer about their needs. So if, let’s say, they want to do foreign material detection, in order to get a realistic understanding of their environment, we need to look at the different variations in their products and the foreign materials they’ve given us.”

Video Transcript


Introduction to PPO and AI in Meat Processing

Welcome to Behind the Scenes with PPO, a video series that offers a look behind the curtain at how meat processors collaborate with tech companies like PPO and why.

I’m Heather Galt, the Chief Customer Officer here at P&P Optica.

And today, I’m joined by Kesha Bodawala, our machine learning lead here at PPO.

Kesha has a master’s degree in computer and electrical engineering from the University of Waterloo and is part of our research and development team. She focuses on developing our AI technology, which detects foreign materials on the line. There is no one better to help us understand how machine learning models are developed and how they solve some of the biggest challenges in foreign materials detection in the meat industry. So let’s dive in. Welcome, Kesha. How are you?

Good to see you. I’m doing great. Excited to be here.

Awesome. Well, thanks for joining us today. So, Kesha, let’s jump in. And I’m gonna use my notes a little bit because I’m not the technical expert here.

 

The Role of Data in the Meat Industry

Right? So I’m gonna need a little guidance in our conversation, but I really appreciate you joining us. So, Kesha, when most people think about the meat industry, they think about, you know, big equipment, lots of volume. They think about the products that are being processed, but they don’t think a whole lot about the data.

And I’m really curious to see sort of how you see the role of data in the meat industry and why its importance is increasing.

Right. So data is king, not just in the meat industry, but in every other industry possible.

So let’s think about a plant.

In a plant, let’s assume you’re collecting data in different parts of the plant. What you can do with this data is some kind of trend analysis or pattern recognition, to understand where your plant is not doing so well and where it’s doing pretty well. So for example, you can understand the quality of the products flowing through the plant at various points.

You can also understand if you are wasting a lot of product at certain points in the plant. You can understand the throughput.

You can track it over time: whether quality is increasing or decreasing, whether throughput is increasing or decreasing, all sorts of stuff. So that is the trend analysis part of the data. Then there’s this entirely new aspect, which is training AI models using the data you get from the plant.
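As a rough illustration of the trend analysis described here, a plant could track a daily quality score over time and smooth it to see where things are trending down. The field names and numbers below are made up for the sketch; this is not PPO’s actual tooling.

```python
# Minimal sketch of plant trend analysis. All data here is illustrative.

daily_quality = {
    "2024-01-01": 0.97,  # fraction of product passing inspection that day
    "2024-01-02": 0.95,
    "2024-01-03": 0.91,
    "2024-01-04": 0.96,
    "2024-01-05": 0.89,
}

def moving_average(values, window=3):
    """Smooth a series so day-to-day noise doesn't hide the trend."""
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

scores = list(daily_quality.values())
trend = moving_average(scores)
# A falling trend flags a point in the plant that is "not doing so well".
print(trend)
```

The same idea extends to throughput or waste: any measurement collected at a point in the plant can be tracked and compared over time.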

So think of any question or any task or any problem that you are facing in a plant.

If you collect the correct data in the correct format, there’s a really good chance that you will be able to create an AI model to answer that question or solve that issue.

So that is where the real power of data lies.

Like, you can do unimaginable things using data in your plant.

AI vs. Traditional Data Analysis

So that sounds really cool. And I can think about trend analysis, like, I’ve got a spreadsheet, maybe I have foreign materials incidents that have happened over the last five years. So how would AI take that a step further, if we use that specific example of foreign materials? What could I do with those AI models that isn’t just a trend analysis telling me what’s happened in the past?

Right. So to understand how you can use that data to train a model, let’s take a step back and think about how our brain works.

So a model is essentially trying to replicate our brain. Our brain takes information from our surroundings using different sensors, like eyes or nose or ears, and processes all of that together to create an understanding of what’s happening around us.

Just like that, models take different data points from different angles, and then create an understanding.

And then based on that understanding, they sort of make a decision or perform a task.

Now, think about driving a car. Right? You are driving and you see a red light.

And then you know that you have to stop at the red light.

Mhmm.

But you have a pretty good guess as to when you wanna press the brakes. Yep. And how hard you wanna press the brakes.

But if I asked you to write down the exact math going on inside your brain that tells you when to press the brake, or how hard, would you be able to?

Probably not. It’s like trying to teach my kids to drive, right? Trying to explain what goes on in your brain is really hard. Yeah. Right.

Exactly. Right. And the very first time you tried to use the brakes, years and years ago, were you that confident in your ability to stop at a red light? No.

No. Right. So over a period of time, as we keep driving, we keep using brakes more and more, we kind of develop an understanding of how brakes work. We still don’t know.

We still don’t know what’s going on inside here, but we intuitively develop an understanding of what that looks like.

So the models work in the same way. The data that we get from the plants helps the model create an understanding, and then you can use that understanding to do various tasks. Got it. Now that doesn’t necessarily mean that you can explain the models in a perfect way.

You can’t. You can’t see behind the curtain. It’s kind of a black box. So how it works, you can’t always tell.

Now there are certain models, very simple models that do very simple tasks, that you can explain, that you can write a mathematical equation for. But most of the time, when we talk about deep learning and complex tasks like facial recognition or foreign material detection, it is like a black box. And because it’s a black box, the data becomes very, very important, because that’s the only way to know whether the model is working well or not, and that’s the only way to teach that black box.

And your model can only be as good as your data. So that is sort of how your data trains your model and helps you understand what that model does.

Training AI Models for Detection

Got it. Okay. So once that model is trained, it kind of knows how the person would behave. So in the case of braking, it knows when and how to stop the car even if we can’t articulate it.

Or in the foreign materials world, it knows: okay, that’s a foreign material, that’s not a foreign material.

Right? So then at that point, what happens next? How does that then get validated in a plant?

How do we know for sure that our models are working in a facility and doing the same job, or hopefully even a better job, than the people? How do we validate that?

Right. So, okay, let’s think about what you mentioned earlier: trying to teach a kid how to drive. If you wanted to validate how well your kid is driving, what would you do? You would have them take a test, and you would have certain parameters in your mind: I wanna see if they can stay under the speed limit, or take their turns correctly, or park correctly. So you have certain parameters, and you design your test to test those parameters.

At the same time, you need examples. You decide that you wanna test their parking abilities, but then you need to go into the details of, okay, what kinds of parking do I wanna test? Where would I do that? How would I do that? So there’s a lot of designing you have to do before you can be sure whether that model has learned what it needs to learn. Right?

Designing Effective Tests for AI

So, and if we take even a step back, before you give that test, you might even want to teach all of that explicitly first.

So that’s exactly how models work. If we want our models to detect, let’s say, foreign material, we need to first show them examples of what the product looks like, what the belt looks like, and what the foreign materials look like. If you want to do fat-lean analysis, we need to show what fat looks like and what a perfectly lean product looks like. And then, once we show them examples and we’re confident that they have learned enough, you test that by asking them questions.

You ask the model, what is this? Is it a foreign material? Is it a product? Is it a belt?

And you get their answers. You do some math on top of them to understand how many times the model was correct versus how many times it was incorrect. And maybe it was performing worse last week, but it’s performing better this week. That is how you gradually get to know whether your model is training in the right direction or not.

And once you feel satisfied with your tests, then you go: yes, I think my model is ready. I’m now ready for the final, final test. And in that final test, you give the model data it has not seen before, because that’s what it has to work with. So nothing you have already shown to the model: a new dataset that’s similar to the data you’ve already shown it, but of a different type.

And then you show that to the model, and if the model works well, that’s when you can tell that, yes, I’ve got a good model.
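The validation loop Kesha describes, scoring a model on labelled examples and then on a held-out set it has never seen, can be sketched in a few lines. The tiny rule-based “model” below is a stand-in invented for illustration, not PPO’s actual detector, and all the numbers are made up.

```python
# Sketch of model validation: count correct vs. incorrect answers,
# then run the "final test" on data the model has never seen.

def toy_model(pixel_intensity):
    """Pretend classifier: flags unusually dark spots as foreign material."""
    return "foreign_material" if pixel_intensity < 0.3 else "product"

def accuracy(model, labelled_examples):
    """Fraction of examples where the model's answer matches the label."""
    correct = sum(1 for x, label in labelled_examples if model(x) == label)
    return correct / len(labelled_examples)

# Data used during development...
training_set = [(0.9, "product"), (0.1, "foreign_material"), (0.8, "product")]
# ...and a held-out set the model has not seen before.
holdout_set = [(0.95, "product"), (0.2, "foreign_material")]

print(accuracy(toy_model, training_set))  # how training went
print(accuracy(toy_model, holdout_set))   # the real test
```

Only the held-out score tells you whether the model generalizes; a high score on data it has already seen can simply mean it memorized the examples.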

At PPO, the first thing we need to take care of is that the model has seen all kinds of possible variations. Mhmm. Because the meat industry is ever-changing. Today you see fresh product, tomorrow you might see frozen product, and the day after you might see really wet, juicy, sloppy product.

All three products are different. They might look different under different cameras. So you have to make sure that when you do the training, you show all the different types of variations and all the different types of foreign materials to the model, so it can create a deep understanding of what they look like. And that is why, at the beginning of a project, we sit with the customer and do this discovery process of understanding their environment, their plant, their products, everything.

Based on that, we design our dataset, collect the data, label it, and then train the model using this diverse dataset.
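The discovery-and-dataset-design step described above can be sketched as a simple coverage check: before training, confirm that the labelled dataset contains every product variation the plant expects to see. The variation names and records below are illustrative assumptions, not PPO’s actual schema.

```python
# Sketch of a dataset coverage check before training. Illustrative only.

# Variations identified during discovery with the customer.
required_variations = {"fresh", "frozen", "wet"}

# A (tiny) labelled dataset as it might exist mid-collection.
labelled_dataset = [
    {"image": "img_001", "variation": "fresh", "label": "product"},
    {"image": "img_002", "variation": "frozen", "label": "foreign_material"},
]

seen = {example["variation"] for example in labelled_dataset}
missing = required_variations - seen
if missing:
    # The model can only be as good as the data: collect these first.
    print("Still need examples of:", sorted(missing))
```

If a variation is missing here, the trained model will also be missing its understanding of that variation, which is why this check belongs at the very start of the project.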

So, Kesha, if I’m understanding things correctly, it sounds like there are almost three phases in the model development process. And what I’m hearing is that the first phase, where you have to expose the models to all the possibilities, gathering all the variants of what normal looks like, is a really important phase. And then from there, we take the models and actually teach them, whether we can articulate it or not, what normal looks like and then what’s not normal. And then we have to test those models, is what I think you’re saying. So there’s a learning phase with all the data, then a phase of teaching and testing, and then a final phase of confirmation. Is that a fair assessment of how we do it?

Yes. I think I think that’s correct. Yes.

Okay. So it sounds really complicated. And, you know, I’m not the technical person here at PPO.

Customer Considerations for AI Implementation

So if I’m a customer, what do I need to know about that whole process? Those three phases sound really complex. What do I need to know to make sure that my plant is successful with detection of foreign materials using artificial intelligence?

Right.

So the first thing is, it is very important to understand that the model can only be as good as the data. So if the data is missing something, that model will also miss the understanding of that part.

So that defines how you would talk with a customer about their needs.

So if, let’s say, they want to do foreign material detection, in order to get a realistic understanding of their environment, we need to look at the different variations in their products and the foreign materials they’ve given us. But apart from that, there are other external factors: what is the temperature of the facility? What is the humidity of the facility?

Is that going to change? If it changes, by how much? Sometimes we even care about the temperature of the product itself.

Is it going to be frozen or not frozen? Are we going to find things that could be foreign material, but the customer doesn’t want them treated as foreign materials? For example, there could be ink on their product. Ink is not product. Ink is not meat.

But if the customer doesn’t want us to reject on that ink, we need to know that in advance so that we can teach the model not to reject on ink. Things like that. Sometimes even bone can be a foreign material; it’s in this gray zone. So clearing up all those gray zones is vital to defining what data we need to collect. And because it’s the beginning, like you said, the first part, if that part has issues, it’s going to cause issues in all the other parts. And then we’ll have to start from the beginning again.

 

The Importance of Comprehensive Training Data

So if I go back to that driving analogy, it sounds to me like it’s teaching your kid all the skills they need to drive, but only doing it in one neighborhood and only on sunny days during the day, and then expecting that same kid to drive in a different neighborhood at night in the rain without having prepared them for that. So you need to understand all the contexts in which that model will operate to really set it up to be successful. Did I get that right?

Right. Right.

Okay. Awesome.

So one last question for you, Kesha. The food industry is evolving really quickly. And and what kind of innovations do you see in AI and machine learning that you think are gonna be most impactful for the meat industry in the next few years?

Right.

Future Innovations in AI for Meat Processing

So some of the things I’m really excited about are the advancements in AI hardware and AI software. What I mean by that is, GPUs are becoming faster and faster, and they’re more accessible too. Five years ago, if you wanted to do something in real time with AI, you were limited to very simple models that could do very simple tasks, which is underwhelming because you can’t do multiple things at the same time. But now, because GPUs are so fast and powerful, you can do multiple things at the same time. So you can do foreign material detection, and you can do fat-lean analysis if you want it, and you can keep an eye on your throughput.

Maybe those small modules can talk to each other, and maybe they could come up with one overall quality measure based on these different modules. So I think just that advancement on the GPU side and the hardware side is enabling a lot of options here, and we can use very complex architectures and do complex stuff. That’s what I’m really excited about, and it’s just going to get better. So it’s a really big plus for us.

And the other part is, I feel like we will get to a point where AI will train AI models.

So currently, once we get data at PPO, we machine learning people have to do a lot of work to train those models, and it requires human intervention at different points, which slows the process down a little bit.

But if that were all automated, where the data arrives, you press go, and two days later you have the models ready, that’s going to be huge, because the models are going to reach the customer right away. At the same time, it’s going to free us up to do a lot of R&D work. We can focus more on enhancing our models, maybe getting better accuracy, or detecting at a smaller size, and so many other options. So I think that’s what I’m really excited about.

Sounds like we’re getting more and more tools in our toolbox all the time to make AI even more powerful every day.

Yeah.

That’s amazing.

That’s correct.

That’s awesome. I’m really excited to see what PPO gets to do next with all those really cool new tools.

Same here. Yeah.

Conclusion and Next Steps

Well, thank you, Kesha, so much for your time this morning. Don’t forget to check out the other videos in our Behind the Scenes series, including our related interview with James Spere, the head of software and data, where he pulls back the curtain on artificial intelligence, machine learning, and automation.

Check out the link to that video in the description below. See you next time.

 

Let's Work Together

PPO is ready to partner with you to deliver safer, higher quality food to your customers.
