A selection of our interviews from Automate 2025 in Detroit, Michigan, including:
– Alex Shikany, A3
– Ronald Mueller, Vision Markets
– Eric Hershberger, Cognex
– Matt Puchalski, Bucket Robotics
– Frantisek Takac, Photoneo
– Steve Kinney, Smart Vision Lights
1. Market Insights with Alex Shikany (A3)
- A3’s EVP discusses the massive turnout at Automate 2025 and the remarkable innovations in AI, robotics, and machine vision.
- Focus on 3D vision technologies, humanoid robotics, and collaboration in automation.
2. Ronald Mueller (Vision Markets) – Trends in Robotics and Vision
- Ronald shares his thoughts on the strong integration between machine vision, robotics, and AI technologies.
- Insights on how robotics and machine vision are interdependent, and how AI is being cautiously implemented.
3. Cognex’s Smart Vision with Eric Hershberger
- Eric highlights Cognex’s latest tech, including embedded vision and AI-powered algorithms.
- Deep dive into edge learning OCR tools, high-speed cameras, and vision validation techniques.
4. Defect Detection Innovation with Matt Puchalski (Bucket Robotics)
- Matt introduces Bucket Robotics and its unique approach to defect detection using CAD-based vision systems.
- Insights into the challenges of aligning defect definitions in manufacturing and the advantages of digital solutions in defect detection.
5. Revolutionizing 3D Vision with Frantisek Takac (Photoneo)
- Frantisek discusses Photoneo’s breakthrough in multi-view 3D localization and bin picking technologies.
- How Photoneo’s innovations in 3D scanning and robot guidance are changing manufacturing, especially in automotive and logistics.
6. Machine Vision Lighting with Steve Kinney (Smart Vision Lights)
- Insights into the emerging applications for machine vision, including agricultural automation and the role of custom lighting solutions.
- Steve explains the importance of lighting in machine vision applications, especially in industries like agriculture and paper production.
Episode transcript:
1. Market Insights with Alex Shikany (A3)
Josh Eastburn
Hello and welcome to the MV Pro podcast. This month, MV Pro Media President Alex Sullivan and I put boots on the ground at Automate 2025 in Detroit, Michigan. As you’ll hear, this year had a big turnout, plenty to report on, especially when it comes to robotics and AI. But we were there laser-focused on highlighting only the best in Machine Vision. Before I go any further, thank you to everyone who invited us to stop by for an interview. We came back with a lot of great content, which I diligently sorted through to present to you today. Let’s get into it. To start things off, let’s hear a couple of perspectives on the market. We sat down first with Executive Vice President of A3, the Association for Advancing Automation, Alex Shikany.
[00:00:45.680] – Alex Shikany (A3)
This is a long time coming. This is a yearly activity for us, a lot of lead up and prep, but day one was phenomenal. This is going to be the biggest Automate we’ve ever done. It’s going to be over 40,000 people, 875 plus companies, 340,000 plus square feet. The amount of innovation that you see out here, vision pretty much in every booth, AI on pretty much every marketing material and robot that you see, humanoid robots walking around. It’s incredible the pace of change that we’re seeing.
[00:01:15.420] – Alex Sullivan
Are there any particular applications, products, areas that you’re particularly excited about to be seeing at this time of the show?
[00:01:22.500] – Alex Shikany
I’m excited all the time about AI and the conversation around humanoid robotics. But I started my career when I first joined A3, really focusing a lot on vision technology. So seeing the leaps and bounds, more of the 3D vision technologies that are out there and more of them being deployed, how artificial intelligence is interfacing now with vision on the software side to really streamline the deployment and the amount of applications you can do with vision technology. I would say humanoids and the advances in vision, but you see a ton of AMR companies out there.
[00:01:56.840]
AMRs are everywhere moving around. Humanoids, like I mentioned, and then collaboration: humans working alongside these technologies. That is a theme that runs throughout this show floor. You're going to be able to walk up to robots, touch them, move them, program them. There's one right behind us. You can program it to putt a golf ball down the lane, and that was cool; I actually did that! Yeah, it's very interactive, and it's collaborative. It's not walled off. The technology isn't at a distance; it's approachable. And for our kids, when they get to working age, it's going to be innate. Being around automation and comfortable with it is going to be ingrained, like iPads are for this generation.
2. Ronald Mueller (Vision Markets) – Trends in Robotics and Vision
[00:02:34.060] – Josh Eastburn
I was also fortunate enough to bump into our very own contributing editor, Ronald Mueller of Vision Markets, who you heard from back in Episode 2. Here’s what he had to say about what was on show this year.
[00:02:45.200] – Ronald Mueller (Vision Markets)
It is day one for me, and I have not even managed to cover half of a hall because it's so busy and there are so many vision companies. When you're just talking to everyone for a few minutes, the time goes by. It's always impressive to see how much of a role machine vision plays in this automation show. That's impressive over and over again. I'm confident that as I walk further over the next two days, I will be discovering even more companies and challenges and applications which I have not witnessed before.
[00:03:17.120] – Josh Eastburn
Alex and I were remarking on how the overlap between robotics and vision and AI seems to be really strong.
[00:03:24.770] – Ronald Mueller
That’s right. No, I can really confirm. When you see vision and cameras on the application level, it’s almost always in some way or the other connected to robots. You have the 3D cameras, you have the 2D cameras with hand-eye calibration, you have robots who are using cameras basically to navigate certain inspection points and facilitate an appropriate inspection from that. It’s manifold. As you say, robotics and vision are very much thought together in the applications.
[00:03:57.420] – Josh Eastburn
Is this all going to become the same technology in the future? It’s all going to be integrated?
[00:04:00.900] – Ronald Mueller
Yeah. The only thing I have not bumped into so much yet is AI-based applications. As we've discussed on other occasions, people are getting more cautious about that. It's more selective now: where AI is actually implemented and applied, where you stick with traditional machine vision technologies, and what it really takes to succeed with an AI approach. That has been underestimated, and it still is, but people are getting more realistic now. Accordingly, there is less uncertainty in the development phase of systems. For some systems, you'll clearly see, okay, this shall be solved with AI. For others, you are fairly clear from the beginning that it's a no: let's stick to the standard way until we understand it better.
[00:04:55.340] – Josh Eastburn
Yeah, that makes sense.
3. Cognex’s Smart Vision with Eric Hershberger
[00:04:56.500] – Josh Eastburn
As both Alex and Ron mentioned, many vendors were showing off their latest embedded vision algorithms. But Eric Hershberger, Principal Applications Engineer at Cognex, not only gave us a great tour of that company's latest smart tech, he also helped us think about the interdependent roles of AI algorithms and traditional machine vision fundamentals.
[00:05:21.340] – Eric Hershberger (Cognex Corporation)
Every year, I say I'm not going to build their demos, but I build their demos every year, so that's all it is. My favorite demo, though, is right here. We have a handheld A700 barcode reader. Essentially, what I'm going to do is read your badge. We have a PLC that then sends your name from the badge to our laser, an LMT Dominator, which marks the pen with your name on it. Then we have our brand new camera, the 8902. It's a new camera that just came out in our micro series. It's really small and comes in all kinds of resolutions, and it automatically triggers and reads the text to make sure it was marked correctly.
[00:06:00.740] – Josh Eastburn
That was incredibly fast, by the way. For those who didn't actually see that, it took just a fraction of a second.
[00:06:04.100] – Eric Hershberger
Our Edge Learning OCR tool, it reads to verify that it was marked correctly. Very standard application. I just love this demo. They have 130 pounds of pens that I purchased for this.
[00:06:14.390] – Alex Sullivan
That is a very impressive piece of kit indeed.
[00:06:17.020] – Eric Hershberger
Super easy integration, right? There's nothing really complicated here, but I just love how simple it is to share the data with the laser and then take a picture and verify that everything is correct. So, that's the newest one. Our other newest camera in the same series is right over here. This is the 89C, our 12-megapixel color imager. It's in the same form factor; it's just an update to our micro-series cameras. The beautiful thing I love about it is that it's a really large, high-resolution image in a very small package. We have a couple of options to power it also. Originally, this series of cameras was Power over Ethernet only. Now it's 24 volts or Power over Ethernet. We have a lot of automotive customers that don't like Power over Ethernet. You can just throw 24 volts at it and still communicate with your camera really well. It's running all of our new Edge Learning tools. Edge Learning is a subset of deep learning. Deep learning requires a GPU and a bunch of images to train it, and it's fantastic. I absolutely love deep learning. I use it all the time, and I always make the joke that it's taking my job.
[00:07:22.740]
I have applications that would take me months to program. As a Principal, I get the really complicated applications, and now I can do that in a couple of hours with deep learning. I love it, which gives me more opportunity to do more interesting applications, which I really enjoy, too.
[00:07:36.560] – Alex Sullivan
Tell us, give us an example of some of the more interesting applications that you’re involved with.
[00:07:41.060] – Eric Hershberger
The best one was an OCR on an engine block. It's eight characters etched into the sand core. It originally took me two months to program, and now it took about a half hour. The really neat thing here, though, is a new software feature that we just came out with a couple of weeks ago, a project I've been working on for six months. It's called vision validation. I've programmed this application. I'm looking for the presence or absence of bolts and the paint marks on the engine head, and making sure all the cams are in the right orientation. What happens if somebody comes in because we're not making production, something's failing? You can come in and make a change to the program. Sometimes nothing ever changes on a production floor; it's always the same. But if you make a change, how do you then know that your camera is actually passing good parts and failing the bad parts? The idea of vision validation is that if I make a change to this program, the system is going to automatically flag it and say, hey, something has changed. How do we know that we're making good parts or bad parts?
[00:08:43.540]
So with vision validation, I have a set of images stored on the camera: 10 good, 10 bad. Then we can run it and test whether, after that change, it still passes good parts and fails bad parts correctly. Because I recorded them, the program knows exactly what to expect from each of those images. You can see here in a second that it's invalid. That change means we're actually now making bad parts as we move forward. That was a bad thing, unless you like making bad parts and dealing with recalls; we're going to contain this. No big deal. It's a really super easy way. Now I know that was a terrible change. Run the validation again, and now it's going to tell me that we have a valid program and we're back to making good parts. You might be thinking of FDA validations, where for any change you have to revalidate your entire line. But how do you know over time that the parts you're making are still good?
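For readers who want to picture what this kind of validation amounts to in code, here is a minimal sketch of the idea: a regression check over a stored reference set. It is purely illustrative; the function names and image labels are assumptions, not Cognex's actual In-Sight API.

```python
# Minimal sketch of a "vision validation" style regression check (hypothetical,
# not the Cognex In-Sight API): re-run a stored reference set of known-good and
# known-bad images after a program change and confirm the expected verdicts.

from typing import Callable, Dict

def validate_job(inspect: Callable[[str], bool],
                 reference_set: Dict[str, bool]) -> bool:
    """Return True only if every stored image still gets its expected verdict.

    inspect(image_path) -> True for "pass", False for "fail".
    reference_set maps stored image paths to the verdict recorded at setup time.
    """
    for image_path, expected_pass in reference_set.items():
        if inspect(image_path) != expected_pass:
            print(f"INVALID: {image_path} expected "
                  f"{'pass' if expected_pass else 'fail'}")
            return False
    print("VALID: program still passes good parts and fails bad parts")
    return True

# Example reference set: 10 known-good and 10 known-bad images stored on the camera.
reference = {f"good_{i:02d}.bmp": True for i in range(10)}
reference.update({f"bad_{i:02d}.bmp": False for i in range(10)})
```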
[00:09:38.230] – Alex Sullivan
It’s come up on the screen there again within just a few seconds, just to say that the job validation result is valid. So yeah, it’s very simple, easy to use.
[00:09:45.460] – Eric Hershberger
That data can be passed back to your MES system, your ERP, your PLC. It's important to know that the change was made; we question all of our parts after that.
[00:09:55.010] – Josh Eastburn
That model, you said, is also stored globally.
[00:09:57.540] – Eric Hershberger
Yeah, it’s in the camera. All of our Insight Vision Suite cameras have that capability, which is a firm we’re operating now.
[00:10:03.210] – Alex Sullivan
Really simple to integrate for production directors, production managers, things like that. They can come to you.
[00:10:07.280] – Eric Hershberger
Huge time saving. One more fun one. We got a second? This is one of my favorite demos. It’s an oldie, but a goodie. It’s this one right over here.
[00:10:16.620] – Josh Eastburn
I was noticing this one earlier.
[00:10:18.010] – Eric Hershberger
This is our In-Sight 3801 Vision Suite high-speed camera. It does 225 frames a second for imaging, and it's looking at vials right now. We're checking the caps, good cap or bad cap, and making sure the labels on the vials are correctly positioned. That's partly aesthetics; we want to make sure. But caps are important: making sure they're sealed, expiration dates, things like that. I'm also trying to be really sneaky here and teach you some basics in machine vision. This 3801 has our high-speed liquid lens, but also our new Torch light. It's got red, blue, green, white, IR, and UV lights built into it. I'm changing the light color every 100 inspections, and you can see the cap start to change color. It's a blue cap. With the red light, you're going to see that the blue cap looks really dark to the camera. It's a black-and-white camera, and the red light is absorbed by the blue cap and doesn't reflect back at us. But you're going to see in a second here that when the blue light turns on, the cap turns white, because the blue is reflecting back and the camera sees it.
[00:11:23.350]
You see a lot of companies say that with deep learning tools, you don't need image formation that's as good; you don't really need as great a lighting or as high-resolution a camera, because deep learning can do a lot. But if you have the basics set up and have a really nice image, programming just becomes so much easier. So much less effort, so much less everything.
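As a rough back-of-the-envelope illustration of the effect Eric describes, here is a tiny sketch of how a monochrome camera's pixel values track the match between illumination color and surface reflectance. The reflectance numbers are made up for illustration, not measured values.

```python
# Rough illustration of why a blue cap looks dark under red light to a
# monochrome camera: the pixel value scales with how much of the incident
# wavelength the surface reflects. Reflectance numbers below are made up.

blue_cap_reflectance = {"red": 0.05, "green": 0.15, "blue": 0.85}

def mono_pixel_value(light_color: str, reflectance: dict, exposure: float = 255) -> int:
    """Approximate grayscale response: illumination matched against surface reflectance."""
    return round(exposure * reflectance[light_color])

print(mono_pixel_value("red", blue_cap_reflectance))   # ~13 -> cap appears nearly black
print(mono_pixel_value("blue", blue_cap_reflectance))  # ~217 -> cap appears bright/white
```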
4. Defect Detection Innovation with Matt Puchalski (Bucket Robotics)
[00:11:45.000] – Josh Eastburn
Not to be outdone by the incumbents, however, this year’s show had a great turnout from a number of startups in the automation space as well. We got a chance to talk with new Y Combinator graduate Matt Puchalski about his company, Bucket Robotics, which is hoping to make waves in the space of synthetic defect detection.
[00:12:04.620] – Alex Sullivan
Matt, why don’t you introduce yourself? Tell us a little bit about you and Bucket Robotics, and then perhaps we can talk about some of the products and some of the application areas that you’re offering here this week.
[00:12:12.880] – Matt Puchalski (Bucket Robotics)
Excellent. Cool. Yeah. Thanks, Alex. I am Matt Puchalski. I am the CEO here at Bucket Robotics, a startup that is building vision systems for manufacturing, specifically focusing on defect detection. Our really exciting pitch is that we build our vision systems based off of CAD (Computer-Aided Design) files of components.
[00:12:38.050] – Josh Eastburn
Say more. I’m so interested in how that plays out.
[00:12:41.690] – Matt Puchalski
Cool. Yeah. So one of the big things we are solving is a universal problem in quality inspection, which is trying to get alignment on what a defect looks like, and the perpetual problem of collecting new data, labeling new data, qualifying new data, and then hearing from your customer, hey, we're changing the material, or we're changing the line, or we're changing what we've decided is a defect. So congratulations, I hope you have a team that can work through the weekend to catch all these new criteria we have. The way we handle that is you have a CAD file of the component you actually care about, and we provide you a really easy way to dial in those defects on a simulated data set. That's what I'm pointing at. But for your audio listeners, yeah, you can't see it. I will describe what we're seeing.
[00:13:40.747] – Josh Eastburn
Yeah, talk us through it.
[00:13:41.620] – Matt Puchalski
So the idea behind it is you upload a CAD file of your component. Then we have a really intuitive way of answering all of those questions that an SI (System Integrator) would ask you. What kind of background do you have? What is the part? What's the material? What's your colorway? All of these things that usually take meetings and meetings, you can just click through incredibly easily. Then dial in your defects: I care a whole lot about scratches on this one face and warp on the other side. Traditionally those are different data sets, different camera systems, and that's crap, to be honest. We support aluminum, steel, and plastic materials for the product that we're launching right now.
[00:14:24.940] – Josh Eastburn
What’s that product called?
[00:14:25.730] – Matt Puchalski
Our product for the data set itself is called Wrench ML.
[00:14:30.020]
Yes, we are giving this defect generation system to our customers for the very first time. For our first launches of defect detection based on CAD, we would fly to a facility, and then we would be the ones reading through quality manuals. We would ask our customers, hey, how do you onboard a new quality inspector? What does the binder look like? And then we were tuning those knobs ourselves. And then we realized, wait, no. With a black box like computer vision, what people really want is to be able to trust what that actual data set looks like. So if I can give you the tool that lets you say, oh yeah, okay, I know exactly, this matches my camera, this matches my definition of scratches and bumps and defects, then I will trust your product, and I can move a lot faster. I'm not waiting on an email from some startup dork who's hanging out at a different facility.
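To make the CAD-driven idea concrete, here is a purely hypothetical sketch of what a defect specification for synthetic data generation might look like. The field names and values are assumptions for illustration, not Bucket Robotics' actual Wrench ML schema.

```python
# Hypothetical sketch of a CAD-driven synthetic defect specification, in the
# spirit of what Matt describes (not Bucket Robotics' actual schema): the user
# points at a CAD file, describes the scene, and dials in which defects matter
# on which faces; a renderer would then generate the labeled training set.

from dataclasses import dataclass, field

@dataclass
class DefectSpec:
    defect_type: str          # e.g. "scratch", "warp", "sink_mark"
    face: str                 # CAD face or region the defect applies to
    min_size_mm: float        # smallest defect that should count as a reject
    severity: float = 0.5     # 0..1, how pronounced rendered defects are

@dataclass
class InspectionSpec:
    cad_file: str
    material: str             # "aluminum", "steel", or "plastic"
    background: str           # e.g. "conveyor_gray"
    colorway: str             # part color/finish
    defects: list = field(default_factory=list)

spec = InspectionSpec(
    cad_file="bracket_rev_c.step",
    material="plastic",
    background="conveyor_gray",
    colorway="gloss_black",
    defects=[
        DefectSpec("scratch", face="front_face", min_size_mm=0.5),
        DefectSpec("warp", face="back_face", min_size_mm=1.0, severity=0.7),
    ],
)
```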
[00:15:26.090] – Alex Sullivan
Are there any specific or particular markets you think this really has a key advantage in or is a bit of a game-changing piece of kit that you think this will help with?
[00:15:34.870] – Matt Puchalski
Yeah. We have incredible interest from three areas. The first is injection molding for Class A surfaces in automotive. It's this perpetual problem of, I care a whole bunch about my defects because this is going into a very expensive vehicle, up to the point where, oh no, even a small defect means waste, means carnage, right? How do you actually dial these kinds of things in? Then the color cosmetics industry, because being able to synthetically generate the data set lowers the barrier to entry when you're changing what your product coloration or your packaging looks like. All of these are things you're constantly rechurning, but if you're doing it digitally, it's significantly easier. Then the third is defense, where you have a cold start problem: you don't have a high amount of volume around your titanium whatever, and you can't afford a bunch of scrap, but you know what the defects will probably look like. How can we dial that in?
[00:16:37.910] – Josh Eastburn
You came through Y Combinator?
[00:16:39.650] – Matt Puchalski
Yeah. We’ve been doing this for 11 months now. We’re coming up on the anniversary of going through Y Combinator Startup Accelerator. In the summer of 2024, we raised our first seed round in October, and we’ve been growing the team since then. We have offices in Pittsburgh, Pennsylvania, and in San Francisco, and we are hiring.
[00:16:58.620] – Josh Eastburn
Very exciting.
5. Revolutionizing 3D Vision with Frantisek Takac (Photoneo)
[00:16:59.910] – Josh Eastburn
But possibly my favorite interview of the show pushed the frontiers of smart cameras, 3D vision, and deep learning, not to mention robotics. Slovakian company, Photoneo, was recently acquired by Zebra Technologies on the strength of their multi-view 3D localization technology. Global strategic partnerships manager, Frantisek Takac, told me all about it.
[00:17:24.700]
I sat in on your talk on 3D vision for multi-view bin picking, and I liked the overview that you gave of some of the different approaches. What's the traditional approach?
[00:17:35.330] – Frantisek Takac (Photoneo)
When it comes to 3D vision, the standard approach was a fixed sensor; for example, for bin picking, a fixed sensor mounted over the bin. You were limited to only one angle, so you were able to see the parts, but unfortunately the more difficult geometries were not visible to the fixed sensor. It was practically unable to see those surfaces. That would require a completely different approach: more complicated objects, big objects, overlapping objects, we were unable to pick. Therefore, we have developed something that we call multi-view localization. Multi-view localization uses two static 3D scanners mounted at an angle, and we are able to use a red and a blue laser to scan at the same time, which drastically decreases the scanning time. This multiple-viewpoint acquisition means the point clouds captured by each scanner are merged together into one union point cloud. The localization runs on this union point cloud, and we are able to see much more detail. We are able to overcome occlusions and shadows, and we are able to navigate the robot more precisely, even for parts which are, for example, standing along the wall, like L-shaped parts, especially in the automotive industry.
[00:18:59.330] – Josh Eastburn
Flat pieces I imagine.
[00:19:00.560] – Frantisek Takac
Flat pieces which are in the corners of deep bins. So practically, the static sensor is unable to see them. With the multi-view approach, we have this two-scanner setup, and then we have our MotionCam-3D, which is our proprietary 3D camera with parallel structured light technology, and we are able to scan on the fly. While the PhoXi 3D sensor is used for static scenes, the MotionCam-3D is used for dynamic scenes. It is capable of capturing moving objects at up to 40 meters per second. Now we have also introduced scanning with a blue laser, which significantly improves performance, especially on semitransparent and transparent objects. But it also gives you the possibility to scan together with the red laser, creating a scanning tunnel, which is very interesting, especially for quality inspection and other 3D-vision-guided robotic tasks. With the MotionCam-3D, in the second approach to multi-view bin picking, we place the camera directly on the robot body, as end-of-arm tooling, and we scan on the fly. While we are changing the positions of the robot to get multiple viewpoint perspectives, we don't have to stop the scanning.
[00:20:13.180]
It’s a camera, so we can scan on the fly. Now, in real-time, it’s capturing the object from a multiple perspective, so it’s completing the point cloud on the fly. The localization is running on this completed point cloud from multiple perspectives. It’s like having a multiple pair of eyes watching an object and completed its geometry and 3D model in real-time. That’s something really exclusive for the free-division guided robotics.
[00:20:41.030] – Josh Eastburn
Because all that computation is happening on the camera as it moves around.
[00:20:44.970] – Frantisek Takac
That’s the beauty that the image processing is directly on board. We are using Nvidia technology and GPUs directly on board our devices. The image processing is done directly on board of the device, and we have a passive cooling. Also, we have a carbon body. So, the carbon fiber body, ensuring us very good thermal stability. The motion cam, it’s also equipped with a proprietary CMOS sensor, which we are calling Compass, computational image sensor. This is with a mosaic shutter. The mosaic shutter is very unique. You can find the sensor with the global or rolling shutter, which is a standard for the machine vision. But the mosaic shutter is the shutter that when you can turn on and off each of the individual pixels, so you are able to quantitize the image directly on board, and you are able to do it in very fast manner. You are doing really fast swipes and illuminating with a red or blue laser the targeted object. And, while you are able to do it very fast from 10 milliseconds, you are able to capture really highly dynamic objects and reconstruct the whole scene directly on board the device. Yeah, that’s amazing. The data you are streaming then for free division guided robotics are computized and you are able to immediately use them.
[00:22:05.170]
That’s why we are enabling the real-time robotics. We are enabling truly robots to see and understand the environment they’re acting in.
[00:22:15.680] – Josh Eastburn
What are some examples or what are some applications that you’re targeting with that technology?
[00:22:20.550] – Frantisek Takac
That’s a great question. When it comes to the applications, it’s definitely pick-and-place applications and then picking. See the perspective for multiple or viewpoint acquisition, flat parts on the part like A and B pillars stacked to each other in the automotive industry. So automotive industry is definitely a leader for us when it comes to bin picking. We have hundreds and hundreds of installations, especially in the body shop, but also in the other corners of the automotive and tier one and tier two manufacturing floors. And outside the automotive industry is definitely logistics industry. We are leading the logistics industry with a custom-made solution for parcel picking, de-paletization, mixed-paletization, truck and loading. These solutions are enabled by a true real-time free division.
[00:23:08.380] – Josh Eastburn
Yeah, because everybody’s looking for more flexibility and throughput.
[00:23:11.050]
I loved the example that you finished your talk with of the robotic arm picking and placing parts inside of a moving vehicle. So explain how that comes together using vision-guided robotics.
[00:23:23.610] – Frantisek Takac
Okay. Previously, when you were trying to do something like dynamic assembly on an automotive shop floor, especially when we're talking about final trim assembly, the car, the targeted object, is constantly moving on a conveyor. You usually use operators, who are mounting the cockpit, mounting the wheels, doing all of this fine assembly on the fly. The sensors previously used for that application were multiple sensors: you use 2D, you use 3D, you use tracking, you use conveyor tracking modules, encoders, PLCs. It's a lot of technology and effort in order to do something that should be really simple.
[00:24:08.140] – Josh Eastburn
Now, for humans, we can look at things and we can go, There it is. Exactly. But for a machine, you need so many different pieces to make sure that things are in place and aligned. Okay.
[00:24:15.790] – Frantisek Takac
With MotionCam technology, we are actually enabling real-time tracking. Because the camera can see and track the object, we are able to assemble those components on the fly. The robot can move, the target objects can move, and we are able to track the objects and do the gluing, screwing, assembly, you name it. By truly enabling the robot to see in movement, in color, and really fast, this is a breakthrough in the 3D vision industry, definitely.
6. Machine Vision Lighting with Steve Kinney (Smart Vision Lights)
[00:24:45.410] – Josh Eastburn
If it seems like I’m leaning a little heavily on the vision algorithms side of the show, a conversation with Steve Kinney of Smart Vision Lights brought us back to that theme of the role of machine vision fundamentals in this rapid rapidly evolving market.
[00:25:01.800] – Steve Kinney (Smart Vision Lights)
Smart Vision Lights is a privately held lighting company. We do LED lighting for machine vision; this is our focus area. Certainly, in North America, we're the leading LED lighting supplier, and we're involved in most every machine vision application. We have lighting for everything, from our miniature lights to our ring lights to our larger bar lights. Machine vision now is venturing out of the traditional little bread box in line with a conveyor somewhere and out into the real world. So agricultural and other applications are what we see as emerging technologies. We're able to serve the majority of the market with standard products, but where we're in new areas, people are coming to us, and we have full custom light design available. The little round light you see there is over a million lux, for the paper industry, for example. It's IP69K rated to be in paper mills, where it's this pulp, wash-down, super-hot environment. In fact, it's hot and muggy in there. They need really bright lights to look down these 40-foot rolls of paper. Machine vision gets into a lot of places nowadays, and it's not always what I'll call ideal.
[00:26:14.460] – Alex Sullivan
Yeah, of course. You mentioned agriculture as a growth area. What sort of things are you doing in the agriculture space?
[00:26:18.140] – Steve Kinney
On the agricultural side, you see a variety of new things. We work with companies that are doing chemical-less farming and such. You have FarmWise and those companies that are doing automated weeding with no weed killers or anything. They're dragging a rig behind the tractor, I forget how big it is, 20 feet or something, with several installations, and they have these garden-weasel-looking things. As they go forward at five-plus miles an hour, they're taking vision, identifying the crop, looking in between, and literally zigzagging these weeders in between. So they cross them in front, pass the plant, cross them in back, and crisscross, and they're able to do weeding with no chemicals. This is really critical, both for reducing labor, these are menial tasks that take farm labor, which is a big topic nowadays, but also for reducing the exposure of the labor to the chemicals, and ultimately the end users. We like them because not only is that great new machine vision, but they're also blue and green solutions for the Earth.
[00:27:29.470]
People, if you’re doing machine vision, I think a lot of my history in the early years was with the camera companies, JI, Pulnix, Basler. And then I’ve been doing lighting now for 10 years. So I consider myself an imaging specialist at this point. But I used to tell people from the camera side that the camera is always off to an afterthought. They designed the whole thing, they design their whole work cell, and then go, Oh, yeah, we’re going to put a camera in here. Someone go buy a camera and slap down. What I found when I got on the lighting side is it’s wrong. It’s the lighting that’s always the last thing. They get the camera in there and go, I have to fit a light. Then they come to me. Lighting is much about the geometry, what light we put, where the light is coming from, getting the right result on the product. They come to me and say, I have this little space up way back over here to put a light. Can you get something down in there?
[00:28:18.910]
If you want training, we offer in-house training, and I'm also an instructor at the A3 conferences. You can do Certified Vision Professional training, taking those courses with me and a number of other instructors who are experts in the industry.
[00:28:33.350]
And I have one exercise where I show you a Euro coin, but I've got it lit six different ways, and it looks totally different. The thing I try to impress on them is: oh, you think it's good enough? I'm hearing from some of these less experienced AI software inspection companies, maybe without such a history in imaging, oh, we just teach it; if the lighting changes, we teach the change in the lighting. And that's something that's been out for a while now. I look at this and say, here's a Euro, lit six different ways. Depending on what you're doing, the answer changes. I often ask my classes, which of these is the best image? They'll venture some answers, and I'll say, none of you had enough information to answer, because I didn't tell you what the inspection was. If I'm doing gaging, I want that backlight, and just that black circle is exactly what I want to see, because that's what I need to accurately measure those edges. Any of these front-lit ones, you can't have, because there's light bouncing off the edges, light bends, and it affects your measurement. And then there are those choices, of course, between dark field and bright field, depending on whether I want to see the coin itself or the surface of the coin. There's no one right answer, and no "I'm just going to teach it." The information is not there for the system if you don't light it.
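As a small illustration of Steve's gaging example, here is a toy sketch showing why a backlit silhouette makes measurement straightforward: threshold the dark circle and count pixels across it. The synthetic image and calibration are placeholders, not a real inspection setup.

```python
# Toy illustration of why backlighting helps gaging: with a backlit silhouette,
# measurement reduces to thresholding a clean dark-on-bright blob and counting
# pixels across it. The synthetic image below stands in for a real capture.

import numpy as np

# Synthetic 200x200 backlit image: bright background, dark circular "coin".
h, w, radius = 200, 200, 60
yy, xx = np.mgrid[0:h, 0:w]
image = np.full((h, w), 240, dtype=np.uint8)
image[(yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2] = 15

# Threshold: silhouette pixels are far darker than the backlight.
silhouette = image < 128

# Measure the diameter along the row through the centroid, in pixels.
center_row = int(np.round(np.mean(np.nonzero(silhouette)[0])))
diameter_px = int(np.count_nonzero(silhouette[center_row]))
print(diameter_px)  # ~121 px; multiply by a mm-per-pixel calibration for a real gage
```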
[00:29:46.940] – Josh Eastburn
That’s our wrap on Automate 2025. We hope that gives you a sense of what’s happening in the industry right now. But of course, there was way too much to include. So, keep an eye on our social channels and mvpromediaom,.com where we will be releasing more coverage from automate. For now, say hello to summer, be well. For MVPro Media, I’m Josh Eastburn.