Vision Podcast #010 – Integrating AI and Robotics w/ Bright IA & Aether Advanced Tech

Josh talks with a Texan SI partnership bringing cutting-edge AI/ML-based vision to manufacturing automation. Davide Pascucci of Bright IA and Eric Smith of Aether Advanced Technology talk about the role of AI in enhancing traditional defect detection, its limitations, how to spot a phony, and the need for incremental approaches and collaboration in problem-solving.

Episode Sponsor

Tired of overpaying for machine vision components? Meet Airon. They deliver high-end laser triangulation sensors, precision lenses, lighting solutions, industrial computers, and more at a smarter price. As systems integrators with 15 years of experience, they sell only what they trust in their own projects. Stop paying more. Shop at aoi-airon.com. Shipped globally today.

Jump to specific sections of the transcript using the guide below:

1️⃣ Guest Introduction
Eric and Davide introduce themselves, with Eric sharing his background developing vision systems for the Navy and later transitioning to robotics and AI. Davide provides insight into his experience with automation and robotics integration, setting the stage for a deep dive into machine vision and AI.

2️⃣ Memorable Machine Vision Projects & Real World Applications
Davide and Eric start by discussing some of their favorite machine vision projects, including a robotic welding inspection system, and how machine vision is increasingly applied across industries. They highlight the challenges and achievements of integrating vision with robotics.

3️⃣ How Bright IA & Aether Joined Forces
Eric explains how the partnership between Bright IA and Aether began. The two companies merged expertise in custom AI vision software and robotics integration to solve problems off-the-shelf systems couldn’t. Their collaboration enables smarter, tailored automation solutions.

4️⃣ Traditional Vision vs. AI Vision
Eric and Davide dive into the evolution of machine vision, comparing traditional vision systems with AI-powered approaches. They explain how traditional systems, while effective for well-defined defects, often struggle with more complex tasks. AI-powered systems, on the other hand, excel in tasks like image classification and object detection, offering more flexibility and adaptability. Eric discusses the shift from traditional rule-based systems to AI-powered vision using convolutional neural networks (CNNs). Davide emphasizes that a solid camera and lighting setup is still essential; AI enhances these foundations but doesn’t replace them.

5️⃣ Synthetic Data, Domain Gaps & Adaptation
The conversation turns to training AI models with synthetic data. Eric explains how synthetic datasets, once unreliable, now benefit from better domain adaptation and diversity—closing the performance gap with real-world data.

6️⃣ When to Use AI — and When Not To
Davide warns against “AI hype” and urges engineers to match technology to the problem. AI is great for high-mix, low-volume scenarios but overkill for static, repeatable tasks. Both guests stress the importance of ROI, education, and collaborative problem-solving.

7️⃣ Future Outlook: Foundation Models, Hardware & Events
Eric and Davide wrap up by discussing the future of machine vision, with a focus on AI and robotics. They explore how AI will continue to evolve, how hardware improvements will enable more efficient systems, and the exciting potential for these technologies in industrial automation.

Episode Transcript

[00:00:04.720] – Eric Smith, Aether Advanced Technology

“If you’re not able to break your problem into subtasks, you’re not going to get anywhere.”

[00:00:12.840] – Josh Eastburn, Host

Welcome to the MV Pro podcast. If this is your first time listening, this is the place where we talk about the latest in machine vision and image processing, specifically with and for machine vision professionals. Lately, we’ve been trying to broaden the conversation around the use of AI in machine vision by giving a voice to the engineers and researchers who work with it every day on the front lines of industry. Today, we talk with Davide Pascucci, CEO of Bright Intelligent Automation, and Eric Smith, CEO of Aether Advanced Technology, about how they work together to bring cutting-edge vision to robotics and automation. With almost 20 years of experience in the automation industry, Davide began his career in Europe, working for major EPC contractors in oil and gas. His interest in implementation skyrocketed with his arrival in the USA, where his experience expanded into other verticals, including manufacturing, process design, OEM, and water/wastewater. His company, Bright IA, focuses on delivering advanced automation solutions tailored for discrete manufacturing, OEM systems, and process industries, with technical expertise in PLC, HMI, and SCADA development, MES architecture, and industrial networking, among other areas. Bright IA takes a modular, scalable, and vendor-agnostic approach to ensure long-term reliability and performance of the solutions it provides.

[00:01:24.580] – Josh Eastburn

Prior to launching Aether, Eric Smith led development of robotic vision systems at a Toyota-owned R&D group, where he engineered deep learning models using synthetic data, 3D perception, and real-time inference. Eric has a master’s degree in computer science from Southern Methodist University and was a senior AI intern at IBM Watson. Aether Advanced Tech specializes in AI solutions. With expertise in industrial automation, Eric’s team has successfully developed advanced robotic vision systems, object detection models, and real-time AI applications across logistics, manufacturing, and beyond.

[00:01:58.700] – Josh Eastburn, Host

Today’s episode is sponsored by Airon. Tired of overpaying for machine vision components? Meet Airon. They deliver high-end laser triangulation sensors, precision lenses, lighting solutions, industrial computers, and more at a smarter price. As systems integrators with 15 years of experience, they sell only what they trust in their own projects. Stop paying more. Shop at aoi-airon.com. Shipped globally today.

[00:02:23.980] – Josh Eastburn

The first question that I wanted to start with for you, I know you’ve been in automation for over 20 years. I would love to hear about your favorite machine vision projects. Any industry, any location, what has stayed with you over all that time?

[00:02:39.020] – Davide Pascucci, Bright IA

The favorite one, I would say, is one we did last year: inspecting welding tabs with a robot, combining machine vision with a robot arm. It was an in-line application. It was difficult to do, but the outcome was definitely pretty good, especially seeing the robot scanning the different spots, the areas, and turning on the lights. We put blue lights illuminating the area, taking pictures. If the machine vision system detected a failure, the weld wasn’t there, it would go back home and throw an alarm, and the operator could come over and figure it out. If everything was good, it would push the parts downstream, and then a palletizer robot picks them up and does different things downstream. That definitely was the coolest one, I would say. In these last two years, the evolution of these applications has just ramped up really well, to where we were able to see demo applications go from bin picking, which is now extremely popular but at the beginning was, Okay, that’s the new thing, to where now we’re going toward sanding applications with machine vision, painting, welding, being able to quickly teach the paths.

[00:04:10.700] – Davide Pascucci

I’m staying in the robotic realm. That’s where we play and operate, a lot. That’s why I’m mentioning all of this: those are the hot topics, at least for me moving forward, where I want to discover more of what is possible with the aid of vision systems and AI.

[00:04:29.060] – Eric Smith

Yeah, that makes sense. I was at Automate in Detroit this year, and it was great to see how much machine vision was a part of the show. But it also seemed like it was integrated into so many different things. Even the robotics people were talking about how vision and AI algorithms were incorporated into their robotics. In every other little subdomain, too, there was a lot of, I guess, cross-pollination between those technologies. Yeah.

[00:04:56.500] – Davide Pascucci

Yeah, there is, definitely. It’s a crucial part. Of course, if you do it the traditional way, you don’t have eyes, so you’re blind and you have to base your assumptions on position. The more precise you are with your fixtures, jigs, and things like that, the more successful your application is. Now we are switching gears with vision systems. That’s why they match each other so well, they marry for that reason: you have eyes on your application, so it allows for error and adjustment, and your applications become dynamic rather than based on Cartesian position.

[00:05:39.080] – Josh Eastburn

Yeah, that was a big theme that I picked up from the show also. Eric, I wanted to switch to you for a second because I’m interested in how your two companies came together. How did you and Davide get connected? What’s the relationship between Aether and Bright IA right now?

[00:05:53.480] – Eric Smith

Yeah, we met through just old-fashioned networking. A friend of mine who works at a large automation integrator connected us. Bright IA does robotic integration. I was starting up Aether Advanced Tech as sort of a computer vision software solutions company, and it was just the perfect partnership from the start. We went to Bright IA and said, Hey, we can develop customized computer vision solutions for you and your customers. And he loved that idea. I think in the past, he had had some issues with the off-the-shelf solutions, in terms of both hardware and software. And so when it came to creating custom applications, it was the perfect partnership from the start.

[00:06:36.800] – Josh Eastburn

Is there a particular application that you’ve done together that’s your favorite?

[00:06:41.300] – Eric Smith

Yeah, right now we’re actually working on a demo unit for the Automation Stars of Texas expo in December. We’re working on a defect detection application for very small objects. On the Aether side of things, we’re going to be detecting whether or not a small item has some defect. We’re thinking about going with Keurig cups. And on Davide’s side, Bright IA, he’s going to handle the robotic control and motion planning in order to move the high-quality items away from the ones with defects.

[00:07:18.220] – Josh Eastburn

What is it that you enjoy about that particular application? What’s exciting about it?

[00:07:23.340] – Eric Smith

I think it’s just a very difficult problem. Defect detection is a super hot topic in automation right now, and it honestly always has been. But defect detection for very small objects like Keurig cups is a bit unexplored. We’re having to really stretch the possibilities of the different depth cameras that we’re using and the algorithms that we’re exploring, and come up with something that’s going to work quickly enough, efficiently enough, and with a high level of precision and accuracy. I just enjoy that sort of technical challenge. It’s something new.

[00:08:03.020] – Josh Eastburn

So speaking of something new, I’d like to get into the technology a little bit here. You have a long history in AI. Aether certainly markets itself as a specialist in AI. To start from a general perspective, what would you say differentiates the current generation of AI-based machine vision technologies from the previous generation, previous types of vision algorithms?

[00:08:29.180] – Eric Smith

Yeah, it’s absolutely the introduction of machine learning into these systems. Traditional computer vision was a long, arduous process. It was very error-prone. It often required domain experts. And computer vision with machine learning, specifically convolutional neural networks, CNNs, really solves both of those problems. I mean, we were able to generate computer vision systems that learned more quickly and produced results more efficiently at lower cost. Less hand-tuning by electrical engineers doing pattern analysis, and more offloading those tasks to a neural network, a machine learning algorithm.
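
To make the contrast concrete, here is a toy sketch (not from the episode) of the hand-tuned, rule-based style Eric describes, written against OpenCV as an assumed toolkit. Every magic number below is exactly the kind of expert tuning a learned model takes over:

```python
# Traditional rule-based defect finding: an engineer hand-picks every
# parameter. A CNN learns equivalent filters and thresholds from data.
import cv2

def find_defects(gray_image):
    blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)        # hand-chosen kernel size
    _, binary = cv2.threshold(blurred, 127, 255,
                              cv2.THRESH_BINARY_INV)          # hand-chosen threshold
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # blob extraction
    return [c for c in contours if cv2.contourArea(c) > 50]  # hand-chosen size cutoff
```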

[00:09:11.940] – Josh Eastburn

What was the acronym or the term that you threw out there a second ago? Convolutional…?

[00:09:17.360] – Eric Smith

Yeah, convolutional neural networks, which take convolutions, a traditional computer vision topic, and apply them to a machine learning model such that the model actually learns which kernels, which filters, to apply to an image when producing a prediction. Where is an item in this image? Is it a dog or a cat, et cetera?
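
As an illustration of what Eric describes, here is a minimal CNN sketch in PyTorch (our choice of framework; the episode doesn’t name one). The convolution layers hold the learned kernels he mentions:

```python
# A tiny CNN: the Conv2d layers contain the filters the network learns
# during training, replacing the hand-chosen filters of traditional vision.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):  # e.g. "dog" vs. "cat"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 16 learned 3x3 filters
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper filters combine edges into shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
prediction = model(torch.randn(1, 3, 224, 224))  # logits over the classes
```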

[00:09:39.680] – Josh Eastburn

For someone who’s not a specialist in AI, what learning curve should they expect to encounter as they start to experiment with this? If they’re hearing this conversation and going, Okay, I don’t know what he’s talking about, and I want to know what he’s talking about, what should they be prepared for? What are some new concepts they’re going to need to pick up?

[00:10:00.000] – Eric Smith

Yeah, I think just understanding the machine learning process is a good first step. Traditional supervised machine learning, as we refer to it, involves collecting data, annotating that data with the specific labels for your task. Like I said, is this a cat or a dog? And then feeding those annotations into a neural network, a convolution neural network, for example, such that it learns to distinguish between objects over time. And that sort of process is kind of tricky for traditional engineers or even data scientists. It can seem a little bit like a black box at times. I just feed this AI some data and it spits out an output. So the more one can do to understand what’s going on underneath the hood of the AI that you’re using, the better off you’ll be.
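
A hedged sketch of that supervised loop, again in PyTorch, with random tensors standing in for the collected-and-annotated data Eric describes:

```python
# Supervised learning in miniature: collect data, annotate it, and feed it
# to a network repeatedly until the loss (prediction error) goes down.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

images = torch.randn(100, 3, 64, 64)            # stand-in for collected photos
labels = torch.randint(0, 2, (100,))            # stand-in annotations: 0 = cat, 1 = dog
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # simplest possible classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                          # each pass over the data refines the model
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)             # compare predictions to annotations
        loss.backward()                         # learn from the mistakes
        optimizer.step()
```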

[00:10:50.920] – Davide Pascucci

Yeah, that’s where you need to be. But on a higher level, for us to deploy, we need to have those tools pretty much ready to go, because when we go and deploy, it needs to be fast. Of course, preparation is key, and that’s our mantra. Preparation is key, so you test as much as possible. With vision systems, what we tend to do is actually go on site and test the real application, because yes, you have all the mathematics, all the computing power, but there is still a camera. So selecting the right camera, the light or lights, because it depends on where you are. If it’s dark, maybe you need to add lights, like I mentioned, blue lights. That’s the hardware aspect that you need to take into consideration, because if you feed it bad images, then everything is going to go south.

[00:11:48.480] – Josh Eastburn

Eric, I’d like to ask you a follow-up question to what Davide just said, because this is a theme that we’ve been exploring for a while: the line between traditional machine vision technologies and what we’re calling the AI-based approach. Where do you see that line? Let’s say, are there some machine vision fundamentals, traditional approaches, that you feel are still critical to understand and will always be important regardless of the newfangled technology you might be using? For example, I’ve talked to a couple of companies that specialize in synthetic data generation. They’re creating assets that can be used for training. The claims they’re making are of huge improvements, not just in the time that’s saved, but also in the quality. I’ve also gotten some pushback, maybe from competitors, who’ve said, Yeah, but how much can you really do with that compared to what we understand about the laws of physics and the way cameras and lighting work? We’ve had those for such a long time. Do you have any thoughts on that?

[00:12:58.480] – Eric Smith

Yeah. Synthetic data has come a long way, and it certainly has those advantages you spoke about: lower cost, faster production. Traditionally, there was that domain gap between synthetic data and the real world. You would train a model in a simulation, then deploy it at a customer site, and it likely wouldn’t work very well, because things just look different between a video game and real life. But I think we’re getting better at that. Domain adaptation, which is the field of study within synthetic data of bridging that gap between your video game and realism, has come a long way. And we have better tools to improve the diversity and the randomization of the synthetic data that’s produced. We’re closing the performance gap between a model trained on synthetic data and a model trained on real images.
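
One randomization-and-diversity trick behind what Eric describes is domain randomization: aggressively varying color, pose, and blur on synthetic renders so the real world looks like just another variation. A minimal sketch, assuming a torchvision pipeline (our choice; not a tool named in the episode):

```python
# Domain randomization on synthetic renders: each training image gets a
# different random lighting/color/pose/blur, widening the data distribution.
from torchvision import transforms

randomize = transforms.Compose([
    transforms.ColorJitter(brightness=0.5, contrast=0.5, saturation=0.5, hue=0.1),
    transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.8, 1.2)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])
# Applied to each synthetic render (a PIL image) before training:
# tensor = randomize(synthetic_pil_image)
```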

[00:13:55.660] – Josh Eastburn

You specialize in AI. Is this the thing that really requires a specialist? If I’m thinking, I want to apply these technologies to my work, when do I need to call you?

[00:14:10.420] – Eric Smith

Yeah, good question. It depends on where you are in the process, which is what you alluded to. I think the biggest hurdle for a lot of people is just getting the right data to train their models. If, for whatever reason, you’re able to find an academic data set that works for your needs, that’s great. Just going back and forth with ChatGPT and using a Google Colab, you can train a model yourself pretty quickly. But when it comes to more specialized, unique solutions, and that’s what we see a lot of customers have, no one has taken a thousand pictures of their products. That’s where Aether and Bright IA would come in. We can construct synthetic data sets, we can generate real data sets if necessary, and come up with a solution that’s going to get you from 90% accuracy to 99.9%.
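
For the do-it-yourself path Eric mentions, the quickest route is usually fine-tuning a pretrained backbone rather than training from scratch. A minimal sketch, assuming torchvision and a two-class task (the class count is a placeholder):

```python
# Transfer learning: reuse ImageNet features, retrain only the final layer.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                    # freeze the learned features
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for your 2 classes
# ...then run the standard training loop shown earlier on your own images.
```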

[00:15:07.360] – Josh Eastburn, Host

Today’s episode is sponsored by Airon. Tired of overpaying for machine vision components? Meet Airon. They deliver high-end laser triangulation sensors, precision lenses, lighting solutions, industrial computers, and more at a smarter price. As systems integrators with 15 years of experience, they sell only what they trust in their own projects. Stop paying more. Shop at aoi-airon.com. Shipped globally today.

[00:15:32.400] – Josh Eastburn

Okay, interesting. That’s certainly a theme that has come up in conversations I’ve had about where these AI technologies are being applied, in terms of reaching feasibility for defect detection or whatever it is you’re trying to solve with vision. Running with that theme, how does that change the way you approach problem-solving as you sit down with your different clients, in, let’s say, the type or the scope of problems you’re able to solve and the way you think through the front-end design process? How does that look different from the traditional approach, or from what you’ve done in the past?

[00:16:07.880] – Davide Pascucci

Well, yeah, I can take it. Definitely, having the tools in your pocket, in your arsenal, you can extend the conversation. You switch gears to what is possible. Before, going back to the traditional way, or even using traditional camera systems, you can only do so much, and you can’t really reach the accuracy that you need, especially if you have variation. The word, for me, is dynamic. We are able right now, and moving forward, to have dynamic applications, which means they will adapt to what the process is doing. That’s the key, basically. You can have those types of conversations. However, the boundaries are like, okay, you want to keep it traditional, and it has a cost. If you go dynamic, now it’s got a much higher cost. But moving forward, not maybe, surely, we will balance this cost. That’s what I see moving forward: we are going to move to more standardization of these software applications, maybe using the same camera hardware.

[00:17:32.620] – Josh Eastburn

Okay. Why are clients asking for this?

[00:17:36.680] – Davide Pascucci

Well, I mean, before you have this conversation, you really need to first figure out the application. You don’t want to just throw it in the air like, Hey, I got this AI tool, this is super cool. The thing is, be honest, and don’t overkill the application. If it’s the same product all day long, where nothing changes with my input upstream, I’m good; I can really dial it in and have great success. Now, if you tell me, okay, high mix, low volumes, that’s where it is perfect. Today I went to visit a place like that. High mix, low volume. How are you going to feed it? Yeah, a vision system could be great there. That’s where you can start the conversation. Or the compromise: they say, Hey, I need to increase my productivity, but I don’t have enough numbers. At that point, what I’ve seen is compromising on some things. If they want to keep the cost down, we say, Okay, we’re going to take care of this one because it’s the most sold part or material, and we leave the others off. Then later on, we can come back and maybe add some vision system.

[00:19:01.260] – Josh Eastburn

What I’m hearing is not that there’s a push necessarily from your clients, but that you, as the engineer, are looking at the customer’s request, you’re looking at the application, and this is now expanding your toolbox. You’re saying, Okay, with this set of technologies, the traditional stuff, we can handle categories A, B, and C, but where we start to get into, as you mentioned, high-mix, low-volume types of applications, you can look at it and say, Okay, now I know this is the right technology to bring to that type of application. Am I understanding you right?

[00:19:35.260] – Davide Pascucci

Yeah, that’s perfectly right. You also need to understand what level of knowledge the customer has. Some are at ground zero, and I’m not blaming them. The first step for us is to educate them. People come to them saying, Hey, with this and that, you can do it. Most of the time, it’s not true. Some are, technically speaking, really good; some are not, because they just hear, Okay, there’s AI, there is this, there is that. So you’re saying, Okay, yes, let’s navigate together and take the right path to handle this situation. That’s what needs to happen at this level, too. Then: what are your capabilities, your budget, and things like that? We can do this, we can do that. At this point, like I said, the cost changes. Of course, you have more software capabilities, and if it goes from a static to a dynamic application, as I like to put it, of course it changes. But that’s where we’re at, the state of manufacturing as we see it with our own eyes. When we go and visit, some are more prepared, some are not, but they want to have this discussion, they want to know, so there is a discovery phase now. Before, it was the same with robotics: cobots, robots, they’re still puzzling over the difference, so you have to guide them. Now we are adding AI on top of this, and that is a big deal across the board.

[00:21:10.560] – Josh Eastburn

You brought up an interesting concept there that I’d like to go a little further into: the idea of false claims around AI. Obviously, we’re not naming names here, but I’d like to debunk this a little for our listeners as they’re trying to sort this out in their heads and understand what the limits of the technology are. Eric, you also mentioned domain adaptation, and I feel like there’s an aspect of that that applies here also, which is understanding what we can do generally and what requires specific adaptation. So, yeah, if you could comment on either of those two ideas.

[00:21:50.260] – Eric Smith

Yeah, sure. I think the biggest misunderstanding when it comes to AI is the word generalization. AI currently doesn’t see like a human does. Josh, you and I have very complex vision systems behind our eyes. AI in its current state is a lot better at specialized tasks. So it’s not as easy as just pointing a camera at an object, plugging in an AI, and having it spit out a solution. I think it’s helpful for customers to break problems into subtasks and really try to understand: what am I asking the computer to do? Is it object detection? Is it segmentation? Is it classification? Kickstart with the smaller problems and work your way up, chaining solutions downstream.
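
In code, Eric’s decomposition might look like the sketch below: one model per subtask, chained together. The detector and classifier here are placeholders for whatever you train; the point is the structure, not the specific models:

```python
# Breaking an inspection problem into subtasks and chaining them:
# subtask 1 (detection) finds candidate regions, subtask 2 (classification)
# judges each crop. A robot can then act on the labels downstream.
def inspect(image, detector, classifier):
    results = []
    for box in detector(image):      # subtask 1: where are the parts?
        crop = image.crop(box)       # isolate one part (PIL-style crop assumed)
        label = classifier(crop)     # subtask 2: good part or defect?
        results.append((box, label))
    return results                   # downstream: robot sorts by label
```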

[00:22:43.420] – Davide Pascucci

Yeah, because you always start with the basic tools, right? You have segmentation, OCR, the ones you guys mentioned already. Then from there: how can I attach this model, this computing power, to get more out of those basic tools that everybody in the industry knows? You go into whatever software you use, you select segmentation, and then there are rules, and then you start collecting those images. But now we have the next level. Based on that, we can expand the discussion to utilizing AI.

[00:23:22.260] – Josh Eastburn

I like how you keep revisiting this theme of an incremental approach, of introducing this technology as the problem space grows. Do you feel like there’s a need to warn somebody: if you hear someone come in making this kind of claim, walk away? What would you say? Is there anything like that? Is it at that level, from what you’ve seen?

[00:23:49.340] – Eric Smith

Yeah, I would say anyone walking in thinking that AI is going to solve all of their problems, that’s probably the biggest red flag. Like I said, if you’re not able to break your problem into subtasks, you’re not going to get anywhere. It’s much easier to identify one, two, or three things that you need a computer to learn how to do. Perhaps try that yourself with your in-house developer, see how far you get with traditional computer vision or off-the-shelf AI solutions, and then come to us and say, Okay, we’ve tried these things. We’re trying to solve this specific problem. Let’s work on a unique solution that hasn’t been done before.

[00:24:34.400] – Josh Eastburn

Do you want to add anything to that, Davide?

[00:24:37.660] – Davide Pascucci

Well, I completely agree. Going back to the beginning: first, listen to the problems when you go and try to solve a customer’s problem. Listen, and then, as preparation, okay, you’re thinking we can work together? Let’s run some tests. Let’s see what we can do. Bring some equipment on site; as far as vision systems go, it’s not invasive and it can be done quickly. A simpler, more collaborative approach, that’s what I like to have with customers. Sometimes they can do things in-house themselves, and I like it. I’m not the guy who wants to take over everything. Let’s collaborate. They have their skill set most of the time, we have ours, and we can merge and build a team for this particular project and work through it. But again: preparation, testing, and seeing all the different angles we can approach from. That’s, I think, the best approach. Coming in saying, I’m going to solve everything, I don’t think that’s the best approach. That’s trying to push through a sale and then, probably, failing miserably. That’s probably what the outcome is going to be. Be cautious, because it’s also a new technology.

[00:25:54.520] – Davide Pascucci

If it’s something like, let’s say, you pick a major PLC brand, for example, we’ve been using that for 20 years. I go there and I can say that because we have the experience. I know there’s nothing new on the horizon. This one is new; it can get tricky, it can get complex really quickly. We don’t know all the ins and outs, so we need to be cautious. Let’s try. In fact, trial-and-error testing, for me, is fundamental before we go and actually do it.

[00:26:23.980] – Josh Eastburn

Which sounds like fundamental engineering professionalism, which I always love to hear.

[00:26:29.320] – Davide Pascucci

That doesn’t change. You need to have the same approach. New technology comes in, you need to learn what is new; you don’t get stuck in the past. But at the same time, the engineering process is not going to change. It can be improved, of course, but the basics, like you said, are always like that. If you skip them, then you’re probably screwed.

[00:26:50.260] – Josh Eastburn

Yeah, you’re going to have problems.

[00:26:52.480] – Davide Pascucci

From my experience.

[00:26:55.420] – Josh Eastburn

As you two look down the road with where this technology is going, what are a couple of things that you’re excited about that we might see in the next five years as this continues to develop?

[00:27:08.180] – Eric Smith

Yeah, I think on the AI side of things, foundation models are getting a lot better with the introduction of things like vision transformers and Segment Anything. That’s a fancy way of saying the off-the-shelf solutions are going to get you further and further over time before you need to go the customization route. And that makes my job easier, right? On the synthetic data side of things, because that’s my favorite project, I think it’s going to get a lot better at scale. Instead of generating these synthetic images on a beefy computer under my desk, we’re going to be producing this stuff in the cloud, and there will be a lot of platforms for creating and constructing those data sets offline. And finally, I just mentioned the beefy computer under my desk: I’ll always be excited about bigger and faster GPUs. They are always going to make my job easier. They’re going to make deployment a lot faster and easier. Those improvements will also apply to things like the cameras that we use. Having AI integrated on your depth camera is going to get you a lot better results over time. On the hardware side of things, with improvements in GPUs and improvements to the depth quality of the cameras we’re using, it’s just going to get cheaper and faster.
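
To give a flavor of the foundation-model direction Eric mentions, here is a sketch using Meta’s Segment Anything via its public segment-anything package. The checkpoint file is downloaded separately, and the image and click coordinates are illustrative placeholders:

```python
# Segment Anything segments novel objects with no task-specific training:
# give it one foreground click and it proposes masks for the object.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")  # downloaded weights
predictor = SamPredictor(sam)

image_rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder for a real HxWx3 RGB frame
predictor.set_image(image_rgb)
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # one click on the object of interest
    point_labels=np.array([1]),           # 1 = foreground point
)
```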

[00:28:27.820] – Josh Eastburn

How about on the business front? You mentioned you have an event coming up at the end of the year that you two are working on. Tell us more about that.

[00:28:35.960] – Davide Pascucci

Yeah, this is an event that I organized with a couple of partners last year. It’s called Automation Stars of Texas. We saw a need to serve the area we’re in, DFW and Texas in general. We have basically all the industries over here: of course, the most famous, automotive, ag, food and beverage, you name it. Fabrication is big. Texas is big, actually, that’s what they say, but it’s big in manufacturing. You have all the different players and niches. I saw that there was nothing for robotics and automation to involve the local players, let’s say distributors and OEMs and guys who work for big OEMs. We put it together last year real quick, and it came out pretty good. But this year we wanted to do something stronger, better, to look more like the big shows, but to keep the cost low for exhibitors. It’s mostly to get together, to show the end users here locally that they don’t necessarily have to go up North to find a solution, to give them the opportunity to meet the local guys who, of course, have the capability and are close by.

[00:29:57.200] – Davide Pascucci

Then you have access to the same technologies, and we also have the experience. That’s the idea behind it: to create something here in Texas and have an event for automation and robotics. It’s going to be December 9th and 10th, just south of the airport, so very accessible if you come from, let’s say, California, right? It’s at the Marriott South DFW Hotel.

[00:30:24.080] – Josh Eastburn, Host

So Dallas. It sounds like networking is going to be a big part of the event. Eric, you mentioned you’re working on some special demos for the event. Yes. Any other highlights that people might look forward to, especially in the vision area?

[00:30:39.660] – Eric Smith

Yeah, I think people are mostly going to want to see how well the system performs with the default hardware that we’re using, and we’re not going super fancy. We’re not using a $10,000 depth camera. We’re using things that you might buy on Amazon, at least on the vision side of things. I’m hoping to impress the audience with just how far we can get with cheap, reliable tools. That way they know that they’re not going to break the bank if they come to us.

[00:31:11.580] – Josh Eastburn

Where can people learn more about you and about this event? Is there an event website that they should look out for at some point?

[00:31:20.260] – Davide Pascucci

Yes, there is. I can share the link with you, and right there they can find all the information they need. We also have a LinkedIn page for the event that we can share later. You can sign up right there, and it’s free.

[00:31:36.700] – Josh Eastburn

Cool. So send me those links. Is there anything else on this that you guys were excited to talk about?

[00:31:41.500] – Davide Pascucci

I think we had a pretty good conversation. I like it.

[00:31:44.940] – Eric Smith

That was great. Okay, that was perfect.

[00:31:48.380] – Josh Eastburn, Host

Thank you so much to Davide and Eric for coming on the show. For links to their sites and to the upcoming Automation Stars of Texas event, take a look at the show notes. If you are an active integrator in the machine vision space and have a unique perspective to share with the industry, please reach out to me on LinkedIn or via email at josh.eastburn@mvpromedia.com. Until then, I’m Josh Eastburn for MVPro Media.
