Vision Podcast #11 – Designing Real Time Machine Vision w/ Ed Goffin, Pleora

In this latest podcast episode, Josh chats with Ed Goffin from Pleora about the company's 25-year history in real-time video applications. Ed shares his perspective on the state of the market and forthcoming innovations, including the GigE Vision 3.0 standard and RoCE v2.

About our guest

Ed Goffin is VP of Product Marketing at Pleora Technologies, where he leads global strategy for real-time video and data connectivity solutions. With over 25 years in telecom and machine vision, Ed brings deep expertise in simplifying complex technologies for automation, medical, and defense applications. He also serves on the A3 Vision & Imaging Technology Strategy Board and is a frequent speaker and author on real-time vision and industry standards.

Podcast Chapters

Access the different sections of the transcript here:

1) Introduction – Ed Goffin & Pleora’s Story
Josh introduces Ed and Pleora, setting up the conversation around real-time vision, industry leadership, and upcoming innovations.

2) Introduction to the A3 – Association for Advancing Automation
Ed explains his role on A3’s Vision and Imaging Board, how the association supports companies across robotics, AI, and vision, and why industry collaboration is key to advancing automation.

3) Global Trade & Industry Uncertainty
Ed discusses how global tariffs and economic uncertainty have affected decision-making across the machine vision industry, and how Pleora continues to adapt and expand internationally.

4) Evolution of Pleora & Machine Vision Standards
Ed reflects on Pleora’s 25-year history, the company’s founding during the early 2000s tech downturn, and how standardization helped grow the machine vision ecosystem.

5) Technology & Future Roadmap
Deep dive into GigE Vision 3.0, RoCE v2, Thunderbolt integration, and multi-sensor networking. Ed explains how these technologies will shape Pleora’s next-generation products leading into 2026.

6) Looking Ahead – Events, Demos & Where to Connect
Ed shares where Pleora will appear at upcoming events, including defense and vision trade shows, and how listeners can learn more or connect online.

    Episode Transcript

    [00:00:00.080] – Ed Goffin

    One of the things that we consistently get asked for from our customers is solving that processing side. Because as they increase the bandwidth of the imaging device, they’re overwhelming processing.

    [00:00:11.520] – Josh Eastburn, Host

    Welcome to the MV Pro podcast. That was the voice of Ed Goffin. Ed joined Pleora in 2013 and is responsible for product and marketing strategies. Over his 25-year career, he has held corporate communications, marketing and investor relations roles in the telecom and machine vision industries, and has been widely featured as an event speaker and author on topics ranging from packet switching and network timing to real-time vision solutions for automation. Ed is an elected member of the Association for Advancing Automation’s Vision and Imaging Technology Strategy Board, providing strategic direction and industry leadership for the association’s 1,300+ members in the robotics, AI and automation industries. Pleora Technologies is a global leader in real-time video and data connectivity for industrial automation, medical imaging, and security and defense applications. With over 25 years of expertise, Pleora delivers solutions that simplify design, accelerate time to market and reduce costs to enable the development of advanced imaging systems worldwide. In today’s interview, Ed and I talk about Pleora’s history up to the latest industry standards and innovations that will be rolling out into 2026, with a focus on designing for real-time applications. Let’s get started.

    [00:01:29.660] – Josh

    So, Ed, I usually like to start these interviews with kind of a personal question, which is, what’s your favorite machine vision application that you’ve been part of?

    [00:01:41.900] – Ed Goffin

    That’s a great question, because I think, I mean, they’re really interesting. I’ve been in the industry for 12 years, and it’s amazing to see how much it’s changed in those 12 years, which shouldn’t be that astounding. I mean, you’re talking about over a decade, right? My favorite application that we’re designed into, I think, just because it’s really novel, is around local situational awareness in military ground vehicles. So it’s a networking application where we’re integrating all the different video feeds from cameras and sensors on the vehicle, so that the crew within the vehicle get a sense of what’s around them on a single display, and they can switch between different cameras or sensors. It’s just a really novel application. It’s a fun application to introduce people to if they don’t know what machine vision is; it’s a fairly tangible application where they understand the value for the end user. You’re driving a vehicle and you need to know what is around you in a military situation.

    [00:02:53.600] – Josh

    If you’re talking to somebody at a party, how do you describe what you do?

    [00:02:57.280] – Ed Goffin

    Yeah, and it depends on what the party is, right? The example that I use is: what does Pleora do? We connect cameras to processors, which for some people means something, and for a lot of people doesn’t mean anything. But in our world, I can say, well, have you ever had an x-ray? And almost everyone’s had an x-ray, right? Whether it’s medical or dental, I can say, well, our products are involved in that process. We’re designed into the majority of x-ray panels that are deployed around the world, and we connect that x-ray to the processor so that the physician or dentist or whoever it is on the other end can see the x-rays. That’s one of the applications we’re designed into. And it has to be real time and it can’t drop fragments of the image. But people then can sort of understand, in a tangible way, what we do.

    [00:03:53.260] – Josh

    And how is it that you ended up working in Machine Vision? What brought you to that industry?

    [00:03:58.140] – Ed Goffin

    Slightly by accident. So, you know, I’ve been, I guess, in high tech for 25 years. I started at Nortel Networks; I’m based in Ottawa, Canada, and if you’re of a certain age, most of us worked at Nortel in some way. It was a huge employer in Ottawa. And then gradually I went and worked in different telecom companies, smaller companies, larger companies. Pleora came up on my radar just as I knew a couple of people that were here. It looked like interesting technology. I’m not an engineer by background, I’m more on the communication side, but I’ve always enjoyed hardware and software and how you solve problems using technology. A novel company, a really good reputation with the people that I knew here, and I sort of worked my way in the door, I guess, 12 years ago.

    [00:04:53.890] – Josh

    And after all that time, you now sit on A3’s vision and imaging board, is that right?

    [00:05:00.530] – Ed Goffin

    Yeah, so that’s a new position for me. I was elected onto the board last year. It’s been a really interesting experience for me because you get a really broad view of the machine vision industry. Within the A3 boards, I’m on the imaging technology board. There’s us that do interfaces, there’s companies that build cameras, companies involved in lighting, companies involved in the sensor itself, companies involved in developing applications for the machine vision sector. So you get a really interesting opportunity to work with and meet people across the whole industry.

    [00:05:48.010] – Josh

    And I imagine a lot of engineers who are in machine vision don’t necessarily know what the function of A3 is, maybe besides organizing the events that they go to. What’s your role as a board member?

    [00:06:00.360] – Ed Goffin

    Yeah, so it’s a really broad-based organization with a number of different strategy focus boards. Off the top of my head, there’s imaging, there’s robotics, there’s AI. I feel like I’m missing a fourth one there. And really we’re there to help the member organizations of A3, to represent them to the board, making sure that, like you mentioned with events, A3 is holding events that represent the cross-section of different companies that you have within A3. Because again, there you have system integrators, you have application developers, you have large companies, small companies. So as a board, we want to make sure that the programs A3 is running represent and add value for all of those members.

    [00:06:51.280] – Josh

    There’s a question that we’ve been talking about a little bit on the show here, and I wonder if you might have a take on it. If not, that’s totally fine too. But early in the year, we heard some predictions on the show about how global tariffs would impact the industries that drive a lot of machine vision applications around the world. And as we head into, you know, almost into Q4 here, I wonder, can you say anything about the real impacts that we’ve seen in the industry so far this year?

    [00:07:17.340] – Ed Goffin

    I think from my perspective, the impact has been more that it’s created a lot of indecision and slowed down a lot of decision making. In our case, it hasn’t created a financial impact. We’re based in Canada, and our products are still crossing the border to our American customers. There hasn’t been an issue there, but I think it has just created a lot of indecision, which has slowed down decision making. We just see that there’s a lot of projects going ahead, but they’re moving ahead a little bit slower than they typically would, just because people are unsure about what is going to happen next. That’s tariffs and the economy. At the same time, I think we’ve seen sort of a regional focus. Europe’s been very active in recognizing that maybe some development has to happen in Europe versus development happening in the US. Same in Canada. There’s been a lot of initiatives in the last couple of months from our government around looking at ways to diversify trade. In our example, there are countries where we have programs, like looking at export to Turkey through Canadian government programs. So it’s been interesting.

    [00:08:38.120] – Ed Goffin

    I mean, the main thing I think has been indecision. It’s just slowed down decision making, which is unfortunate because it felt like we went through COVID and we went through supply chain issues and it looked like we were just sort of getting to the end of all of that. And now we’ve been sort of thrown back into this pause, almost.

    [00:09:01.530] – Josh

    Another reason to tap the brakes. Interesting. So you mentioned Pleora maybe hasn’t felt economic effects as much. You feel like you’re moving full steam ahead. And you’re also celebrating the 25th anniversary of the company, right? This year?

    [00:09:17.070] – Ed Goffin

    Yeah. So it’s been an exciting year from that perspective. Like we’ve done a lot of internal events celebrating 25 years.

    [00:09:25.710] – Josh

    Good excuse to celebrate.

    [00:09:27.150] – Ed Goffin

    Yeah. Yeah.

    [00:09:28.110] – Josh

    So I imagine also this isn’t the first time that the company has witnessed an economic shift around the industry. I’m thinking of, you know, what was happening back in 2000 in tech. So maybe we could start there a little bit and just talk about how the industry has evolved since that time. What did the vision industry look like in 2000?

    [00:09:45.870] – Ed Goffin

    Yeah, so I mean, I’ve been here for half of that time, but I’ve gotten insight into the history of the company from some of the people I know and just from being here. If you go back to when the company was started, it was really around the idea of delivering real-time video over Ethernet, which now doesn’t sound groundbreaking, but at the time it was quite novel. And at the time there was a lot of fragmentation within machine vision. A lot of the work that’s now out in the field and deployed, the things that we see on a common basis, like intelligent traffic monitoring and license plate reading, all of that stuff was still really in a lab, right? Or it was out there, but in very fragmented ways, and there wasn’t a lot of cooperation or integration between vendors. So really, when Pleora was founded by George Chamberlain, who’s still involved in the business, and his partner Alain Rivard, who is also still involved in a consulting role, they had the idea of delivering real-time video over Ethernet. You could extend cabling, you could use different types of processing for video, and Pleora, along with a number of other players at the time, then also started to work towards standardization.

    [00:11:11.690] – Ed Goffin

    So making it possible for vendors to integrate their equipment together to create full solutions around applications. Economically, you know, I knew people coming to the company in 2000, and for Ottawa as a technology hub, it was a pretty bleak time. We’d had a huge ramp-up with Y2K, which for anyone involved in telecom at the time, and especially if you were on the networking side, meant two really good years of getting ready for it. A lot of activity. And then 2000 hit and all of a sudden there was a real, definite slowdown, especially in tech. And, you know, Pleora’s founders at the time had the guts, the gumption, to go and start a new company. I do remember questioning people that I knew coming to this company at the time: really, you’re going to go to a startup right now? And then oddly, 10 years later, walking into the company and realizing, oh, this is Pleora, the company that was created back in 2000.

    [00:12:35.920] – Josh

    And so what would you say, given that auspicious start, what are some of the key developments since then? Maybe in the machine vision industry, or just that Pleora has been involved in? How have you seen things change since then?

    [00:12:50.030] – Ed Goffin

    I think from a wider industry perspective, in those early years, the standardization of machine vision played a big role in the ability for vendors to work together, and then for system integrators to create multi-vendor systems that integrate and work together. It also fostered a lot of innovation and partnerships in the industry. We’re still very actively involved in the standards organizations. And I know from other industries I’ve worked in that standards organizations can still be competitive; you want to be really careful about what you share with your competitors. In the vision industry, there’s a lot more cooperation between companies. We still compete, but we recognize that if we can work together in areas, we can advance the industry and all benefit, which is a bit unique. And in the early days of Pleora, and I’ll count those early 2000s as the earlier days of machine vision, even though machine vision obviously predates the creation of Pleora, standardization played a large role in really moving vision technologies from labs and research benches out into the field. And then, from 2010 to 2020, I’ll say, there was a real rapid adoption of machine vision in industries beyond industrial automation.

    [00:14:29.790] – Ed Goffin

    Like a lot of the focus for companies like Pleora was originally industrial automation, but then there’s a lot of applications now outside of industrial automation that have adopted vision in medical imaging and security and defense and transportation…

    [00:14:45.290] – Josh

    You mentioned standards work that Pleora has done, and I know you’ve been involved in the development of the new GigE Vision 3.0 standard. Is that right?

    [00:14:55.290] – Ed Goffin

    Yeah. So we have representatives on the standards committees that are working now. They worked on the original GigE Vision standard and then the latest incarnations of that standard. And now we’re working again with companies who would traditionally be competitors or peers on the creation of GigE Vision 3.0. That standard really pushes the bandwidth envelope for GigE Vision, where we can start to hit 25 gig and up to 400 gig, using different technologies like RoCE v2, RDMA over Converged Ethernet, which solves some of the processing issues that can hinder GigE Vision past 10 gig. That’s a key part of GigE Vision 3.0.

    [00:15:46.930] – Josh

    Maybe expand on that a little bit. What’s the key driver behind the development of that standard?

    [00:15:52.050] – Ed Goffin

    The key driver is really that the bottleneck for GigE Vision is on the processing side. You can increase the bandwidth on the camera side, but your processing still has to receive and consume those images. With GigE Vision, once you get past that 10 Gbps point, you can overwhelm your processing, and it impacts performance in terms of latency or dropped packets. What GigE Vision 3.0 does using RoCE v2 is solve that processing issue by bypassing the CPU and going direct to memory. You can save your CPU on the processing side for your machine vision application and receive the images directly into memory. There’s way more technical complexity to it; I’m giving a very high-level overview of what RoCE is doing, but it’s a technology that actually comes from data centers. It’s how companies in the data center space move vast amounts of information from server to server and rack to rack. We’re adopting some of that technology to solve some of the processing issues in machine vision.
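Ed's point about the receive-side bottleneck can be made concrete with some back-of-envelope arithmetic. The sketch below is illustrative only; the payload size and link rates are assumptions, not Pleora or GigE Vision figures. It shows how packet rate and per-byte copy work scale with link speed when every packet passes through a conventional socket path, which is the overhead that RoCE v2's direct-to-memory delivery avoids:

```python
# Rough illustration of the receive-side load on a saturated GigE Vision
# link when payloads are delivered through a conventional socket stack.
# All numbers are illustrative assumptions, not vendor specifications.

def receive_side_load(link_gbps: float, payload_bytes: int) -> dict:
    """Estimate packet rate and CPU copy bandwidth for a saturated link."""
    bits_per_second = link_gbps * 1e9
    packets_per_second = bits_per_second / (payload_bytes * 8)
    # A conventional socket path copies each payload at least once on its
    # way to user space; RDMA (RoCE v2) writes straight into application
    # memory, so this copy bandwidth effectively drops to zero.
    copy_gb_per_second = bits_per_second / 8 / 1e9
    return {
        "packets_per_second": round(packets_per_second),
        "cpu_copy_GB_per_s": round(copy_gb_per_second, 2),
    }

# Jumbo frames (9000-byte payload) at 1, 10, and 25 Gbps:
for gbps in (1, 10, 25):
    print(gbps, "Gbps:", receive_side_load(gbps, payload_bytes=9000))
```

Under these assumptions, at 25 Gbps the host is fielding roughly 350,000 packets per second and copying about 3 GB/s through the CPU before the application sees a single pixel, which is why placing image data directly into memory matters.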

    [00:17:09.890] – Josh

    That’s pretty fascinating. I was doing a little bit of my own reading before we talked; this was not a technology that I had come across before. It sounds like high-performance compute as well as AI/ML-type applications take advantage of this. So where did the idea come from to apply that to vision?

    [00:17:29.420] – Ed Goffin

    As applications become more complex, there’s more demand for more bandwidth, right? There’s a ton of one gig deployments still out there and they’ll suffice. And then you can do two and a half, five and 10 gig. But as you get into applications where you can’t compress video or you need tremendous amounts of video to make real-time automated decisions, you need to start looking at beyond 10 gig.
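The bandwidth math behind that jump past 10 gig is easy to work through. The sensor resolutions, bit depths, and frame rates below are hypothetical examples, not specific products, but they show how quickly uncompressed video outgrows a 10 GigE link:

```python
# Raw (uncompressed) bandwidth of a video stream, in Gbps.
# The camera specs used below are hypothetical examples.

def stream_gbps(width: int, height: int, bits_per_pixel: int, fps: float) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * bits_per_pixel * fps / 1e9

# A 1.3-megapixel 8-bit camera at 60 fps fits comfortably on 1 GigE:
print(round(stream_gbps(1280, 1024, 8, 60), 2), "Gbps")   # well under 1 Gbps
# A 26-megapixel 12-bit camera at 50 fps needs roughly 15.7 Gbps,
# past what a 10 GigE link can carry:
print(round(stream_gbps(5120, 5120, 12, 50), 2), "Gbps")
```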

    [00:17:59.220] – Josh

    Are there particular industries that this standard is designed for or applications perhaps?

    [00:18:06.580] – Ed Goffin

    From our perspective, where I think it’ll get deployed first is really in those specialty high-end cameras or imaging devices: thermal imaging, near-IR, those types of applications. So we’re seeing customer interest in security and defense spaces where they’re doing perimeter monitoring of a facility. That could be a military base, it could be an airport, it could even be a warehouse situation where you want multiple cameras networked together, multicasting, giving a full view of that facility, and then using automated processing to detect intruders. Or, you know, in the case of an airport, is a vehicle not where it should be, or are ground staff not where they should be? Are they in an area of danger? There’s that, and then medical imaging as well. I mentioned x-ray right at the start, but x-ray applications where, again, can you transmit more data faster from a dynamic x-ray panel? And there’s benefits then. You can reduce the radiation dose for the patient so they’re not in front of the x-ray as much, but you’re still getting that full sweep of information.

    [00:19:26.290] – Josh

    So it sounds like, yeah, the real-time nature of the application or the need to respond to the information that’s coming through, that sounds like that’s a common thread. Is that right? Yeah.

    [00:19:36.520] – Ed Goffin

    Respond to, or the more information the processing can receive, the faster it can make a more accurate decision.

    [00:19:45.720] – Josh

    Obviously, there are some big things coming down the pipeline and maybe you could talk about some of the themes that Pleora is thinking about and how we can expect that to turn into new product offerings in the next year.

    [00:19:57.080] – Ed Goffin

    Yeah, there’s a few interesting areas. One I touched on a little bit with RoCE, but one of the things that we consistently get asked for from our customers is solving that processing side, the image receive side, because as they increase the bandwidth of the imaging device, they’re overwhelming processing. RoCE is one way that can be solved. We see RoCE getting designed into the imaging devices themselves, but it will also get designed into frame grabbers. So you’ll have RoCE-enabled frame grabbers that can then help bypass processing. We’re also going to be releasing a suite of products that will use Thunderbolt. So it’ll be GigE Vision over Thunderbolt. It’ll help you bypass traditional bandwidth limitations on the processing side, but also let you expand what types of processing you can use. That could make it easier to use laptops for higher bandwidth, and we’re talking 10 gig plus types of applications. So you can use 10 gig cameras and transmit the video to your laptop. Or we see a lot more push towards single-board and embedded processing, where again, maybe you don’t have an Ethernet port on that system on module.

    [00:21:25.110] – Ed Goffin

    But you probably have a Thunderbolt port, and you can take advantage of Thunderbolt in vision applications. So that’s novel and new. The other area where I think we’re seeing a lot more is multi-sensor networking. So applications that will use visible cameras, thermal cameras, IR cameras, all in one application. I sort of touched on perimeter security, but we’re getting asked a lot more around, you know, how can I integrate these different types of cameras from these different types of vendors into one frame grabber or one board or one bridging device, so that I can build a system around these types of imaging devices?

    [00:22:10.320] – Josh

    You mentioned frame grabbers a few times. I think that’s really interesting because that feels like technology that’s been around for a long time and now you’re pairing it with this super cutting edge technology on the back end. What do you feel like explains the longevity of the beloved frame grabber?

    [00:22:30.420] – Ed Goffin

    That’s a great question, and, you know, the beloved frame grabber. I’ll go back maybe six years to our internal questioning of it. We’re fairly well known as a frame grabber company. We sell external frame grabbers for Camera Link to GigE and Camera Link to USB. And at the time we thought, okay, there’s a short lifespan left for these devices and people will find other ways to bypass the frame grabber. But they haven’t. It just continues along. There’s a lot of good reasons to use frame grabbers. They’re easy. In our case, we provide an external device, which is a little bit novel and lets you digitize legacy cameras. I think part of the reason why frame grabbers have existed for so long, and will continue to exist, is that there are a lot of legacy cameras deployed, and there’s not a lot of good reasons to replace those cameras. They still work. They still work in the application, especially in the high-value space. I go back to the very start, where I talked about ground vehicles. That’s a good example where they have analog cameras deployed, they work, they don’t want to replace them, but they want to digitize the infrastructure behind them.

    [00:23:50.060] – Ed Goffin

    So a good way to do that is, in our case, through an external frame grabber. It extends the life of installed infrastructure. And that’s where we really see continuing demand for the frame grabber portfolio we have. It’s really around digitizing existing infrastructure. And then you mentioned the evolution of the frame grabber. For us, we’re looking at integrating RoCE capability into a 10 gig frame grabber. So there you can use a legacy camera or a higher bandwidth camera, but take advantage of some of the benefits of RoCE in terms of reducing your processing overhead, doing that work within the frame grabber. Next, and it’s in our roadmap, is looking at smart frame grabbers. So can you put some of the processing that happens in the computer within the frame grabber itself and do some of that work there? So, you know, we foresaw possibly the death of the frame grabber a few years ago and realized, no, it’s a technology that continues on.

    [00:25:00.490] – Josh

    Interesting. You feel like that’s primarily because of the way that it enables this legacy architecture to sort of modernize or to integrate with more modern back-end infrastructure. Is that right?

    [00:25:12.530] – Ed Goffin

    Yeah, there’s a lot of deployed infrastructure where it just doesn’t make sense to replace the camera, but maybe you want to take advantage of new capabilities. I’ll use an example in an industrial automation setting where you have Camera Link cameras deployed and they work, but you want to take advantage of GigE Vision for cabling distance, so you can move processing to a central office. Or you want to take advantage of the networking and multicasting abilities of GigE to integrate these cameras together in new applications and new techniques. And a frame grabber lets you do that without replacing all that equipment.

    [00:25:54.790] – Josh

    Yeah, that’s important. What’s the timeline look like for when that will hit the market?

    [00:25:59.350] – Ed Goffin

    We’re going to have products available early in the calendar year. So January, February, March 2026, we’ll begin sampling those with lead customers. Those will come in a couple of different flavors. There’ll be an embedded video interface, a board that you can design directly into an imaging device to create a GigE Vision 3.0 compliant product with RoCE v2 capabilities integrated into it. And around the same time, we’ll be introducing a 10 gig frame grabber integrating RoCE capabilities. Work is still ongoing on the standard, but the last update I saw, the standard is looking at release in early 2026 as well.

    [00:26:48.660] – Josh

    Well, thank you for your time this morning. Where can our listeners learn more about Pleora?

    [00:26:53.780] – Ed Goffin

    Yeah, so our website, pleora.com, is a great place to start, obviously. It’s a good summary of the products that we offer and the types of customers that we serve. And you can see the wide gamut of everything from the x-ray imaging applications we’re designed into, whether those are medical or industrial, to traditional machine vision applications around quality inspection lines and industrial automation, and then all the way to some of the things we’re doing in security and defense, whether that’s embedded interfaces going directly into drones or these new types of more portable threat-detection systems.

    [00:27:37.080] – Josh

    Any events that we might look forward to seeing Pleora at in Q4?

    [00:27:42.120] – Ed Goffin

    Yeah, I mean, there’s a couple of Canadian security and defense events that we’ll be at, but they’re probably pretty nichey for your audience. I think the next big event we’ll be at will be the SPIE defense show, which is actually, I guess, early next year. We’ll be showing some new products there specifically for that security and defense market. And then obviously, you know, this is a year away, but Vision Stuttgart next year. It feels strange to be talking about a show that’s a year away, but lots of new demos there. By then RoCE will be out in the field and we’ll have some RoCE applications that we’re showing. We’ll be looking more at this multi-sensor networking area and how vision is combining with embedded. So I think next year, Stuttgart will be the culmination of that. But at a number of shows next year, we’ll be showing a lot of the new technology that we’re working on.

    [00:28:38.030] – Josh

    All right. Well, thank you again for your time.

    [00:28:41.240] – Ed Goffin

    Yeah, thanks a lot for having me.

    [00:28:47.000] – Josh

    For more from Ed and Pleora, find him on LinkedIn at linkedin.com/in/edgoffin. That’s E-D-G-O-F-F-I-N.

    [00:28:58.040] – Josh Eastburn, Host

    Links in the show notes. And if you work with real-time vision systems yourself, I’d love to hear from you also. Drop a comment or a DM on LinkedIn, or reach out to me at josh.eastburn@mvpromedia.com if you would like to appear on the podcast. For MV Pro Media, this is Josh Eastburn.
