MV Pro 003 – AI and the Real Cost of Vision Systems w/ Mark Williamson

An insightful conversation with Mark Williamson, former chair of the VDMA Machine Vision Board. The discussion delves into historical cost structures for machine vision, technological advancements, and the transformative impact of AI on future cost reductions. Mark maps out the landscape of the machine vision industry and helps listeners understand how to apply machine vision successfully for their market.

Use the outline below to jump to sections of the podcast transcript.

1. Introduction and Industry Background

  • Mark shares his extensive background in the industry, dating back to 1986, highlighting key milestones such as the early days of machine vision systems and his experiences with nuclear power station inspections and food safety systems.

2. The Evolution of Machine Vision Systems

  • A dive into the historical development of machine vision, including the rise of standards like Camera Link and the shift from custom cables to standard interfaces.
  • Mark discusses how improvements in processing power, camera technology, and frame grabbers have shaped the landscape of machine vision and the challenges of early systems.

3. Current Trends in Machine Vision Technology

  • Exploration of transformative trends in the industry, including the impact of AI on machine vision, particularly in smart cameras and deep learning applications.
  • Mark emphasizes the growing importance of AI, from basic pattern recognition to anomaly detection and self-learning systems that can improve over time.

4. Simulation and Digital Twins in Vision System Development

  • The role of digital twins is introduced. Mark discusses how companies such as Medabsy are now using digital twins to simulate vision systems and errors in a virtual environment, reducing costs and accelerating the design process.
  • The conversation touches on how this technology, aided by AI, is helping companies create synthetic images to identify defects, reducing the need for physical test rigs and speeding up the validation process.

5. The Real Cost of Machine Vision Systems

  • Discussion on the often-overlooked costs of implementing machine vision systems, including integration, software development, and ongoing maintenance.
  • Mark explains how a lack of in-house expertise can lead to costly mistakes and emphasizes the importance of using trained integrators to ensure systems are properly implemented.

6. The Future of Machine Vision and AI Integration

  • Mark discusses the role of generative AI in reducing costs and enhancing machine vision capabilities.
  • He touches on the future of machine vision, particularly in non-industrial applications like robotics, logistics, and automotive systems, highlighting emerging areas of growth.

7. How to Contact Mark Williamson and Josh


Episode Transcript

1. Introduction and Industry Background

Welcome to the MV Pro podcast. This month, we dive into a conversation with Mark Williamson. His article in the March issue of MV Pro magazine analyzes the cost of machine vision implementation from a historical perspective and predicts the role of AI in bringing down those costs in the future. Mark is, among other things, former Chair of the VDMA Machine Vision Board, former Director at Stemmer Imaging, and founder of Pinnacle Vision. Please enjoy our conversation. 

[00:00:27.440] – Josh Eastburn 

Okay, very good. Why don’t we start with some introductions? I know you as MV Pro’s European Editor at Large, but I understand you’ve got a long history in the industry, and I’d love to hear about that.

[00:00:39.150] – Mark Williamson 

I actually first touched what you’d probably almost call vision as I left university, which would have been 1986. I was an engineer then, doing 3D ultrasound imaging. But real machine vision started in 1989 when I joined a company that was mainly doing big rack-mount computer systems and took on a company called Imaging Technology, which is now part of Teledyne. At the time, you built your vision system by choosing your algorithm and plugging in a card for each process in the algorithm, to create what was called a pipeline processor. I got started selling that. And at the time, each card was probably $5,000 or more. So applications were limited, a lot of research. But the two big ones I did were inspecting the fuel pellets for nuclear power stations. You can’t have a human inspecting them, a bit dangerous. So that was a really good use case where people were prepared to pay hundreds of thousands for a complete vision system to inspect. And the other one, there had been a scandal of contaminants, metal and grit, getting into tinned food like baked beans.

And there was an X-ray system where they wanted to inspect every baked bean can. And again, that needed a pipeline processor to do the processing, because at the time, with just the standard AT bus of a PC, you’d probably get a maximum of 10 frames a second, and processing anything on a computer would take forever. At that time, it really was big money, big systems. And to program them, you had to know the algorithms, you had to know what you were doing, and you were programming at a very low level. Obviously, I did that. I set my own business up in 1997. Doing machine vision, we tried to take a slightly different approach rather than just being a distributor, because at the time it was mainly distributors. We wanted to add additional services, so I had a development services group with some engineers, because a lot of customers wanted to use machine vision but didn’t have the expertise. Now, that’s different from being an integrator. An integrator will go into a factory and install a vision system. But these were OEM customers, so people building, let’s say, bacon slicers, or something where they don’t want to be paying a high price per system.

They want to buy the components and integrate them into machines that they’re selling lots of, but they wanted the application and the design of the system done by an expert. So that’s where we came in. And that grew really quite well. After a year, Imaging Technology, which I mentioned earlier, got acquired by Coreco, which was a brand I was dealing with. And I, therefore, agreed to merge with my arch competitor, the late David Hearn, who had a company called Vortex Vision. My company was called Pinnacle Vision. And we created the largest UK distributor. At the time, Stemmer Imaging, which was just in Germany, Willy Stemmer’s business, had a very similar model to us, and he had this concept. The trouble with vision is there are so many different variants of the products. In one country, the revenues and the volumes are not high enough to hold much stock, because everyone wants something different. It’s not as if you’re selling apples and there’s only one type of apple; there are hundreds of them. So his concept was to have one central warehouse serving all of Europe. We were the first acquisition. So I joined the leadership team at Stemmer, running UK distribution as Managing Director, and we then acquired all the other companies in Europe. So we ended up with full coverage across Europe. And then in 2017, he sold to a venture capital company. When COVID hit, I decided I’d had enough of traveling one week in Germany, one week in the UK. So I stepped back and focused on building the UK business again, till last year, when we reached a mutual agreement for me to step away. So I’ve had a gap year to celebrate my 60th. That brings me to now.

2. The Evolution of Machine Vision Systems

Price-wise, as I talked about in the article, you’d think prices have come down, but the biggest cost is people. That’s still the way it is if you want to do a good job and do it properly.

[00:05:12.560] – Josh Eastburn 

The thing that caught my attention in your article, I think, was first that you gave a sweep of the last quarter century of the industry’s history. That always catches my attention. I feel like that’s always a sign that someone’s been in the trenches and they’re speaking from experience, which I really appreciate.

[00:05:27.540] – Mark Williamson 

I’ll give you one of the big examples. Stemmer made a lot of money doing this. Back in the day when we started, there was no standard for a camera connector. So every camera manufacturer had its own connector. Every acquisition card had its own connector. And when someone wanted to buy, the first job they would ever have to do, if they were going to buy everything separately, was to wire a cable. And these cables were like SCSI cables, 68-pin micro connectors. Absolute nightmare. We ended up doing a cable business. That cable business still goes well, but now it’s all standard Gigabit Ethernet and USB cables, still specialist ones with high flex and stuff like that. But go back to the basics of how it used to be: it was probably a two-week project just to get the system up and running.

[00:06:17.300] – Josh Eastburn 

Yeah, I started off in the industry building cables and panels, probably like a lot of people. You got one bad connection and there’s hours of troubleshooting.  

You mentioned the introduction of Camera Link.

[00:06:29.530] – Mark Williamson 

Yeah, Camera Link was really the first standard that came out in our industry, and it literally was a physical standard. What I mean by that is it standardized the connectors, the speed, and the cable length you could have at the different speeds. And that meant you could just buy an off-the-shelf cable. Every camera still had its own protocol, so you had to control it, normally via a serial line, configuring all the settings in the camera by sending commands down to it. And every camera was different, so it was starting from scratch and plugging it all in. But it was a game changer. We also had a similar thing for analog. But again, even with a standard cable, different camera manufacturers used different pins on the same cable. And then frame grabbers had to be programmed to work out which pin had what signal on it. It was still very complicated.

3. Current Trends in Machine Vision Technology

[00:07:27.040] – Josh Eastburn 

Yeah. So given that history and comparing those points in time to where we are now, right? Obviously, we’ve come a long way. What do you feel like are some of the transformative trends that you’re seeing in the industry right now? 

[00:07:39.480] – Mark Williamson 

I see vision as a pyramid, okay? When we started, even the simplest application was complicated. Then, as processing power got better and everything came up, you could do that with a cheaper product, lower cost. But the processing power also opened up more applications, more difficult applications. So I mentioned in the article that the frame grabber companies build more frame grabbers now than they did back then. And I remember people saying, oh, the frame grabber’s dead, now we’re going to use Ethernet, USB, and FireWire was the other one. They would kill it, but they haven’t, because people still wanted to go faster. So the new cameras started going faster. Oh, Ethernet’s not fast enough? Okay, we’ll go to Camera Link HS or CoaXPress. New applications that weren’t possible before keep coming up. And at the lower end, you then got, again, the statement that the smart camera was going to kill the PC-based vision system. And it didn’t, because, yes, it brought the price down for the entry level, and each smart camera gets more and more capable. And now we’re starting to see them coming out with AI processors.

[00:09:01.180] – Mark Williamson 

Until recently, to do an AI application, you needed a PC with two or three very powerful GPUs in it. That will carry on happening. But at the lowest point now, I’m not sure we’re going to see that much cost erosion for a modular vision system. Say $1,000; perhaps you can get them a bit cheaper, but very limited. You then get higher resolutions, higher frame rates, higher processing engines, and the price goes up. The same with standard off-the-shelf USB cameras. One in a housing is probably a couple of hundred to $300. Will that get much cheaper? I think not, because of the physical costs of making it. You might get board-level cameras that get cheaper and all of that, which means they can get integrated more. But then when you come to do a proper application, the cost is still: okay, I need to find the threshold points of the defects.

[00:10:00.000] – Mark Williamson 

So many people come along with one defect and say, can you inspect this defect? And the integrator that’s not trained will image it: yeah, I can find that, that’s good. Then they put the system in, and then it rejects something, and the guy says, well, actually, it’s rejected that. Well, that’s not what you showed me. It was a variation on that. And in some applications, these errors only happen very occasionally. So having a data set is really important, and that’s what takes the time. Two things have happened there. There are standards that have come out. VDMA have a standard that talks about how to specify a vision system, how to create your sample set, and how to test the system. So the customer and the integrator know what a pass is: it tests these items, these are the ones that are marginal, these are the ones that pass, these are the ones that fail, and it’s done. If someone then comes along with a different sample, you have to reprogram it or change it, because if you haven’t seen it, you can’t program it. So that’s the first thing.
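
To make that concrete, a sample-based acceptance test boils down to an agreed set of images with known verdicts that the finished system must reproduce. Below is a minimal sketch of that idea, not the VDMA standard itself; the `Sample` type and the `inspect` stub are hypothetical stand-ins for the real system.

```python
# A minimal sample-based acceptance test: every sample carries a verdict
# agreed between customer and integrator before sign-off.
from dataclasses import dataclass

@dataclass
class Sample:
    image_path: str
    agreed_verdict: str  # "pass", "fail", or "marginal"

def inspect(image_path: str) -> str:
    """Stand-in for the real vision system; returns "pass" or "fail"."""
    raise NotImplementedError

def run_acceptance(samples: list[Sample]) -> bool:
    mismatches = []
    for s in samples:
        result = inspect(s.image_path)
        # Marginal samples are recorded but, by prior agreement, don't block sign-off.
        if s.agreed_verdict != "marginal" and result != s.agreed_verdict:
            mismatches.append((s.image_path, s.agreed_verdict, result))
    for path, expected, got in mismatches:
        print(f"{path}: expected {expected}, got {got}")
    return not mismatches  # True only if every agreed verdict is reproduced
```

The point of the exercise is exactly what Mark describes: once the agreed set passes, anything outside it is a change request, not a failure.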

4. Simulation and Digital Twins in Vision System Development

So, where I’m seeing cost reduction coming through now is in how you can make that quicker. At the moment, you basically have to build a test rig, have all the samples, and prove the system will work.

What if you did that in a virtual way? There are companies and research groups doing this now, Fraunhofer was doing it, and Medabsy, the startup, where they’re doing two things. One is creating digital twins of the cameras and optics, and they’re teaming up with companies. So literally, you can build your vision system in a digital environment. This has been going on elsewhere; it’s how aircraft and missiles are designed. Digital twins and digital simulation have been around for a long time. And especially in robotics, when designing a robotic infrastructure, you don’t get your car there, put a robot next to it, and see if it can do this. You simulate it. They have simulation environments. So we’re beginning to see that happen with machine vision. I’m not sure it will be used in the really simple applications. People will still stick a camera on and say, does it work or doesn’t it? If it’s barcode reading or blob counting, that’s probably fine.

[00:12:31.390] – Mark Williamson 

But where it’s more complex, Medabsy reckon on about a 30% reduction in time, and also not having to purchase the physical equipment, to prove that a system will work. And what they’re also doing is creating simulated errors, and that’s where AI is coming in: to be able to say, well, actually, that’s the defect, let’s create 20 versions of that defect getting smaller, and all the different ways, creating synthetic images. Then you put them through the algorithm and you can see where the boundary is between what passes and what fails. That is the trend in the way you build a vision system. That’s not talking about the latest and greatest new technology that vision systems are going to address, but it’s very much about getting the cost down. Now, I’ve had a lot of discussions on LinkedIn with various people about whether ChatGPT-type technology will help. And I actually think the problem at the moment is that people don’t really understand what ChatGPT is trying to be. So someone posted, oh, I asked ChatGPT to choose the right lens for a camera, and it got it wrong. So my job’s safe.

But if you talk to any experts, ChatGPT isn’t there to replace the expert. It’s to make the expert more efficient. We have a shortage of very clever, knowledgeable engineers in the machine vision market. It takes 20 years to get 20 years’ experience. I actually think there is still scope. There was a roundtable at Vision last year where a number of people were asked about this, and lots of topics came up around the intellectual property that remains inside companies. I think that’s where we’re going to see ChatGPT-type technology happen: companies training these tools on their own proprietary data to let their support agents be quicker, or to offer self-help in troubleshooting a product. Stemmer Imaging used to have, in fact, before Stemmer Imaging, our company at the time, FirstSight Vision, we wrote what we called the Machine Vision Handbook. It was hundreds of pages covering all the theory of machine vision. And we gave that to customers with the idea that they would come back. Now, that was proprietary information, yes, we were giving it away in a book, but what if that got integrated into an advisor, a smart advisor?

It’s not going to replace the expert. I never really use ChatGPT technologies to give me the final answer. It gives me quick hints of, okay, that’s probably it, right, I need to investigate it. Or you do something simple: how can I précis all this information into half a page or a page? That’s where I think it’s going to be useful. So those are the two things I think we will see coming along. It’s going to take time, but that’s what’s going to happen, I believe.
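
Returning to the synthetic-defect idea above: the sweep Mark describes, 20 versions of a defect getting smaller, can be pictured as compositing one known defect onto a good reference image at shrinking scales and running each variant through the inspection routine until it stops being caught. The sketch below is purely illustrative, not Medabsy’s or Fraunhofer’s tooling; the images and the `inspect_ok` routine are made up.

```python
# Sweep a synthetic defect from large to small to find the scale at which an
# inspection routine stops detecting it. Everything here is synthetic.
import numpy as np
import cv2

def composite_defect(good: np.ndarray, defect: np.ndarray, scale: float) -> np.ndarray:
    h, w = defect.shape
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    patch = cv2.resize(defect, (nw, nh))          # shrink the defect patch
    out = good.copy()
    y, x = (good.shape[0] - nh) // 2, (good.shape[1] - nw) // 2
    out[y:y + nh, x:x + nw] = patch               # paste it mid-image
    return out

def inspect_ok(img: np.ndarray) -> bool:
    """Stand-in inspection: 'defect present' means 5+ very dark pixels."""
    return bool((img < 40).sum() < 5)             # True = image passes

good = np.full((200, 200), 200, np.uint8)         # synthetic "good" part
defect = np.zeros((40, 40), np.uint8)             # synthetic dark defect

for scale in np.linspace(1.0, 0.02, 20):          # 20 versions, getting smaller
    if inspect_ok(composite_defect(good, defect, scale)):
        print(f"defect slips through below scale {scale:.2f}")
        break
```

In a real digital twin, the good image would come from a simulated camera-and-optics model rather than a flat array, but the boundary-finding loop is the same idea.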

5. The Real Cost of Machine Vision Systems

[00:15:49.150] – Josh Eastburn 

So in your article, you say the real cost of vision systems isn’t in the hardware; it’s in the integration, software, and ongoing maintenance. Who do you think really needs to understand that? And what are they doing now that they’re really going to regret five years from now?

[00:16:05.880] – Mark Williamson 

I remember a customer coming to me: oh, does your vision system do X, Y, Z? And the answer is about what it CAN do. The trouble is, the bigger companies that understand, that have proper engineering departments, will either have a vision group or a friendly integrator, and they understand things cost money. But then you get the people that have a problem and want a quick fix. They don’t know anything about vision; they could buy a smart camera for a thousand. They don’t understand the difference between a £1,000, a £5,000, and a £50,000 system. And this happened so many times when we sold kit to a company that didn’t have the in-house expertise. One of the big learnings I had, you always learn in life, was: if someone hasn’t done it before, really encourage them to go to an integrator. We had people that’d get the vision kit, and they’ve got their engineer who thinks he’s really bright. The managing director thinks, our IT guy is great, he’s going to do this because he owns IT, but he doesn’t know anything about machine vision. They’d do the system, get it running, and it’s working, and that’s good. And then a week later, they’ve got skylights in their factory, and one day the sun comes out, because they’re in England, so we get a lot of clouds, and it shines down and floods the scene with light. A human eye can adjust across massive lighting ranges; cameras can’t. You have to change the aperture and all this. Suddenly, it stops working. Oh, your vision system stopped working; your vision system is not very good. Vision got a bad name. How many times have I walked through factories and seen vision systems turned off because they hadn’t been integrated correctly? They didn’t have a shroud over them, or the right filters on the lenses to filter out sunlight, and all these different things. That’s still happening today. Obviously, the smarts and how to set things up have become a lot better. The algorithms have also become a lot better. If I go right back to the early days, to count blobs you used a fixed threshold, so if the light changed, it broke. Then they came up with adaptive thresholding, which looks at contrast changes.
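
For readers who haven’t met the distinction: a fixed threshold cuts at one global grey level, so a change in ambient light can push the whole image across the cut, while an adaptive threshold compares each pixel against its local neighbourhood, so a global brightness shift largely cancels out. A minimal OpenCV sketch of the two (the input file name is assumed):

```python
import cv2

img = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)  # assumed input image

# Fixed threshold: one global cut at grey level 128. If the scene gets
# brighter overall, blobs can drift across this value and vanish.
_, fixed = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

# Adaptive threshold: each pixel is compared with the mean of its 51x51
# neighbourhood minus an offset, so local contrast at edges still separates
# blobs even when overall brightness changes.
adaptive = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 51, 10)

# Count the connected blobs each method finds.
for name, bw in (("fixed", fixed), ("adaptive", adaptive)):
    n_labels, _ = cv2.connectedComponents(bw)
    print(name, "blobs:", n_labels - 1)           # minus the background label
```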

[00:18:34.490] – Mark Williamson 

With adaptive thresholding, if the scene got brighter, there would still be a contrast change at an edge, so you could still find it. That was the next thing. Now you have AI looking for the pattern, so it can find far less obvious edges. The algorithms are all getting better, trying to compensate for the lack of knowledge people have. Then the next thing you get is, okay, we talk them into going to an integrator. And an integrator isn’t just going to add a margin on a thousand-dollar camera; even if they’re cheap, they’re charging for a day here, two days there. They’re going to say, give me all the samples, the good and the bad, and just the work you’ve got to do to validate it is probably three or four days. So straight away, they’re coming in with a price of two weeks’ work on top of that 1,000 or 2,000 of hardware. Suddenly you’re talking 20,000, not 1,000. Now, I’m not saying that’s the case every time. There are very simple applications, and barcode reading is now getting to the stage where that doesn’t matter: a fixed industrial scanner for barcodes.

If you look at these barcode scanners now, they have their own built-in lights. They have autofocus lenses. When they grab, they’re not just grabbing one image; they’re actually grabbing hundreds of images in succession, changing the lens focus, changing the lighting, even different angles of lighting, trying to be a super camera, because it doesn’t know exactly what it’s looking at. And then all it needs to know is, can I read it? Because it knows it’s looking for a barcode. And that works. So that’s the other thing that’s going to happen: more and more applications are going to be de-skilled. But as soon as you’re actually trying to do proper inspection, I find that harder. Again, people talk about doing that with AI, where you feed it loads of samples. One of the things that was really interesting with deep learning, which has been around for a while in machine vision, is that once you’re outside the area it was trained on, it will just say this is a defect, or whatever. You can retrain it, but that takes time; you’ve got to retrain the whole deep learning model.

The next thing that came along was anomaly detection. 

[00:21:00.020] – Mark Williamson 

Anomaly detection is where you only show it good things, and it will identify when it starts to see things that aren’t good. That was a game changer. Now we’re talking about self-healing, self-learning, where the anomaly detector starts to build on what it finds: well, actually, I’m finding these anomalies, so I’m going to train on them and start to group them. And okay, there still needs to be an operator. But imagine the deep learning system beginning to say, well, I’ve looked at everything, and these are the areas where things are drifting and going out. That’s really useful. So I think that area will help. It’s coming along; it’s on the horizon. People like MVTec have got technology that does that, but you’ve still got to program it at a library level to implement it. I haven’t yet seen a super off-the-shelf self-learning inspection system that works like a human would. Imagine a human sitting there with everything going by. Talk to the expert inspector in a quality control department: over 10 or 20 years, they get to know, okay, I know what I’m looking for, I know what’s a defect. But their attention span is only five minutes, so if they’re doing an eight-hour shift, I’m not quite sure how good they will be. A system like that would be a lot better.
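
The good-samples-only idea can be boiled down to a few lines: model the feature distribution of known-good images, then flag anything that sits unusually far from it. The sketch below is a generic illustration with crude patch-mean features and synthetic images, not MVTec’s library or any shipping product:

```python
# Anomaly detection trained on good samples only: model their feature
# distribution, then flag images that sit far outside it.
import numpy as np

def features(img: np.ndarray, grid: int = 8) -> np.ndarray:
    """Crude features: the mean grey level of each cell in a grid x grid tiling."""
    h, w = img.shape
    return np.array([img[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

rng = np.random.default_rng(0)
good = [rng.normal(128, 5, (64, 64)) for _ in range(200)]   # stand-in good images

X = np.stack([features(g) for g in good])
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1]))

def anomaly_score(img: np.ndarray) -> float:
    d = features(img) - mu
    return float(np.sqrt(d @ cov_inv @ d))   # Mahalanobis distance from "good"

# Threshold set so ~1% of known-good images would be (wrongly) flagged.
threshold = np.quantile([anomaly_score(g) for g in good], 0.99)

bad = rng.normal(128, 5, (64, 64))
bad[20:30, 20:30] = 0                        # inject a defect it has never seen
print(anomaly_score(bad) > threshold)        # True: flagged without defect training
```

The self-learning step Mark describes would then cluster the flagged images so that recurring anomaly types can be named and tracked over time.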

6. The Future of Machine Vision and AI Integration

[00:22:36.400] – Josh Eastburn 

And it sounds like you’re speaking to the person who’s looking at the hardware price tag and saying, oh, vision is getting so much more affordable, right? Let’s go ahead and implement all of this. We’re going to be so happy. We’re going to see this immediate return. Is that a correct conclusion?

[00:22:55.340] – Mark Williamson 

I’ve had people come to me saying, oh, we’ve got to add machine vision to our production line. So I say, well, what’s the problem? Have you got a defect problem? Have you got something that goes wrong and is costing you money? Because if you haven’t got anything that goes wrong, it isn’t costing you money. You can say, yes, we’re 100% inspected, but that’s irrelevant. I remember one system, installed only to last three or four years, at a company that was printing postage stamps in the UK. It’s a printing machine, and then they had to cut the perforations and everything. But their machine was so old, 30 or 40 years old, that all the tolerances had gone out, so the cut varied. The quality could be such that you’re actually cutting 10% of the picture off, with a load of white next to it. We introduced a vision system to monitor the drift, because we worked out that it was a drift. It started just with a high-speed camera looking at it, seeing what was happening, and then we could actually put some feedback into it. So that was a real research project.

Vision has to solve a problem, and a lot of people try to put vision in because it’s trendy. The other area is vision in automation, doing the feedback and being part of the process, and that’s where I think it’s exploding a lot more now. The inspection stuff, if you engineer and manage and produce well, is less of an issue unless you are making something that costs a lot of money. Battery inspection for cars is big business. There’s a lot of work in Europe, but unfortunately most battery inspection technology has been designed in China and is coming from China at the moment. If dust gets onto a part at Process A, it will fail at Process C. So you’re better off rejecting it at Process A and not doing Processes B, C, and D, because they’re expensive. So there’s a big return. Like I talked about with fuel pellet inspection in the nuclear industry, there’s a real reason for it. We’re also seeing that in robotics. Again, robotics doesn’t need to have 3D, it doesn’t need to have inspection. If the product is always in the right place and it’s always put in the same location, it doesn’t need vision.

[00:25:38.960] – Mark Williamson 

But if it’s picking loads of random bits, then you need a camera to work out, oh, that’s the bit, I have to pick it up like this, and then it’s more important. So I actually see that one of the bigger growth areas at the higher end of machine vision is in that. And logistics is another area that’s going crazy. Originally, what was logistics? Barcode reading, fixed industrial scanners. Then they started building these tunnels where any box, in any orientation, goes through the tunnel, and if there’s a barcode on it, it will find it, read it, and off it goes. That technology has been used in airports for a long time for routing all the bags to the right location and that stuff. Logistics is a big growing area, certainly after COVID. And it’s not just barcode reading now. It’s: how big are the boxes? How many boxes can I put on this lorry? Actually doing something with that information, rather than just inspecting, is, I think, the big opening for the future of where vision is going, and even more so with embedded vision. We’re seeing embedded vision going into robot lawn mowers and robot vacuum cleaners in the house.

It’s all using vision. The machine vision industry hasn’t touched what Tesla have done. There are eight cameras in their cars, everything made OEM. But I’m not aware of anyone in our machine vision industry that’s actually succeeded in doing that, either in the car or in the infrastructure: are you driving in a bus lane when you shouldn’t be? And that’s where I see the biggest growth. If you look at the VDMA statistics over the last few years, while industrial machine vision, the factory automation stuff, is growing, the non-industrial side is growing quicker.

[00:27:38.610] – Josh Eastburn 

So in addition to what you said earlier about understanding where the cost impact of your system integration is going to come from, it sounds like a fundamental is also making sure that you are solving an appropriate, interesting problem.

[00:27:53.710] – Mark Williamson 

Yes. I think with vision, the big thing is you have to understand the whole marketplace. You’ve got OEMs designing vision into a mass-production product; take the Tesla self-driving cameras and all that stuff I mentioned. There, the price of the component is absolutely essential. The development cost is less essential because it’s being amortized over tens or hundreds of thousands of products. You then get the industrial market. Where there is a common problem in manufacturing, someone will have designed a machine for it. That’s an OEM in my view of machine vision. So that could be: I want to have my eight slices of bacon at 400 grams in a pack, and a vision system can change the thickness of the slicing to make sure we’re not giving away too much bacon before it gets sealed. Now, those companies, when they’re in production, might only be building 10 or 100 machines a year. They’ve got to be very cost-effective, and the vision expertise is unlikely to be in their company. So that’s where someone like Stemmer, or whoever, would actually do the design with them and work with them on integrating a vision system into their machine.

[00:29:27.110] – Mark Williamson 

You then get what I would call the factory automation market. Every factory is different, every product is different: I have a problem. That’s where the integrators come in. And that’s where I find the cost of implementation is a lot higher, mainly because it has to be designed and validated once, even if you then build two or three or four production lines. Sometimes integrators move on from being an integrator. They see an opportunity because they’ve solved a problem for a particular customer, and they end up creating a product. I remember a company that did it with plastic milk bottles, checking there are no black specks in the reprocessed plastic in the bottles. That was a big market: they had a problem with one customer and then created a product. People get it wrong because they don’t understand vision. The exception is someone like Johnson & Johnson or one of these big manufacturing companies, where they have an engineering team looking at their whole processes. They can afford to have a vision team advising and saying, oh, we can solve this problem, solve that problem. So they tend to integrate themselves.

[00:30:44.810] – Mark Williamson 

The problem as a supplier is that your customers range from the guy that knows nothing and thinks he can buy a vision system and solve his problem. He’s going to be a nightmare. Do you want to give him free support? No, you want an integrator to charge him for support. You then get the big companies that buy vision; you want to support them, because they’re going to buy quite a lot of stuff. You then get the OEMs, where you get involved actually doing the engineering with them, which is even better. And quite often, when you say to them, would you want us to do that, rather than have your engineers do it and not understand it, that works really, really well. And then you get the mass-volume players, who will have hundreds of people working on their vision system because it’s so fundamental to the future of the business. Semicon has always been like that. There are big companies that make wafer inspection systems, and now we’re getting battery as the next one, alongside silicon wafer inspection and PCB inspection. People build machines that do that inspection.

[00:31:56.190] – Mark Williamson 

Everyone that’s building a PCB will have a PCB inspection machine, an AOI machine, now, because vision is key to that. That’s core to what they’re doing. There are no smart cameras in those, maybe just to read the barcode; it’s all high-end. I remember, with one of our customers on the silicon wafer side, the process geometry kept getting smaller and smaller. Every time they went to a new node, they had to throw away all their old systems, because to get the economy of scale, if the features are half the size, the vision system has to run twice as fast at the higher resolution. When they change processes, the inspection has still got to take the same time to get the return on it. So what happens is, on their cycles, every seven years, every 10 years, I don’t know what it was, there’s the next node size, and they would re-buy it all. And these big semicon inspection companies were going to the camera companies saying, right, we need to go to 50 megapixels, we need to go to 100 megapixels, we need to go to this, and it needs to run at the same frame rate.

And that’s what’s driving the biggest markets. That’s what’s driving this high-end stuff and why they’re still selling as many frame grabbers as they were 20 years ago. 

[00:33:23.760] – Josh Eastburn 

Well, I think that’s a perfect stopping point. You’ve brought us back full circle to where we started. Tell us more about what you’re doing now. You’ve reached a milestone in your career and you’re on to a new adventure.

[00:33:34.270] – Mark Williamson 

Yeah. I’m in the process of setting up a consulting business with some people I know. It’s not announced yet, but it’s getting there. I’ve got a lot of expertise; I’ve been in this industry for too long. I’ve done product strategy, market strategy, channel strategy. I’ve been a channel as well, so I understand what’s needed. I’ve created a menu of services. I’ve got a project where I’m going to be a non-exec for a relatively small company that a venture capital firm is putting money into. Very bright people with a great idea driving forward, but they need a good financial understanding of how to run a business: how do you go to market? How do you position the product? Is the product ready? So, mentoring the people in these companies. And another one I’m working with has products that need a revamp. The founders have left after being acquired by another company, and the product needs to be reinvigorated, really, and realigned. A lot of my time at Stemmer was spent looking at products; my job was Head of Product Strategy. When everyone was coming out with new products, I was looking at them and trying to work out…

For a company the size of Stemmer, we couldn’t take on lots of little niche companies, so it was about picking the ones we thought were going to trend, bringing them in, and developing them. And we were very successful with that. Now it’s really looking at it from the other point of view for these customers.

That’s really what I’m looking to do. I’m open to working with anyone that wants to reach out. The good thing for me is I sold the business, so I’m comfortable, but I’m passionate about this industry. If I can do something exciting with companies and help them set up channels in Europe, maybe, or whatever they need, I have a track record of knowing what works and what doesn’t. So many companies have great ideas but can’t execute: the product’s not reliable, or their manufacturing is not right, or they just hit the wrong price point. And the other one is timing. That’s what we’re seeing a lot with AI and all of this stuff going on at the moment. There are a lot of people jumping on the bandwagon and coming up with great ideas, but I think it just takes time. You’ve got to get that timing right.

7. How to Contact Mark Williamson and Josh

[00:35:54.830] – Josh Eastburn 

Where can people find you online? Where should they go if they want to talk more? 

[00:35:57.740] – Mark Williamson 

The best way is to reach out to me on LinkedIn. My number and my email are in my LinkedIn header, or just message me there. That would be great.

[00:36:06.790] – Josh Eastburn 

Great. Well, we look forward to hearing more. 

[00:36:09.290] – Mark Williamson 

Yeah. Thanks, Josh. Fantastic. Really enjoyed it.

[00:36:12.230] – Josh Eastburn 

Yeah. Thank you as well. That was Mark Williamson, Editor at Large for MV Pro Europe. To read his article, Vision Systems: The Real Cost, the Impact of Generative AI, visit mvpromedia.com in North America or mvproeurope.com in Europe, or check out the March issue of MV Pro magazine.

We are so lucky to have Mark as part of our team, but we want to hear from you, too. If you have something interesting to share with the machine vision industry, reach out to editor@mvpromedia.com or drop us a line on LinkedIn. Until next time, I’m Josh Eastburn for MV Pro Media. Take care.
