A recurring theme at Image Sensors Europe was that the next gains in machine vision may not come from the sensor alone, but from how intelligence is distributed across the system, from pixel to processor.

At Image Sensors Europe 2026, Marie-Charlotte Leclerc from STMicroelectronics focused on what this shift looks like in practice, particularly as vision systems move into more embedded and power-constrained environments.

Marie-Charlotte Leclerc, STMicroelectronics, speaking at Image Sensors Europe 2026

Applications like humanoid robotics, delivery drones, biometric payment, and smart building access are not just expanding the market: they are changing the requirements entirely. Power efficiency, integration, and system-level optimisation are becoming just as critical as image quality.

“The challenge is no longer just capturing the image, but managing how that data is processed across the system.”

As she framed it, this is putting pressure on traditional system design. The model of a sensor capturing raw data and a processor handling everything downstream is starting to show its limits.

Instead, her focus was on the synergy between the image sensor and the SoC (system-on-chip).

More of the processing – from colour separation to image conditioning – can happen directly on the sensor. That reduces both the amount of data being transferred and the workload placed on the processor.

“If you can handle more of the complexity at the point of capture, you simplify everything that follows.”

This becomes particularly important in areas like RGB-IR imaging. Combining visible and near-infrared data enables more robust performance across different lighting conditions, but also introduces complexity.

ST’s approach integrates RGB-IR separation, bayerisation, IR depollution, and upscaling directly on-chip. The result is cleaner, more usable data before it even reaches the processor, especially in HDR outdoor environments.
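ST did not detail how these blocks are implemented in silicon, so the snippet below is only a rough conceptual sketch of what separation, depollution, and re-bayerisation mean: it splits a toy 2x2 RGB-IR mosaic into colour and IR planes, subtracts an assumed IR contribution from each colour channel, and rebuilds a standard Bayer pattern for a downstream ISP. The pattern, coefficients, and resolution are all placeholder assumptions, and upscaling is omitted.

```python
import numpy as np

# Hypothetical 2x2 RGB-IR mosaic: R at (0,0), G at (0,1), B at (1,0), IR at (1,1).
# Real RGB-IR sensors often use 4x4 patterns; this is a minimal illustration only.

def split_rgb_ir(raw):
    """Separate the raw mosaic into sparse colour planes and an IR plane."""
    r  = raw[0::2, 0::2]
    g  = raw[0::2, 1::2]
    b  = raw[1::2, 0::2]
    ir = raw[1::2, 1::2]
    return r, g, b, ir

def depollute(channel, ir, k):
    """Subtract the estimated IR contamination from a colour channel.
    The coefficient k is sensor-specific; the values used below are placeholders."""
    return np.clip(channel - k * ir, 0, None)

def rebayerise(r, g, b):
    """Reassemble a standard RGGB Bayer mosaic that a conventional ISP expects."""
    h, w = r.shape
    bayer = np.zeros((2 * h, 2 * w), dtype=r.dtype)
    bayer[0::2, 0::2] = r
    bayer[0::2, 1::2] = g
    bayer[1::2, 0::2] = g   # second green site reuses the single green plane here
    bayer[1::2, 1::2] = b
    return bayer

raw = np.random.randint(0, 1024, (8, 8)).astype(np.float32)  # toy 10-bit frame
r, g, b, ir = split_rgb_ir(raw)
r = depollute(r, ir, 0.6)
g = depollute(g, ir, 0.5)
b = depollute(b, ir, 0.7)
bayer = rebayerise(r, g, b)
print(bayer.shape, ir.shape)  # colour path for the ISP, IR plane for NIR processing
```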

Another example is dual shutter pixel technology, combining global and rolling shutter operation within the same sensor. Rather than locking into one compromise, systems can adapt depending on the scene.

“Flexibility at the sensor level allows systems to respond to very different imaging conditions without redesigning the architecture.”
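She did not go into how a system selects between the two modes, but the flexibility can be pictured as a simple per-capture policy: use the global shutter path when fast motion or pulsed illumination would cause rolling-shutter artefacts, and the rolling shutter path otherwise. The threshold and interface below are hypothetical.

```python
from enum import Enum

class Shutter(Enum):
    GLOBAL = "global"    # all rows exposed simultaneously; no motion skew
    ROLLING = "rolling"  # rows read out sequentially; different noise/resolution trade-off

def choose_shutter(scene_motion_px_per_frame: float,
                   pulsed_illumination: bool,
                   motion_threshold: float = 4.0) -> Shutter:
    """Pick a shutter mode per capture. The threshold is an arbitrary placeholder;
    a real system would tune it to the sensor's readout time and optics."""
    if pulsed_illumination or scene_motion_px_per_frame > motion_threshold:
        return Shutter.GLOBAL
    return Shutter.ROLLING

print(choose_shutter(scene_motion_px_per_frame=12.0, pulsed_illumination=False))
# Shutter.GLOBAL: fast motion would skew under a rolling readout
```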

What ties these elements together is the system-level impact.

Offloading these functions into the sensor reduces the data throughput to the SoC, along with the need for heavy downstream processing. This makes it possible to use lower-power processors, even microcontrollers, while maintaining performance.
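Back-of-envelope numbers make the bandwidth argument concrete. The figures below are illustrative assumptions rather than ST specifications: a sensor streaming full raw frames to the SoC versus one sending a conditioned, downscaled region of interest after on-chip processing.

```python
def link_bandwidth_mbps(width, height, bits_per_pixel, fps):
    """Approximate payload bandwidth in Mbit/s (ignores protocol overhead)."""
    return width * height * bits_per_pixel * fps / 1e6

# Illustrative, assumed figures only.
raw_full  = link_bandwidth_mbps(1920, 1080, 10, 60)  # full raw stream to the SoC
on_sensor = link_bandwidth_mbps(640, 480, 8, 15)     # conditioned, downscaled output

print(f"raw stream:      {raw_full:8.1f} Mbit/s")
print(f"after on-sensor: {on_sensor:8.1f} Mbit/s  ({raw_full / on_sensor:.0f}x less)")
```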

In the kinds of applications she highlighted (embedded, power-constrained, often operating in uncontrolled environments), that shift is becoming essential.

The takeaway is not simply that sensors are improving, but that their role is evolving.

Machine vision systems are being rethought from the ground up, and increasingly, it is the interaction between sensor and processor that defines what is possible.
