Artificial intelligence is getting more powerful and more power-hungry. For machine vision, where algorithms must process vast streams of visual data in real time, energy demands are reaching unsustainable levels. Enter neuromorphic computing: a brain-inspired approach that could dramatically reduce the energy cost of visual intelligence at the edge.
Researchers from the Australian Institute for Machine Learning and Intel Labs recently showed that spiking neural networks (SNNs) running on Intel’s Loihi 2 neuromorphic processor can perform one of computer vision’s toughest tasks, robust geometric model fitting, while consuming only 15 percent of the energy used by conventional CPU implementations.
Why Robust Fitting Matters
From 3D reconstruction to SLAM, from pose estimation to feature matching, robust geometric model fitting underpins many of the machine vision systems deployed today. Classic algorithms such as RANSAC have been refined for decades, but they remain computationally demanding, a challenge for mobile robots, drones and embedded cameras where power is limited.
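To see where that computational demand comes from, here is a minimal, conventional RANSAC sketch for fitting a 2D line with NumPy. The repeated hypothesize-and-verify loop is the expensive part; the iteration count, threshold and model choice here are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def ransac_line(points, n_iters=1000, inlier_thresh=0.02, rng=None):
    """Toy RANSAC for fitting a 2D line to noisy points (illustrative only).

    Each iteration samples a minimal set (2 points), hypothesizes a line,
    and scores it by counting inliers. Repeating this hypothesize-and-verify
    cycle thousands of times is what makes classic robust fitting costly.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        # Line through p1, p2 in implicit form a*x + b*y + c = 0 (normalized).
        a, b = p2[1] - p1[1], p1[0] - p2[0]
        norm = np.hypot(a, b)
        if norm < 1e-12:
            continue  # degenerate sample: both points coincide
        a, b = a / norm, b / norm
        c = -(a * p1[0] + b * p1[1])
        # Point-to-line distances; inliers fall within the threshold band.
        dists = np.abs(points @ np.array([a, b]) + c)
        inliers = int((dists < inlier_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b, c)
    return best_model, best_inliers
```

The total cost scales with the number of hypotheses times the verification cost per hypothesis, which is exactly the budget that power-limited platforms struggle to afford.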
By reframing robust fitting as an event-driven process, the team translated its core steps into spike-based computations optimized for Loihi 2. The result: accuracy comparable to CPU-based pipelines at a fraction of the energy cost.
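The paper's exact mapping onto Loihi 2 is not reproduced here, but as a rough intuition for what "spike-based" hypothesis scoring could look like, the toy sketch below encodes each data point's residual as a single binary event and accumulates those events to rank a hypothesis. The function name, encoding and threshold are assumptions for illustration, not the authors' method.

```python
import numpy as np

def spike_score_hypothesis(residuals, thresh=0.02):
    """Illustrative only: score a model hypothesis with a spike-like encoding.

    Each data point 'fires' a single binary spike if its residual falls below
    the threshold; a counting neuron accumulates the spikes. Sparse binary
    events like these, rather than dense floating-point arithmetic, are what
    keep the energy per hypothesis low on neuromorphic hardware.
    """
    spikes = (np.abs(residuals) < thresh).astype(np.int8)  # one event per inlier
    return int(spikes.sum())  # membrane-potential-style accumulation
```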
The researchers also stress that energy efficiency in robust fitting has historically received little attention. Their work therefore represents not just an incremental improvement but an entirely new way of approaching a core vision problem.
“One aspect of robust fitting that has received little attention is energy efficiency. This performance metric has become critical as high energy consumption is a growing concern for AI adoption.”
Why Neuromorphic Matters for Vision
Unlike GPUs and CPUs, neuromorphic processors such as Loihi operate on brief “spikes” of activity, mimicking the brain’s efficiency. This architecture unlocks several advantages for vision AI (a toy spiking-neuron sketch follows the list):
- Ultra-low power consumption, enabling battery-constrained platforms such as UAVs and mobile robots
- Event-driven processing, ideal for pairing with event cameras, which also output sparse, spike-like data
- Massive parallelism, scalable to increasingly complex vision tasks without linear growth in energy cost
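For readers new to spiking models, here is a minimal discrete-time leaky integrate-and-fire (LIF) neuron, the kind of unit Loihi-class chips implement in hardware. The decay and threshold values are arbitrary, chosen only for illustration.

```python
def lif_neuron(input_current, decay=0.9, threshold=1.0):
    """Minimal discrete-time leaky integrate-and-fire neuron (illustrative).

    The membrane potential leaks each step, integrates incoming current,
    and emits a binary spike (then resets) when it crosses the threshold.
    Work happens only when spikes occur, which is the root of neuromorphic
    hardware's energy efficiency.
    """
    v, spikes = 0.0, []
    for i_t in input_current:
        v = decay * v + i_t          # leak, then integrate the input
        fired = v >= threshold
        spikes.append(int(fired))
        if fired:
            v = 0.0                  # reset after spiking
    return spikes

# Example: a constant drive produces sparse, periodic spikes.
print(lif_neuron([0.3] * 20))
```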
The Loihi 2 platform brings a higher level of programmability than earlier neuromorphic designs, including more flexible neuron models and richer spike dynamics. Even so, the team had to design carefully around the processor’s limited numerical precision and instruction set; that the approach still works demonstrates these chips can already support sophisticated vision algorithms today.
The Road Ahead
Neuromorphic chips are still emerging and carry limitations in precision and programmability, but this result signals their real-world viability for machine vision. As the hardware matures, Loihi-like processors could power embedded inspection systems, autonomous navigation and industrial robots where both performance and efficiency are essential.
The bigger story is that energy efficiency is no longer a “nice to have”; it is becoming a strategic differentiator for vision technologies. Neuromorphic computing may not replace GPUs tomorrow, but it is laying the foundation for a more sustainable AI future.
This research also opens the door to closer integration of neuromorphic processors with event cameras and other sensor technologies. By aligning brain-inspired computing with brain-inspired sensing, the vision industry could take a major step toward low-power systems that are both fast and resilient in real-world environments.
Find out more: Event-driven Robust Fitting on Neuromorphic Hardware by Tam Ngoc-Bang Nguyen, Anh-Dzung Doan, Zhipeng Cai, and Tat-Jun Chin. Available via arXiv.