Neuromorphic Vision Advances

The field of neuromorphic vision is moving towards leveraging the unique properties of event-based cameras, which output asynchronous, sparse streams of per-pixel brightness changes rather than full frames, to improve applications such as optical flow estimation, spectral sensing, and depth perception. Researchers are actively exploring algorithms and hardware designs that process this sparse data format effectively, with potential gains in robotics, autonomous vehicles, and surveillance. Notably, bio-inspired approaches are being investigated to achieve colour vision with monochrome event cameras, while other work develops pre-training frameworks for joint RGB-Event perception. Event-enhanced methods are also being proposed to tackle complex tasks such as blurry video super-resolution.

Noteworthy papers include:

Perturbed State Space Feature Encoders for Optical Flow with Event Cameras, which proposes a novel feature encoder for event-based optical flow estimation.

Seeing like a Cephalopod: Colour Vision with a Monochrome Event Camera, which presents a bio-inspired approach to spectral sensing.

CM3AE: A Unified RGB Frame and Event-Voxel/-Frame Pre-training Framework, which introduces a pre-training framework for multi-modal fusion scenarios.
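Several of the directions above, including the event-voxel representation named in CM3AE and learned optical flow more generally, depend on converting the camera's sparse, asynchronous events into a dense tensor that standard networks can consume. The sketch below shows one widely used conversion, a time-bilinear voxel grid; the function name, event layout, and bin count are illustrative assumptions and not an interface from any of the listed papers.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Rasterize a sparse event stream into a dense voxel grid.

    Assumes `events` is an (N, 4) array of (t, x, y, polarity) rows,
    sorted by timestamp, with polarity in {-1, +1} and pixel
    coordinates inside the sensor resolution. This layout is a common
    convention, not one mandated by the papers above.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel

    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]

    # Normalize timestamps to [0, num_bins - 1] so each event spreads
    # its polarity over the two temporally adjacent bins (bilinear in time).
    t_norm = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    bin_lo = np.floor(t_norm).astype(int)
    frac = t_norm - bin_lo

    # Accumulate each event's polarity into the lower bin ...
    np.add.at(voxel, (bin_lo, y, x), p * (1.0 - frac))
    # ... and the remainder into the upper bin, clipped to the grid.
    bin_hi = np.clip(bin_lo + 1, 0, num_bins - 1)
    np.add.at(voxel, (bin_hi, y, x), p * frac)
    return voxel
```

Downstream networks then treat the resulting voxel grid like a multi-channel image, which is one common way event data is fused with RGB frames in multi-modal pipelines.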

Sources

Hardware, Algorithms, and Applications of the Neuromorphic Vision Sensor: a Review

Perturbed State Space Feature Encoders for Optical Flow with Event Cameras

Seeing like a Cephalopod: Colour Vision with a Monochrome Event Camera

Focal Split: Untethered Snapshot Depth from Differential Defocus

Event Quality Score (EQS): Assessing the Realism of Simulated Event Camera Streams via Distances in Latent Space

CM3AE: A Unified RGB Frame and Event-Voxel/-Frame Pre-training Framework

Event-Enhanced Blurry Video Super-Resolution
