Masked Event Modeling: Self-Supervised Pretraining for Event Cameras

Simon Klenk David Bonello Lukas Koestler Nikita Araslanov Daniel Cremers

Abstract

Event cameras asynchronously capture brightness changes with low latency, high temporal resolution, and high dynamic range. However, annotating event data is a costly and laborious process, which limits the use of deep learning methods for classification and other semantic tasks in the event modality. To reduce the dependency on labeled event data, we introduce Masked Event Modeling (MEM), a self-supervised framework for events. Our method pretrains a neural network on unlabeled events, which can originate from any event camera recording. The pretrained model is then finetuned on a downstream task, leading to a consistent improvement in task accuracy. For example, our method reaches state-of-the-art classification accuracy across three datasets, N-ImageNet, N-Cars, and N-Caltech101, increasing the top-1 accuracy of previous work by significant margins. When tested on real-world event data, MEM even outperforms supervised RGB-based pretraining. Models pretrained with MEM are also label-efficient and generalize well to the dense task of semantic image segmentation.
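The abstract describes two ingredients: an event representation built from raw camera events, and a masked pretraining objective in which parts of the input are hidden and must be reconstructed. Below is a minimal NumPy sketch of both steps, assuming a per-polarity event-histogram representation and MAE-style random patch masking; the function names, patch size, and mask ratio are illustrative choices, not the authors' implementation.

```python
import numpy as np

def events_to_histogram(events, height, width):
    # Accumulate events (x, y, polarity) into a 2-channel histogram,
    # one channel per polarity. This is a common event representation;
    # the exact variant used by MEM may differ in detail.
    hist = np.zeros((2, height, width), dtype=np.float32)
    for x, y, p in events:
        hist[int(p), int(y), int(x)] += 1.0
    return hist

def mask_patches(hist, patch=4, mask_ratio=0.75, rng=None):
    # Randomly zero out square patches of the histogram (MAE-style).
    # Returns the masked histogram and a boolean patch-grid mask
    # (True = masked). A pretraining objective would then reconstruct
    # the masked patches from the visible ones.
    rng = rng or np.random.default_rng(0)
    _, h, w = hist.shape
    gh, gw = h // patch, w // patch
    n_patches = gh * gw
    mask = np.zeros(n_patches, dtype=bool)
    picked = rng.choice(n_patches, size=int(n_patches * mask_ratio),
                        replace=False)
    mask[picked] = True
    masked = hist.copy()
    for idx in np.flatnonzero(mask):
        r, c = divmod(idx, gw)
        masked[:, r*patch:(r+1)*patch, c*patch:(c+1)*patch] = 0.0
    return masked, mask.reshape(gh, gw)

# Toy usage: two events on an 8x8 sensor, then mask 75% of 4x4 patches.
events = [(0, 0, 1), (5, 5, 0)]
hist = events_to_histogram(events, height=8, width=8)
masked, mask = mask_patches(hist, patch=4, mask_ratio=0.75)
```

Because the representation is a dense tensor, any standard masked-image-modeling backbone (e.g. a ViT, matching the "Transformer" architecture listed under Benchmarks) can consume it directly.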

Code Repositories

tum-vision/mem (official, PyTorch)
Benchmarks

Benchmark: classification-on-n-cars
Methodology: MEM
Metrics:
  Accuracy (%): 98.55
  Architecture: Transformer
  Representation: Event Histogram
