Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

Abhinav Valada, Rohit Mohan, Wolfram Burgard

Abstract

Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location, and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation (SSMA) fusion mechanism, which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling module that has a larger effective receptive field with more than 10x fewer parameters, complemented by a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on several benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance.
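The adaptive fusion described above can be thought of as a channel-gating block: the two modality feature maps are concatenated, squeezed through a bottleneck, and a sigmoid gate recalibrates each channel at each spatial location before a final projection produces the fused representation. A minimal NumPy sketch follows, with 1x1 convolutions modeled as channel-wise matrix multiplies; the paper's actual SSMA block uses 3x3 convolutions and batch normalization, so the weight shapes and the `ssma_fuse` helper here are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ssma_fuse(feat_a, feat_b, w_reduce, w_expand, w_out):
    """SSMA-style gated fusion of two modality feature maps (sketch).

    feat_a, feat_b: (C, H, W) encoder features from each modality stream.
    w_reduce: (2C/r, 2C) bottleneck weights (reduction ratio r).
    w_expand: (2C, 2C/r) expansion weights producing the gate logits.
    w_out:    (C, 2C) output projection back to C fused channels.
    """
    x = np.concatenate([feat_a, feat_b], axis=0)   # (2C, H, W) concat
    c2, h, w = x.shape
    flat = x.reshape(c2, -1)                       # (2C, H*W)
    z = np.maximum(w_reduce @ flat, 0.0)           # bottleneck + ReLU
    gate = sigmoid(w_expand @ z)                   # per-location channel gate in (0, 1)
    recal = gate * flat                            # recalibrated concatenated features
    fused = (w_out @ recal).reshape(-1, h, w)      # project to (C, H, W)
    return fused
```

Because the gate is computed from both modalities jointly, the block can suppress, say, unreliable depth channels in regions where the RGB stream is more informative, which is the behavior the abstract attributes to the fusion mechanism.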

Code Repositories

DeepSceneSeg/SSMA (TensorFlow)
Benchmarks

Benchmark                                   Method           Metric             Value
Scene Recognition on ScanNet                SSMA             Average Recall     54.28
Semantic Segmentation on Cityscapes         AdapNet++        Mean IoU (class)   81.24%
Semantic Segmentation on Cityscapes         SSMA             Mean IoU (class)   82.3%
Semantic Segmentation on Freiburg Forest    AdapNet++        Mean IoU           83.09
Semantic Segmentation on Freiburg Forest    SSMA             Mean IoU           84.18
Semantic Segmentation on ScanNetV2          AdapNet++        Mean IoU           50.3
Semantic Segmentation on ScanNetV2          SSMA             Mean IoU           57.7
Semantic Segmentation on SUN-RGBD           DPLNet           Mean IoU           38.4
Semantic Segmentation on SUN-RGBD           TokenFusion (S)  Mean IoU           45.73
Semantic Segmentation on SYNTHIA (CVPR'16)  AdapNet++        Mean IoU           87.87
Semantic Segmentation on SYNTHIA (CVPR'16)  SSMA             Mean IoU           92.1
