InconSeg: Residual-Guided Fusion With Inconsistent Multi-Modal Data for Negative and Positive Road Obstacles Segmentation

Zhen Feng, Yanning Guo, David Navarro-Alarcon, Yueyong Lyu, and Yuxiang Sun

Abstract

Segmentation of road obstacles, including negative and positive obstacles, is critical to the safe navigation of autonomous vehicles. Recent methods have witnessed an increasing interest in using multi-modal data fusion (e.g., RGB and depth/disparity images). Although improved segmentation accuracy has been achieved by these methods, we still find that their performance could be easily degraded if the two modalities have inconsistent information, for example, distant obstacles that can be viewed in RGB images but cannot be viewed in depth/disparity images. To address this issue, we propose a novel two-encoder-two-decoder RGB-depth/disparity multi-modal network with Residual-Guided Fusion modules. Different from most existing networks that fuse feature maps in the encoders, we fuse feature maps in the decoders. We also release a large-scale RGB-depth/disparity dataset recorded in both urban and rural environments with manually-labeled ground truth for both negative- and positive-obstacle segmentation. Extensive experimental results demonstrate that our network achieves state-of-the-art performance compared with other networks.
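The abstract does not detail the Residual-Guided Fusion module, so the following is only a minimal, hypothetical sketch of the general idea it suggests: using the residual between the RGB and depth/disparity feature maps as a gate, so that regions where the two modalities disagree (e.g., distant obstacles visible only in RGB) lean more on the RGB features. The function name, gating formula, and NumPy stand-in for feature maps are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def residual_guided_fusion(rgb_feat, depth_feat):
    """Hypothetical residual-gated blend of two modality feature maps.

    Where |rgb - depth| is large (inconsistent modalities), the gate
    approaches 1 and the output relies mostly on the RGB features;
    where the modalities agree, the two are averaged evenly.
    """
    residual = rgb_feat - depth_feat
    # Sigmoid of the residual magnitude: 0.5 when consistent, -> 1 when not.
    gate = 1.0 / (1.0 + np.exp(-np.abs(residual)))
    return gate * rgb_feat + (1.0 - gate) * depth_feat

# Toy example: depth features are zero (e.g., obstacle missing in depth).
rgb = np.random.rand(2, 4, 4)
depth = np.zeros_like(rgb)
fused = residual_guided_fusion(rgb, depth)
```

When the two feature maps are identical, the gate is 0.5 everywhere and the fusion returns the shared features unchanged; the larger the disagreement, the more the output follows the RGB branch.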

Benchmarks

Benchmark: road-damage-detection-on-npo
Methodology: InconSeg
Metrics: mIoU: 83.88
