360° from a Single Camera: A Few-Shot Approach for LiDAR Segmentation
Laurenz Reichardt, Nikolas Ebert, Oliver Wasenmüller

Abstract
Deep learning applications on LiDAR data suffer from a strong domain gap when applied to different sensors or tasks. For these methods to reach accuracy on new data comparable to the values reported on public benchmarks, a large-scale annotated dataset is necessary. However, in practical applications labeled data is costly and time-consuming to obtain. These factors have spurred a range of research into label-efficient methods, yet a large gap to their fully-supervised counterparts remains. We therefore propose ImageTo360, an effective and streamlined few-shot approach to label-efficient LiDAR segmentation. Our method uses an image teacher network to generate semantic predictions for the LiDAR points within a single camera view. The teacher is used to pretrain the LiDAR segmentation student network, prior to optional fine-tuning on 360° data. Our method operates at the point level in a modular manner and is therefore generalizable to different architectures. We improve over the current state-of-the-art results for label-efficient methods and even surpass some traditional fully-supervised segmentation networks.
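The pipeline described in the abstract — projecting LiDAR points into the camera frustum, letting an image segmentation teacher pseudo-label the points it can see, and pretraining a point-level student on those pseudo-labels — can be sketched roughly as below. This is a minimal PyTorch sketch under assumed interfaces: the `project_points_to_image` helper, the teacher/student call signatures, and the plain cross-entropy objective are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def project_points_to_image(points_xyz, T_cam_lidar, K, image_hw):
    """Project LiDAR points into the camera and return integer pixel
    coordinates plus a mask of points visible in the single camera view.
    (Hypothetical helper; calibration conventions are assumed.)"""
    n = points_xyz.shape[0]
    homog = torch.cat([points_xyz, torch.ones(n, 1)], dim=1)        # (N, 4) homogeneous LiDAR points
    cam = (T_cam_lidar @ homog.T).T[:, :3]                          # (N, 3) points in camera frame
    in_front = cam[:, 2] > 0.1                                      # keep points in front of the camera
    uv = (K @ cam.T).T                                              # (N, 3) pinhole projection
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)                     # perspective divide
    h, w = image_hw
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv.long(), in_front & in_img


def pretrain_step(student, image_teacher, image, points_xyz, T_cam_lidar, K, optimizer):
    """One pretraining step: the frozen image teacher pseudo-labels the points
    that fall inside the camera view, and the point-level student is trained
    on those labels only (assumed interfaces, sketch only)."""
    with torch.no_grad():
        sem_map = image_teacher(image.unsqueeze(0)).argmax(dim=1)[0]  # (H, W) per-pixel class ids
    uv, visible = project_points_to_image(points_xyz, T_cam_lidar, K, sem_map.shape)
    pseudo_labels = sem_map[uv[visible, 1], uv[visible, 0]]           # labels for visible points only
    logits = student(points_xyz)                                      # (N, num_classes) point-level predictions
    loss = F.cross_entropy(logits[visible], pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the pseudo-labels exist only for the camera-visible subset of points, the sketch masks the loss accordingly; the optional fine-tuning stage on 360° annotated scans would follow the same student forward pass with ground-truth labels instead.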
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| semi-supervised-semantic-segmentation-on-24 | 360° from a Single Camera: A Few-Shot Approach for LiDAR Segmentation (All) | mIoU (1% Test set): 57.7; mIoU (1% Labels): 59.5; mIoU (10% Labels): 62.4; mIoU (20% Labels): 64.2; mIoU (50% Labels): 66.1 |