VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion
Yiming Li, Zhiding Yu, Christopher Choy, Chaowei Xiao, Jose M. Alvarez, Sanja Fidler, Chen Feng, Anima Anandkumar

Abstract
Humans can easily imagine the complete 3D geometry of occluded objects and scenes. This appealing ability is vital for recognition and understanding. To enable such capability in AI systems, we propose VoxFormer, a Transformer-based semantic scene completion framework that can output complete 3D volumetric semantics from only 2D images. Our framework adopts a two-stage design where we start from a sparse set of visible and occupied voxel queries from depth estimation, followed by a densification stage that generates dense 3D voxels from the sparse ones. A key idea of this design is that the visual features on 2D images correspond only to the visible scene structures rather than the occluded or empty spaces. Therefore, starting with the featurization and prediction of the visible structures is more reliable. Once we obtain the set of sparse queries, we apply a masked autoencoder design to propagate the information to all the voxels by self-attention. Experiments on SemanticKITTI show that VoxFormer outperforms the state of the art with a relative improvement of 20.0% in geometry and 18.1% in semantics and reduces GPU memory during training to less than 16GB. Our code is available at https://github.com/NVlabs/VoxFormer.
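The two-stage design described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names, the depth-threshold heuristic, and the plain softmax attention are assumptions, not the authors' actual implementation): stage 1 selects sparse voxel queries near the surface observed in a depth map; stage 2 lets every voxel attend to those visible voxels to propagate features, in the spirit of the masked-autoencoder densification.

```python
import numpy as np

def stage1_sparse_queries(depth, grid_z, thresh=0.5):
    """Stage 1 (sketch): mark voxels whose centre depth lies near the
    observed surface in the per-pixel depth map -> sparse visible queries."""
    # depth: (H, W) metric depth; grid_z: (Z,) voxel-centre depths
    occ = np.abs(depth[..., None] - grid_z) < thresh  # (H, W, Z) bool
    return occ

def stage2_densify(feats, visible_mask, d=8):
    """Stage 2 (sketch): propagate features from the visible voxels to
    all voxels with scaled dot-product attention (softmax over visible)."""
    q = feats                          # (N, d) queries: all voxels
    kv = feats[visible_mask]           # (M, d) keys/values: visible only
    scores = q @ kv.T / np.sqrt(d)     # (N, M) attention logits
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)  # rows sum to 1
    return w @ kv                      # (N, d) densified features
```

In the real model the densification runs over learned voxel features with mask tokens for the non-visible voxels; here the point is only the control flow: featurize what the camera can see, then fill in the rest by attention.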
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3d-semantic-scene-completion-from-a-single-1 | VoxFormer | mIoU: 12.20 |
| 3d-semantic-scene-completion-from-a-single-2 | VoxFormer | mIoU: 11.91 |
| 3d-semantic-scene-completion-on-kitti-360 | VoxFormer | mIoU: 11.91 |