Luo Yueru; Yan Xu; Zheng Chaoda; Zheng Chao; Mei Shuqi; Kun Tang; Cui Shuguang; Li Zhen

Abstract
Estimating accurate lane lines in 3D space remains challenging due to their sparse and slim nature. Previous works mainly focused on using images for 3D lane detection, leading to inherent projection error and loss of geometry information. To address these issues, we explore the potential of leveraging LiDAR for 3D lane detection, either as a standalone method or in combination with existing monocular approaches. In this paper, we propose M$^2$-3DLaneNet to integrate complementary information from multiple sensors. Specifically, M$^2$-3DLaneNet lifts 2D features into 3D space by incorporating geometry information from LiDAR data through depth completion. Subsequently, the lifted 2D features are further enhanced with LiDAR features through cross-modality BEV fusion. Extensive experiments on the large-scale OpenLane dataset demonstrate the effectiveness of M$^2$-3DLaneNet, regardless of the range (75m or 100m).
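The lift-and-fuse pipeline described in the abstract can be illustrated with a minimal PyTorch-style sketch: per-pixel image features are scattered onto a BEV grid using a LiDAR-completed dense depth map, then fused with LiDAR BEV features. All module names, tensor shapes, the `cam_to_bev` projection callable, and the concat-then-conv fusion below are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of cross-modality BEV fusion: image features are lifted
# to 3D with a (LiDAR-completed) dense depth map, splatted onto a BEV grid,
# and concatenated with LiDAR BEV features. Shapes and fusion choices are
# assumptions for illustration only.
import torch
import torch.nn as nn


class CrossModalBEVFusion(nn.Module):
    def __init__(self, img_channels: int, lidar_channels: int, bev_size: int = 200):
        super().__init__()
        self.bev_size = bev_size  # BEV grid resolution (bev_size x bev_size cells)
        self.fuse = nn.Sequential(
            nn.Conv2d(img_channels + lidar_channels, lidar_channels, 3, padding=1),
            nn.BatchNorm2d(lidar_channels),
            nn.ReLU(inplace=True),
        )

    def lift_image_to_bev(self, img_feat, depth, cam_to_bev):
        """Scatter per-pixel image features into BEV cells.

        img_feat:   (B, C, H, W) 2D image features
        depth:      (B, H, W) dense depth from LiDAR depth completion
        cam_to_bev: callable mapping a depth map to flattened integer BEV
                    indices (x_idx, y_idx) and a validity mask, each (H*W,)
        """
        B, C, H, W = img_feat.shape
        bev = img_feat.new_zeros(B, C, self.bev_size, self.bev_size)
        for b in range(B):
            # Project every pixel to a BEV cell using its completed depth and
            # the camera calibration hidden inside cam_to_bev.
            x_idx, y_idx, valid = cam_to_bev(depth[b])
            feat = img_feat[b].reshape(C, -1)[:, valid]   # (C, N_valid)
            bev[b, :, y_idx[valid], x_idx[valid]] = feat  # duplicates overwrite
        return bev

    def forward(self, img_feat, depth, cam_to_bev, lidar_bev):
        img_bev = self.lift_image_to_bev(img_feat, depth, cam_to_bev)
        return self.fuse(torch.cat([img_bev, lidar_bev], dim=1))
```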
Benchmarks
| Benchmark | Methodology | Curve | Extreme Weather | F1 (all) | FPS (PyTorch) | Intersection | Merge & Split | Night | Up & Down |
|---|---|---|---|---|---|---|---|---|---|
| 3d-lane-detection-on-openlane | M^2-3DLaneNet (Camera + LiDAR) | 60.7 | 56.2 | 55.5 | - | 43.8 | 51.4 | 51.6 | 53.4 |