Noa Garnett, Rafi Cohen, Tomer Pe'er, Roee Lahav, Dan Levi

Abstract
We introduce a network that directly predicts the 3D layout of lanes in a road scene from a single image. This work marks a first attempt to address this task with on-board sensing without assuming a known constant lane width or relying on pre-mapped environments. Our network architecture, 3D-LaneNet, applies two new concepts: intra-network inverse-perspective mapping (IPM) and anchor-based lane representation. The intra-network IPM projection facilitates a dual-representation information flow in both regular image-view and top-view. An anchor-per-column output representation enables our end-to-end approach, which replaces common heuristics such as clustering and outlier rejection, casting lane estimation as an object detection problem. In addition, our approach explicitly handles complex situations such as lane merges and splits. Results are shown on two new 3D lane datasets, a synthetic and a real one. For comparison with existing methods, we test our approach on the image-only tuSimple lane detection benchmark, achieving performance competitive with state-of-the-art.
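The intra-network IPM named in the abstract lends itself to a small illustration. Below is a minimal, hypothetical sketch of the idea: an image-view feature map is differentiably resampled onto a top-view grid through a homography, so that both representations can feed subsequent layers. The module name `ProjectiveTransform`, the placeholder homography, and the grid size are illustrative assumptions, not the paper's exact layer.

```python
# Hypothetical sketch of an intra-network IPM: warp image-view features to a
# top-view grid with a homography, differentiably, via grid_sample.
import torch
import torch.nn.functional as F

def ipm_grid(homography: torch.Tensor, out_h: int, out_w: int) -> torch.Tensor:
    """Build a sampling grid mapping each top-view pixel back to normalized
    image-view coordinates through the inverse-perspective homography."""
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, out_h),
        torch.linspace(-1.0, 1.0, out_w),
        indexing="ij",
    )
    ones = torch.ones_like(xs)
    pts = torch.stack([xs, ys, ones], dim=-1).reshape(-1, 3)   # (H*W, 3) homogeneous
    src = (homography @ pts.T).T                                # project to image view
    src = src[:, :2] / src[:, 2:3].clamp(min=1e-6)              # dehomogenize
    return src.reshape(1, out_h, out_w, 2)                      # grid_sample layout

class ProjectiveTransform(torch.nn.Module):
    """Warps an image-view feature map onto the top-view grid (differentiable)."""
    def __init__(self, homography: torch.Tensor, out_hw=(104, 32)):
        super().__init__()
        self.register_buffer("grid", ipm_grid(homography, *out_hw))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        grid = self.grid.expand(feat.size(0), -1, -1, -1)
        return F.grid_sample(feat, grid, align_corners=False)

# Usage: warp a batch of image-view features to the top view.
H = torch.eye(3)                                 # placeholder homography (camera-dependent)
warp = ProjectiveTransform(H, out_hw=(104, 32))
topview = warp(torch.randn(2, 64, 90, 120))      # -> (2, 64, 104, 32)
```

Because the warp is just a `grid_sample` over a fixed (or camera-dependent) grid, gradients flow through it, which is what allows the projection to sit inside the network rather than being a preprocessing step.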
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3d-lane-detection-on-apollo-synthetic-3d-lane | 3D-LaneNet | F1: 86.4; X error far: 0.477; X error near: 0.068; Z error far: 0.202; Z error near: 0.015 |
| 3d-lane-detection-on-openlane | 3D-LaneNet | Curve: 46.5; Extreme Weather: 47.5; F1 (all): 44.1; FPS (pytorch): -; Intersection: 32.1; Merge & Split: 41.7; Night: 41.5; Up & Down: 40.8 |
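The X/Z near and far errors in the Apollo Synthetic row measure lateral and height deviations of matched lanes, split by distance along the driving direction. The following is a rough sketch only, under assumptions not stated on this page: lanes are already matched, prediction and ground truth are resampled at common longitudinal positions, and "near" and "far" split at 40 m; the official benchmark scripts may use different conventions.

```python
# Hypothetical sketch of near/far X and Z error aggregation for one matched lane.
import numpy as np

def xz_errors(pred: np.ndarray, gt: np.ndarray, y: np.ndarray, near_max: float = 40.0):
    """pred, gt: (N, 2) arrays of (x, z) in metres at longitudinal positions y (N,)."""
    abs_err = np.abs(pred - gt)                    # per-point |dx|, |dz|
    near, far = y <= near_max, y > near_max        # assumed 40 m split
    return {
        "x_near": abs_err[near, 0].mean(), "x_far": abs_err[far, 0].mean(),
        "z_near": abs_err[near, 1].mean(), "z_far": abs_err[far, 1].mean(),
    }

# Example: one matched lane sampled every 10 m from 10 m to 100 m.
y = np.arange(10.0, 101.0, 10.0)
gt = np.stack([np.full_like(y, 1.8), np.zeros_like(y)], axis=1)   # x = 1.8 m, flat road
pred = gt + np.random.normal(scale=[0.05, 0.02], size=gt.shape)   # small x/z noise
print(xz_errors(pred, gt, y))
```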