HyperAI



BiFuse: Monocular 360 Depth Estimation via Bi-Projection Fusion

Fu-En Wang, Yu-Hsuan Yeh, Min Sun, Wei-Chen Chiu, Yi-Hsuan Tsai


Abstract

Depth estimation from a monocular 360 image is an emerging problem that has gained popularity due to the availability of consumer-level 360 cameras and their complete surround-sensing capability. While the standard for 360 imaging is still under rapid development, we propose to predict the depth map of a monocular 360 image by mimicking both the peripheral and foveal vision of the human eye. To this end, we adopt a two-branch neural network leveraging two common projections: equirectangular and cubemap. In particular, the equirectangular projection provides a complete field-of-view but introduces distortion, whereas the cubemap projection avoids distortion but introduces discontinuities at the boundaries of the cube. We therefore propose a bi-projection fusion scheme with learnable masks to balance the feature maps from the two projections. Moreover, for the cubemap projection, we propose a spherical padding procedure that mitigates the discontinuity at the boundary of each face. We apply our method to four panorama datasets and show favorable results against existing state-of-the-art methods.
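To make the relationship between the two projections concrete, the sketch below maps a point on one cubemap face back to pixel coordinates in an equirectangular image. This is a generic coordinate conversion under standard spherical conventions, not the paper's implementation; the function name and the choice of the front face looking along +z are illustrative assumptions.

```python
import math

def cube_face_to_equirect(u, v, width, height):
    """Map a point (u, v) in [-1, 1]^2 on the front cubemap face
    (assumed to look along +z) to (column, row) coordinates in a
    width x height equirectangular image.

    Hypothetical helper for illustration; each of the six faces
    would use its own direction vector in a full conversion.
    """
    # Direction from the cube center through the face point.
    x, y, z = u, v, 1.0
    r = math.sqrt(x * x + y * y + z * z)
    # Spherical angles of that direction.
    lon = math.atan2(x, z)      # longitude in [-pi, pi]
    lat = math.asin(y / r)      # latitude in [-pi/2, pi/2]
    # Linear mapping of (lon, lat) onto the equirectangular grid.
    col = (lon / (2 * math.pi) + 0.5) * width
    row = (lat / math.pi + 0.5) * height
    return col, row

# The face center (0, 0) lands at the middle of the panorama,
# while face corners land away from it, covering a 90-degree
# field of view per face.
print(cube_face_to_equirect(0.0, 0.0, 512, 256))
```

Sampling the equirectangular image at these coordinates for every face pixel is what produces the distortion-free (but boundary-discontinuous) cubemap branch input described above.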

Benchmarks

Benchmark | Methodology | Metrics
--- | --- | ---
Depth Estimation on Stanford2D3D Panoramic | BiFuse (with fusion) | RMSE: 0.4142; absolute relative error: 0.1209

