MarrNet: 3D Shape Reconstruction via 2.5D Sketches


Abstract

3D object reconstruction from a single image is a highly under-determined problem, requiring strong prior knowledge of plausible 3D shapes. This introduces challenges for learning-based approaches, as 3D object annotations are scarce in real images. Previous work chose to train on synthetic data with ground truth 3D information, but suffered from domain adaptation when tested on real data. In this work, we propose MarrNet, an end-to-end trainable model that sequentially estimates 2.5D sketches and 3D object shape. Our disentangled, two-step formulation has three advantages. First, compared to full 3D shape, 2.5D sketches are much easier to recover from a 2D image; models that recover 2.5D sketches are also more likely to transfer from synthetic to real data. Second, for 3D reconstruction from 2.5D sketches, systems can learn purely from synthetic data, because we can easily render realistic 2.5D sketches without modeling object appearance variations in real images, including lighting and texture. This further relieves the domain adaptation problem. Third, we derive differentiable projective functions from 3D shape to 2.5D sketches; the framework is therefore end-to-end trainable on real images, requiring no human annotations. Our model achieves state-of-the-art performance on 3D shape reconstruction.
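To make the third point concrete, the sketch below mimics the idea of a projective function from a 3D voxel grid back to a 2.5D depth sketch, so the reconstruction can be checked for consistency against the estimated sketch. This is an illustrative NumPy version only, not the paper's code: it uses a hard first-hit projection, whereas the paper derives a differentiable variant suitable for end-to-end training. All function names here are our own.

```python
import numpy as np

def project_depth(voxels: np.ndarray) -> np.ndarray:
    """Render a front-facing depth map from a binary voxel grid.

    voxels: (D, H, W) array; axis 0 is the viewing direction.
    Returns an (H, W) depth map; rays that hit nothing get depth D.
    """
    d = voxels.shape[0]
    occupied = voxels > 0.5
    # Index of the first occupied voxel along each ray, or D if none.
    first_hit = np.where(occupied.any(axis=0), occupied.argmax(axis=0), d)
    return first_hit.astype(np.float32)

def reprojection_loss(voxels: np.ndarray, depth_sketch: np.ndarray) -> float:
    """L2 consistency between the projected depth and a 2.5D depth sketch."""
    return float(np.mean((project_depth(voxels) - depth_sketch) ** 2))

# Toy example: a 2x2x2 solid cube centered in an 8^3 grid.
vox = np.zeros((8, 8, 8))
vox[3:5, 3:5, 3:5] = 1.0
depth = project_depth(vox)       # cube pixels have depth 3, background 8
```

A consistency term of this form is what lets the two stages be trained jointly on real images without 3D annotations: the predicted shape only needs to reproject to the predicted 2.5D sketches.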

Benchmarks

Benchmark: 3d-shape-retrieval-on-pix3d
Methodology: MarrNet
Metrics:
R@1: 0.42
R@2: 0.51
R@4: 0.57
R@8: 0.64
R@16: 0.71
R@32: 0.78
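The R@k numbers above are retrieval recall at rank k: the fraction of query images whose ground-truth shape appears among the top k retrieved candidates. A minimal sketch of how such a metric is computed, assuming each query yields a ranked list of candidate shape IDs (the names and toy data are ours, not from the benchmark):

```python
def recall_at_k(rankings, ground_truth, k):
    """Fraction of queries whose correct shape appears in the top-k results.

    rankings: one ranked list of candidate IDs per query.
    ground_truth: the correct candidate ID for each query.
    """
    hits = sum(gt in ranked[:k] for ranked, gt in zip(rankings, ground_truth))
    return hits / len(ground_truth)

# Toy example: 2 queries over 4 candidate shapes, correct ID is 0 for both.
ranks = [[2, 0, 1, 3], [1, 3, 0, 2]]
truth = [0, 0]
recall_at_k(ranks, truth, 1)  # neither top-1 result is correct -> 0.0
recall_at_k(ranks, truth, 2)  # first query hits at rank 2 -> 0.5
```

By construction the metric is monotonically non-decreasing in k, which matches the pattern in the table (R@1 ≤ R@2 ≤ … ≤ R@32).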
