Diffusion 3D Features (Diff3F): Decorating Untextured Shapes with Distilled Semantic Features
Niladri Shekhar Dutt; Sanjeev Muralikrishnan; Niloy J. Mitra

Abstract
We present Diff3F as a simple, robust, and class-agnostic feature descriptor that can be computed for untextured input shapes (meshes or point clouds). Our method distills diffusion features from image foundational models onto input shapes. Specifically, we use the input shapes to produce depth and normal maps as guidance for conditional image synthesis. In the process, we produce (diffusion) features in 2D that we subsequently lift and aggregate on the original surface. Our key observation is that even if the conditional image generations obtained from multi-view rendering of the input shapes are inconsistent, the associated image features are robust and, hence, can be directly aggregated across views. This produces semantic features on the input shapes, without requiring additional data or training. We perform extensive experiments on multiple benchmarks (SHREC'19, SHREC'20, FAUST, and TOSCA) and demonstrate that our features, being semantic instead of geometric, produce reliable correspondence across both isometric and non-isometrically related shape families. Code is available via the project page at https://diff3f.github.io/
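The pipeline the abstract describes is: render the shape from several viewpoints, run depth/normal-conditioned diffusion to obtain a 2D feature map per view, then back-project and average those features onto the surface. Below is a minimal PyTorch sketch of the final lift-and-aggregate step; the function name, tensor layouts, and the assumption that the renderer supplies per-vertex visibility masks and pixel coordinates are illustrative placeholders, not the paper's actual interface.

```python
import torch

def aggregate_multiview_features(per_view_feats, per_view_vis, per_view_pix,
                                 num_vertices, feat_dim):
    """Average 2D diffusion features over all views where each vertex is visible.

    per_view_feats: list of (H, W, D) feature maps, one per rendered view
    per_view_vis:   list of (V,) boolean vertex-visibility masks
    per_view_pix:   list of (V, 2) integer (x, y) pixel coordinates per vertex
    """
    feats = torch.zeros(num_vertices, feat_dim)
    counts = torch.zeros(num_vertices, 1)
    for fmap, vis, pix in zip(per_view_feats, per_view_vis, per_view_pix):
        idx = vis.nonzero(as_tuple=True)[0]      # vertices visible in this view
        rows, cols = pix[idx, 1], pix[idx, 0]    # sample feature map at (y, x)
        feats[idx] += fmap[rows, cols]           # lift 2D features onto vertices
        counts[idx] += 1
    return feats / counts.clamp(min=1)           # per-vertex semantic descriptor
```

Because the per-view features are robust even when the generated images disagree, plain averaging across views suffices; no cross-view consistency enforcement is needed.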
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3d-dense-shape-correspondence-on-shrec-19 | Diffusion 3D Features (Zero-shot) | Accuracy at 1%: 26.4; Euclidean Mean Error (EME): 1.7 |
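For context, the two leaderboard metrics can be sketched as follows. This is an assumed formulation: per-point Euclidean distance between predicted and ground-truth correspondence locations, with the accuracy threshold taken as 1% of the shape diameter; the benchmark's exact normalization may differ.

```python
import torch

def correspondence_metrics(pred_pts, gt_pts, diameter):
    """pred_pts, gt_pts: (N, 3) matched 3D points; diameter: scalar shape extent."""
    err = (pred_pts - gt_pts).norm(dim=-1)                 # per-point Euclidean error
    eme = err.mean().item()                                # Euclidean Mean Error (EME)
    acc = (err <= 0.01 * diameter).float().mean().item()   # fraction within 1% of diameter
    return eme, acc
```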