Gyeongsik Moon, Hyeongjin Nam, Takaaki Shiratori, Kyoung Mu Lee

Abstract
Although much progress has been made in 3D clothed human reconstruction, most of the existing methods fail to produce robust results from in-the-wild images, which contain diverse human poses and appearances. This is mainly due to the large domain gap between training datasets and in-the-wild datasets. The training datasets are usually synthetic ones, which contain images rendered from GT 3D scans. However, such datasets contain simpler human poses and less natural image appearances than real in-the-wild datasets, which makes generalizing to in-the-wild images extremely challenging. To resolve this issue, we propose ClothWild, a 3D clothed human reconstruction framework that is the first to address robustness on in-the-wild images. First, for robustness to the domain gap, we propose a weakly supervised pipeline that is trainable with 2D supervision targets from in-the-wild datasets. Second, we design a DensePose-based loss function to reduce the ambiguities of the weak supervision. Extensive empirical tests on several public in-the-wild datasets demonstrate that our proposed ClothWild produces much more accurate and robust results than state-of-the-art methods. The code is available at https://github.com/hygenie1228/ClothWild_RELEASE.
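The abstract does not give the exact form of the DensePose-based loss; the sketch below only illustrates the general idea of using DensePose-style pixel-to-surface correspondences to supervise a per-vertex cloth prediction with 2D cloth-segmentation labels. All tensor names and shapes here are hypothetical, not the paper's API.

```python
import numpy as np

# Hypothetical inputs (illustrative names, not from the paper):
#   cloth_logits:  per-vertex cloth-existence scores on the body surface, shape (V,)
#   pix_to_vertex: DensePose-style correspondence, one surface-vertex id per
#                  foreground pixel, shape (P,)
#   cloth_mask:    2D cloth-segmentation label per foreground pixel, 0 or 1, shape (P,)

def densepose_cloth_loss(cloth_logits, pix_to_vertex, cloth_mask):
    """Binary cross-entropy between surface cloth predictions, gathered at
    image pixels via DensePose correspondences, and 2D segmentation labels."""
    # Gather the surface prediction at each pixel and squash to a probability.
    p = 1.0 / (1.0 + np.exp(-cloth_logits[pix_to_vertex]))
    eps = 1e-7  # numerical guard for log(0)
    return -np.mean(cloth_mask * np.log(p + eps)
                    + (1.0 - cloth_mask) * np.log(1.0 - p + eps))
```

The key point the correspondence provides is disambiguation: a bare 2D silhouette cannot say *which* body part a cloth pixel belongs to, while a pixel-to-surface mapping ties each 2D label to a specific location on the 3D body.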
Code Repositories
https://github.com/hygenie1228/ClothWild_RELEASE
Benchmarks
| Benchmark | Methodology | Chamfer (cm) | IoU |
|---|---|---|---|
| garment-reconstruction-on-4d-dress | ClothWild_Upper | 3.279 | 0.533 |
| garment-reconstruction-on-4d-dress | ClothWild_Lower | 2.690 | 0.582 |
| garment-reconstruction-on-4d-dress | ClothWild_Outer | 4.163 | 0.588 |