SelfTalk: A Self-Supervised Commutative Training Diagram to Comprehend 3D Talking Faces
Ziqiao Peng, Yihao Luo, Yue Shi, Hao Xu, Xiangyu Zhu, Jun He, Hongyan Liu, Zhaoxin Fan

Abstract
Speech-driven 3D face animation is a technique whose applications extend to various multimedia fields. Previous research has generated promising, realistic lip movements and facial expressions from audio signals. However, traditional regression models driven solely by data face several essential problems, such as difficulty in accessing precise labels and domain gaps between different modalities, leading to unsatisfactory results that lack precision and coherence. To enhance the visual accuracy of generated lip movements while reducing the dependence on labeled data, we propose SelfTalk, a novel framework that introduces self-supervision into a cross-modal network system to learn 3D talking faces. The framework constructs a network system consisting of three modules: a facial animator, a speech recognizer, and a lip-reading interpreter. The core of SelfTalk is a commutative training diagram that facilitates the exchange of compatible features among audio, text, and lip shape, enabling our models to learn the intricate connections between these factors. The proposed framework leverages the knowledge learned from the lip-reading interpreter to generate more plausible lip shapes. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. We recommend watching the supplementary video.
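The commutative training diagram can be understood as enforcing agreement between two paths from audio to text: directly through the speech recognizer, and indirectly through the facial animator followed by the lip-reading interpreter. The sketch below illustrates only this consistency idea with toy linear maps standing in for the paper's neural modules; the module internals, feature dimensions, and loss are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three SelfTalk modules (real ones are neural networks).
W_animator = rng.normal(size=(16, 8))    # audio features -> lip-shape features
W_recognizer = rng.normal(size=(16, 4))  # audio features -> text features
W_lipreader = rng.normal(size=(8, 4))    # lip-shape features -> text features

def facial_animator(audio):
    return audio @ W_animator

def speech_recognizer(audio):
    return audio @ W_recognizer

def lip_reading_interpreter(lip_shape):
    return lip_shape @ W_lipreader

def commutative_consistency_loss(audio):
    # Path 1: audio -> text, directly via the speech recognizer.
    text_direct = speech_recognizer(audio)
    # Path 2: audio -> lip shape -> text, via animator then lip reader.
    text_via_lips = lip_reading_interpreter(facial_animator(audio))
    # The diagram "commutes" when the two paths agree; training minimizes
    # their discrepancy, here a simple mean-squared error.
    return float(np.mean((text_direct - text_via_lips) ** 2))

audio = rng.normal(size=(2, 16))  # a batch of 2 audio feature vectors
loss = commutative_consistency_loss(audio)
```

Minimizing this discrepancy is what lets the lip-reading interpreter's knowledge constrain the animator toward lip shapes that remain intelligible as speech.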
Code Repositories
Benchmarks
| Benchmark | Methodology | Lip Vertex Error | FDD |
|---|---|---|---|
| 3d-face-animation-on-biwi-3d-audiovisual | SelfTalk | 4.2485 | 3.5761 |