Masked Generative Video-to-Audio Transformers with Enhanced Synchronicity
Santiago Pascual Chunghsin Yeh Ioannis Tsiamas Joan Serrà

Abstract
Video-to-audio (V2A) generation leverages visual-only video features to render plausible sounds that match the scene. Importantly, the generated sound onsets should match the visual actions that are aligned with them, otherwise unnatural synchronization artifacts arise. Recent works have explored the progression from conditioning sound generators on still images to video features, focusing on quality and semantic matching while ignoring synchronization, or have sacrificed some amount of quality to focus on improving synchronization only. In this work, we propose a V2A generative model, named MaskVAT, that interconnects a full-band high-quality general audio codec with a sequence-to-sequence masked generative model. This combination allows modeling high audio quality, semantic matching, and temporal synchronicity at the same time. Our results show that, by combining a high-quality codec with the proper pre-trained audio-visual features and a sequence-to-sequence parallel structure, we are able to yield highly synchronized results while remaining competitive with the state of the art of non-codec generative audio models. Sample videos and generated audios are available at https://maskvat.github.io .
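The abstract describes pairing an audio codec with a masked generative model decoded in parallel. As a rough illustration of how such masked generative decoding over codec tokens typically proceeds (a minimal MaskGIT-style sketch with a random stand-in for the transformer; `VOCAB`, `SEQ_LEN`, the cosine re-masking schedule, and all function names are illustrative assumptions, not MaskVAT's actual implementation):

```python
import numpy as np

MASK = -1      # sentinel id for masked codec tokens (assumption)
VOCAB = 1024   # hypothetical codec codebook size
SEQ_LEN = 16   # hypothetical token sequence length

def dummy_model(tokens, rng):
    """Stand-in for the conditional transformer: returns per-position
    logits over the codec vocabulary (random here, for illustration)."""
    return rng.standard_normal((len(tokens), VOCAB))

def iterative_decode(steps=8, seed=0):
    """Parallel masked decoding: start fully masked; at each step keep
    the most confident predictions and re-mask the rest according to a
    decaying (cosine) schedule until no positions remain masked."""
    rng = np.random.default_rng(seed)
    tokens = np.full(SEQ_LEN, MASK)
    for t in range(steps):
        logits = dummy_model(tokens, rng)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)
        conf[tokens != MASK] = np.inf  # already-committed tokens stay
        # fraction of positions to re-mask shrinks to 0 at the last step
        n_mask = int(SEQ_LEN * np.cos(np.pi / 2 * (t + 1) / steps))
        tokens = np.where(tokens == MASK, pred, tokens)
        if n_mask > 0:
            remask = np.argsort(conf)[:n_mask]  # least confident positions
            tokens[remask] = MASK
    return tokens
```

In a real V2A system, the dummy model would be replaced by a transformer conditioned on video features, and the decoded token sequence would be passed to the codec decoder to synthesize the waveform.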
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Video-to-Sound Generation on VGG-Sound | MaskVAT_Hybrid | FAD: 2.04 |