Subham Sekhar Sahoo, Marianne Arriola, Yair Schiff, Aaron Gokaslan, Edgar Marroquin, Justin T. Chiu, Alexander Rush, Volodymyr Kuleshov

Abstract
While diffusion models excel at generating high-quality images, prior work reports a significant performance gap between diffusion and autoregressive (AR) methods in language modeling. In this work, we show that simple masked discrete diffusion is more performant than previously thought. We apply an effective training recipe that improves the performance of masked diffusion models and derive a simplified, Rao-Blackwellized objective that results in additional improvements. Our objective has a simple form -- it is a mixture of classical masked language modeling losses -- and can be used to train encoder-only language models that admit efficient samplers, including ones that can generate arbitrary lengths of text semi-autoregressively like a traditional language model. On language modeling benchmarks, a range of masked diffusion models trained with modern engineering practices achieves a new state-of-the-art among diffusion models, and approaches AR perplexity. We release our code at: https://github.com/kuleshov-group/mdlm
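To make the "mixture of classical masked language modeling losses" concrete, here is a minimal illustrative sketch of a schedule-weighted masked-LM loss in the spirit of the abstract. It assumes a simple log-linear masking schedule and hypothetical names (`model`, `mask_token_id`); it is not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

# Sketch: a time-weighted mixture of masked language modeling losses,
# as described in the abstract. All names here are illustrative assumptions.

def alpha(t):
    """Example log-linear schedule: probability a token stays unmasked at time t."""
    return 1.0 - t

def masked_diffusion_loss(model, x, mask_token_id, eps=1e-4):
    batch, seq_len = x.shape
    # Sample one diffusion time per sequence, bounded away from 0.
    t = torch.rand(batch, 1) * (1 - eps) + eps
    # Mask each token independently with probability 1 - alpha(t).
    keep_prob = alpha(t)                                  # (batch, 1)
    mask = torch.rand(batch, seq_len) > keep_prob         # True = masked
    z_t = torch.where(mask, torch.full_like(x, mask_token_id), x)
    # Encoder-only model predicts the clean tokens at every position.
    logits = model(z_t)                                   # (batch, seq_len, vocab)
    ce = F.cross_entropy(logits.transpose(1, 2), x, reduction="none")  # (batch, seq_len)
    # Weight by -alpha'(t) / (1 - alpha(t)); for alpha(t) = 1 - t this is 1 / t.
    weight = 1.0 / t                                      # (batch, 1)
    # Average the weighted loss over masked positions only.
    return (weight * ce * mask.float()).sum() / mask.float().sum().clamp(min=1)
```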
Code Repositories

- https://github.com/kuleshov-group/mdlm
Benchmarks
| Benchmark | Model | Params | PPL |
|---|---|---|---|
| Language Modelling on One Billion Word | MDLM (AR baseline) | 110M | 20.09 |
| Language Modelling on One Billion Word | MDLM | 110M | 23.00 |