mixup: Beyond Empirical Risk Minimization
Hongyi Zhang; Moustapha Cisse; Yann N. Dauphin; David Lopez-Paz

Abstract
Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
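As a rough illustration of the idea (not the authors' released implementation), mixup constructs virtual training examples as convex combinations of pairs of inputs and their labels, x̃ = λx_i + (1 − λ)x_j and ỹ = λy_i + (1 − λ)y_j with λ ∼ Beta(α, α). The PyTorch-style sketch below mixes each batch with a shuffled copy of itself; the function name `mixup_batch` and the default α = 0.2 are illustrative assumptions, and labels are assumed to be one-hot (or otherwise soft) float tensors.

```python
import numpy as np
import torch

def mixup_batch(x, y, alpha=0.2):
    """Return a convex combination of a batch with a shuffled copy of itself.

    x: input batch (e.g. images), shape (N, ...).
    y: label batch as one-hot / soft float tensor, shape (N, num_classes).
    alpha: parameter of the Beta(alpha, alpha) distribution for the mixing weight.
    """
    # Sample a single mixing coefficient for the whole batch.
    lam = float(np.random.beta(alpha, alpha)) if alpha > 0 else 1.0

    # Pair each example with a randomly chosen partner from the same batch.
    index = torch.randperm(x.size(0))

    # Mix both the inputs and the labels with the same coefficient.
    mixed_x = lam * x + (1.0 - lam) * x[index]
    mixed_y = lam * y + (1.0 - lam) * y[index]
    return mixed_x, mixed_y
```

In a training loop, the mixed batch simply replaces the original one before the forward pass, e.g. `x_m, y_m = mixup_batch(x, y); loss = criterion(model(x_m), y_m)` with a loss that accepts soft targets.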
Benchmarks
| Benchmark | Methodology | Metric | Value |
|---|---|---|---|
| domain-generalization-on-imagenet-a | Mixup (ResNet-50) | Top-1 accuracy (%) | 6.6 |
| image-classification-on-cifar-10 | DenseNet-BC-190 + Mixup | Percentage correct | 97.3 |
| image-classification-on-cifar-100 | DenseNet-BC-190 + Mixup | Percentage correct | 83.20 |
| image-classification-on-kuzushiji-mnist | PreActResNet-18 + Input Mixup | Accuracy | 98.41 |
| semi-supervised-image-classification-on-cifar-6 | MixUp | Percentage error | 47.43 |
| semi-supervised-image-classification-on-svhn-1 | MixUp | Accuracy | 60.03 |