MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
Julian Martin Eisenschlos; Sebastian Ruder; Piotr Czapla; Marcin Kardas; Sylvain Gugger; Jeremy Howard

Abstract
Pretrained language models are promising particularly for low-resource languages as they only require unlabelled data. However, training existing models requires huge amounts of compute, while pretrained cross-lingual models often underperform on low-resource languages. We propose Multi-lingual language model Fine-Tuning (MultiFiT) to enable practitioners to train and fine-tune language models efficiently in their own language. In addition, we propose a zero-shot method using an existing pretrained cross-lingual model. We evaluate our methods on two widely used cross-lingual classification datasets where they outperform models pretrained on orders of magnitude more data and compute. We release all models and code.
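To make the zero-shot route described above concrete, below is a minimal sketch of pseudo-label training, not the paper's implementation (which pairs a LASER-style cross-lingual teacher with a QRNN-based language model). A pretrained cross-lingual teacher labels unlabelled target-language documents, and those pseudo-labels are then used to fine-tune a monolingual classifier. The `cross_lingual_teacher` stub and the bag-of-embeddings student are hypothetical placeholders chosen only to keep the example self-contained and runnable.

```python
# Sketch of zero-shot classification via pseudo-labels (assumed setup, not the
# released MultiFiT code): a cross-lingual teacher labels target-language text,
# and a monolingual student is fine-tuned on those pseudo-labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 1000, 64, 4


def cross_lingual_teacher(token_ids: torch.Tensor) -> torch.Tensor:
    """Placeholder for a cross-lingual classifier trained on source-language
    labels; here it simply returns random class predictions."""
    return torch.randint(0, NUM_CLASSES, (token_ids.size(0),))


class MonolingualClassifier(nn.Module):
    """Stand-in for the fine-tuned target-language model: mean-pooled
    embeddings followed by a linear classifier head."""

    def __init__(self):
        super().__init__()
        self.embed = nn.EmbeddingBag(VOCAB_SIZE, EMB_DIM, mode="mean")
        self.head = nn.Linear(EMB_DIM, NUM_CLASSES)

    def forward(self, token_ids):
        return self.head(self.embed(token_ids))


# Toy "unlabelled" target-language corpus: 32 documents of 20 token ids each.
target_docs = torch.randint(0, VOCAB_SIZE, (32, 20))

# Step 1: the cross-lingual teacher provides pseudo-labels.
with torch.no_grad():
    pseudo_labels = cross_lingual_teacher(target_docs)

# Step 2: fine-tune the monolingual student on the pseudo-labelled data.
student = MonolingualClassifier()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
for epoch in range(3):
    logits = student(target_docs)
    loss = F.cross_entropy(logits, pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: pseudo-label loss {loss.item():.3f}")
```

In the paper's setup the student additionally benefits from language-model pretraining and fine-tuning on target-language text before the classifier head is trained, which is what lets it outperform the cross-lingual teacher it learns from.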
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| cross-lingual-document-classification-on | MultiFiT, pseudo | Accuracy: 91.62% |
| cross-lingual-document-classification-on-1 | MultiFiT, pseudo | Accuracy: 79.10% |
| cross-lingual-document-classification-on-10 | MultiFiT, pseudo | Accuracy: 76.02% |
| cross-lingual-document-classification-on-11 | MultiFiT, pseudo | Accuracy: 69.57% |
| cross-lingual-document-classification-on-2 | MultiFiT, pseudo | Accuracy: 89.42% |
| cross-lingual-document-classification-on-8 | MultiFiT, pseudo | Accuracy: 82.48% |
| cross-lingual-document-classification-on-9 | MultiFiT, pseudo | Accuracy: 67.83% |