Natural Language Inference on MultiNLI Dev
Metrics: Matched accuracy, Mismatched accuracy

Results
Performance results of various models on this benchmark (MultiNLI dev, matched and mismatched splits):
Model Name | Matched | Mismatched | Paper Title | Repository |
---|---|---|---|---|
TinyBERT-6 67M | 84.5 | 84.5 | TinyBERT: Distilling BERT for Natural Language Understanding | |
BERT-Large-uncased-PruneOFA (90% unstruct sparse) | 83.74 | 84.2 | Prune Once for All: Sparse Pre-Trained Language Models | |
BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 83.47 | 84.08 | Prune Once for All: Sparse Pre-Trained Language Models | |
BERT-Base-uncased-PruneOFA (85% unstruct sparse) | 82.71 | 83.67 | Prune Once for All: Sparse Pre-Trained Language Models | |
BERT-Base-uncased-PruneOFA (90% unstruct sparse) | 81.45 | 82.43 | Prune Once for All: Sparse Pre-Trained Language Models | |
BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 81.4 | 82.51 | Prune Once for All: Sparse Pre-Trained Language Models | |
DistilBERT-uncased-PruneOFA (85% unstruct sparse) | 81.35 | 82.03 | Prune Once for All: Sparse Pre-Trained Language Models | |
DistilBERT-uncased-PruneOFA (90% unstruct sparse) | 80.68 | 81.47 | Prune Once for All: Sparse Pre-Trained Language Models | |
DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 80.66 | 81.14 | Prune Once for All: Sparse Pre-Trained Language Models | |
DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 78.8 | 80.4 | Prune Once for All: Sparse Pre-Trained Language Models | |
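As a minimal sketch, the rows above can be ranked programmatically; the scores below are copied verbatim from the table (no external data is fetched):

```python
# Scores copied from the table above: (model, matched accuracy, mismatched accuracy)
# on the MultiNLI dev sets.
rows = [
    ("DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8)", 78.8, 80.4),
    ("BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8)", 81.4, 82.51),
    ("BERT-Base-uncased-PruneOFA (85% unstruct sparse)", 82.71, 83.67),
    ("DistilBERT-uncased-PruneOFA (90% unstruct sparse)", 80.68, 81.47),
    ("DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8)", 80.66, 81.14),
    ("BERT-Large-uncased-PruneOFA (90% unstruct sparse)", 83.74, 84.2),
    ("BERT-Base-uncased-PruneOFA (90% unstruct sparse)", 81.45, 82.43),
    ("TinyBERT-6 67M", 84.5, 84.5),
    ("BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8)", 83.47, 84.08),
    ("DistilBERT-uncased-PruneOFA (85% unstruct sparse)", 81.35, 82.03),
]

# Rank by matched accuracy, best first; mismatched is used as a tie-breaker.
ranked = sorted(rows, key=lambda r: (r[1], r[2]), reverse=True)
best = ranked[0]
print(f"Best matched accuracy: {best[0]} ({best[1]})")
```

Sorting on a `(matched, mismatched)` tuple keeps the ordering deterministic even when two models share the same matched score.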