HyperAI

Natural Language Inference on MultiNLI (Dev)

Metrics: Matched and Mismatched accuracy (%)
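Both metrics are plain classification accuracy over the three NLI labels, computed separately on MultiNLI's matched (same genres as training) and mismatched (held-out genres) dev splits. A minimal sketch of the computation; the helper name and the toy predictions below are illustrative, not taken from the leaderboard:

```python
# Sketch of the two MultiNLI dev metrics: accuracy on the "matched"
# (in-genre) and "mismatched" (out-of-genre) validation splits.
# Label set and example data are illustrative.

LABELS = ("entailment", "neutral", "contradiction")

def accuracy(predictions, gold):
    """Percentage of examples where the predicted NLI label equals the gold label."""
    assert len(predictions) == len(gold) and gold, "splits must be non-empty and aligned"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Hypothetical model outputs for each dev split.
matched_gold = ["entailment", "neutral", "contradiction", "entailment"]
matched_pred = ["entailment", "neutral", "neutral", "entailment"]
mismatched_gold = ["contradiction", "entailment"]
mismatched_pred = ["contradiction", "neutral"]

print(f"Matched: {accuracy(matched_pred, matched_gold):.2f}")        # 75.00
print(f"Mismatched: {accuracy(mismatched_pred, mismatched_gold):.2f}")  # 50.00
```

The leaderboard reports both numbers because mismatched accuracy probes generalization to genres the model never saw during fine-tuning.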

Results

Performance of various models on this benchmark:

| Model Name | Matched | Mismatched | Paper Title |
|---|---|---|---|
| DistilBERT-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 78.8 | 80.4 | Prune Once for All: Sparse Pre-Trained Language Models |
| BERT-Base-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 81.4 | 82.51 | Prune Once for All: Sparse Pre-Trained Language Models |
| BERT-Base-uncased-PruneOFA (85% unstruct sparse) | 82.71 | 83.67 | Prune Once for All: Sparse Pre-Trained Language Models |
| DistilBERT-uncased-PruneOFA (90% unstruct sparse) | 80.68 | 81.47 | Prune Once for All: Sparse Pre-Trained Language Models |
| DistilBERT-uncased-PruneOFA (85% unstruct sparse, QAT Int8) | 80.66 | 81.14 | Prune Once for All: Sparse Pre-Trained Language Models |
| BERT-Large-uncased-PruneOFA (90% unstruct sparse) | 83.74 | 84.2 | Prune Once for All: Sparse Pre-Trained Language Models |
| BERT-Base-uncased-PruneOFA (90% unstruct sparse) | 81.45 | 82.43 | Prune Once for All: Sparse Pre-Trained Language Models |
| TinyBERT-6 67M | 84.5 | 84.5 | TinyBERT: Distilling BERT for Natural Language Understanding |
| BERT-Large-uncased-PruneOFA (90% unstruct sparse, QAT Int8) | 83.47 | 84.08 | Prune Once for All: Sparse Pre-Trained Language Models |
| DistilBERT-uncased-PruneOFA (85% unstruct sparse) | 81.35 | 82.03 | Prune Once for All: Sparse Pre-Trained Language Models |