Natural Language Inference on WNLI

Metrics

Accuracy
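
Scores on this leaderboard are plain classification accuracy, reported as a percentage. A minimal sketch of the metric (the helper function below is illustrative, not part of any benchmark API):

```python
from typing import Sequence

def accuracy(predictions: Sequence[int], labels: Sequence[int]) -> float:
    """Fraction of predictions that match the gold labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    return sum(p == g for p, g in zip(predictions, labels)) / len(labels)

# WNLI is binary (1 = entailment, 0 = not entailment).
# Here 3 of 4 predictions are correct -> 75.0 on the leaderboard scale.
print(100 * accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 75.0
```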

Results

Performance of various models on this benchmark.

| Model Name | Accuracy (%) | Paper Title |
| --- | --- | --- |
| ALBERT | 91.8 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| HNN (ensemble) | 89.0 | A Hybrid Neural Network Model for Commonsense Reasoning |
| StructBERT (RoBERTa ensemble) | 89.7 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding |
| SqueezeBERT | 65.1 | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| XLNet | 92.5 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| T5-Base 220M | 78.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| BERT-large 340M (fine-tuned on WSCR) | 71.9 | A Surprisingly Robust Trick for Winograd Schema Challenge |
| RoBERTa (ensemble) | 89.0 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| HNN | 83.6 | A Hybrid Neural Network Model for Commonsense Reasoning |
| DistilBERT 66M | 44.4 | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter |
| FLAN 137B (few-shot, k=4) | 70.4 | Finetuned Language Models Are Zero-Shot Learners |
| ERNIE 2.0 Large | 67.8 | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| T5-Large 770M | 85.6 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| BERT-wiki 340M (fine-tuned on WSCR) | 74.7 | A Surprisingly Robust Trick for Winograd Schema Challenge |
| FLAN 137B (zero-shot) | 74.6 | Finetuned Language Models Are Zero-Shot Learners |
| T5-XL 3B | 89.7 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DeBERTa | 94.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| RWKV-4-Raven-14B | 49.3 | RWKV: Reinventing RNNs for the Transformer Era |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 95.9 | - |
| T5-Small 60M | 69.2 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
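
The numbers above come from each paper's GLUE test-server submission. As a hedged sketch of how one might evaluate a fine-tuned classifier on WNLI locally, the snippet below scores the public validation split with the Hugging Face `datasets` and `transformers` libraries; the checkpoint name is a placeholder assumption, and validation accuracy will not match the leaderboard's hidden-test figures.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder checkpoint: substitute any WNLI-fine-tuned sequence classifier.
checkpoint = "textattack/bert-base-uncased-WNLI"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

# GLUE's WNLI validation split (the test split's labels are hidden).
wnli = load_dataset("glue", "wnli", split="validation")

correct = 0
for example in wnli:
    inputs = tokenizer(example["sentence1"], example["sentence2"],
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"WNLI validation accuracy: {100 * correct / len(wnli):.1f}")
```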