HyperAI

Natural Language Inference on QNLI

Metrics

Accuracy

Results

Performance results of various models on this benchmark.

| Model Name | Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| FNet-Large | 85% | FNet: Mixing Tokens with Fourier Transforms | - |
| RealFormer | 91.89% | RealFormer: Transformer Likes Residual Attention | - |
| Nyströmformer | 88.7% | Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention | - |
| Q-BERT (Shen et al., 2020) | 93.0% | Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT | - |
| DeBERTaV3-large | 96% | DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing | - |
| Charformer-Tall | 91.0% | Charformer: Fast Character Transformers via Gradient-based Subword Tokenization | - |
| ELECTRA | 95.4% | - | - |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| data2vec | 91.1% | data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language | - |
| SpanBERT | 94.3% | SpanBERT: Improving Pre-training by Representing and Predicting Spans | - |
| 24hBERT | 90.6% | How to Train BERT with an Academic Budget | - |
| ASA + RoBERTa | 93.6% | Adversarial Self-Attention for Language Understanding | - |
| PSQ (Chen et al., 2020) | 94.5% | A Statistical Framework for Low-bitwidth Training of Deep Neural Networks | - |
| TRANS-BLSTM | 94.08% | TRANS-BLSTM: Transformer with Bidirectional LSTM for Language Understanding | - |
| T5-Small | 90.3% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| ASA + BERT-base | 91.4% | Adversarial Self-Attention for Language Understanding | - |
| ALICE | 99.2% | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| T5-Base | 93.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| T5-11B | 96.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | - |
| ERNIE | 91.3% | ERNIE: Enhanced Language Representation with Informative Entities | - |
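The single metric reported above is classification accuracy: the fraction of QNLI examples (a binary entailment task) for which a model's predicted label matches the gold label. A minimal sketch of that computation; the label strings and the `accuracy` helper below are illustrative, not HyperAI's or GLUE's implementation:

```python
def accuracy(predictions, references):
    """Fraction of predictions that exactly match the reference labels."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical QNLI-style predictions: 3 of 4 match the gold labels.
preds = ["entailment", "not_entailment", "entailment", "entailment"]
golds = ["entailment", "not_entailment", "not_entailment", "entailment"]
print(f"{accuracy(preds, golds):.1%}")  # prints "75.0%"
```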