HyperAI
Natural Language Inference On Scitail
Metric: Dev Accuracy

Results: performance of various models on this benchmark.
| Model Name | Dev Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| MT-DNN-SMART_1%ofTrainingData | 88.6 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | Improving Language Understanding by Generative Pre-Training | - |
| RE2 | - | Simple and Effective Text Matching with Richer Alignment Features | - |
| MT-DNN-SMARTLARGEv0 | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| SplitEE-S | - | SplitEE: Early Exit in Deep Neural Networks with Split Computing | - |
| CA-MTL | - | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data | - |
| Hierarchical BiLSTM Max Pooling | - | Sentence Embeddings in NLI with Iterative Refinement Encoders | - |
| MT-DNN | - | Multi-Task Deep Neural Networks for Natural Language Understanding | - |
| MT-DNN-SMART_0.1%ofTrainingData | 82.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| MT-DNN-SMART_100%ofTrainingData | 96.1 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| CAFE | - | Compare, Compress and Propagate: Enhancing Neural Architectures with Alignment Factorization for Natural Language Inference | - |
| MT-DNN-SMART_10%ofTrainingData | 91.3 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | - |
| Finetuned Transformer LM | - | - | - |