HyperAI超神经
Natural Language Inference on SNLI
Evaluation Metrics
- % Test Accuracy
- % Train Accuracy
- Parameters

Evaluation Results
Performance of each model on this benchmark ("-" marks values not reported; parameter counts are in millions as listed by the source):

| Model | % Test Accuracy | % Train Accuracy | Parameters | Paper |
| --- | --- | --- | --- | --- |
| UnitedSynT5 (3B) | 94.7 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| UnitedSynT5 (335M) | 93.5 | - | - | First Train to Generate, then Generate to Train: UnitedSynT5 for Few-Shot NLI |
| Neural Tree Indexers for Text Understanding | 93.1 | - | 355M | Entailment as Few-Shot Learner |
| EFL (Entailment as Few-shot Learner) + RoBERTa-large | 93.1 | - | 355M | Entailment as Few-Shot Learner |
| RoBERTa-large + self-explaining layer | 92.3 | - | 355M+ | Self-Explaining Structures Improve NLP Models |
| RoBERTa-large + Self-Explaining | 92.3 | - | 340M | Self-Explaining Structures Improve NLP Models |
| CA-MTL | 92.1 | 92.6 | 340M | Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data |
| SemBERT | 91.9 | 94.4 | 339M | Semantics-aware BERT for Language Understanding |
| MT-DNN-SMARTLARGEv0 | 91.7 | - | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| MT-DNN | 91.6 | 97.2 | 330M | Multi-Task Deep Neural Networks for Natural Language Understanding |
| SJRC (BERT-Large + SRL) | 91.3 | 95.7 | 308M | Explicit Contextual Semantics for Text Comprehension |
| Ntumpha | 90.5 | 99.1 | 220M | Multi-Task Deep Neural Networks for Natural Language Understanding |
| Densely-Connected Recurrent and Co-Attentive Network Ensemble | 90.1 | 95.0 | 53.3M | Semantic Sentence Matching with Densely-connected Recurrent and Co-attentive Information |
| MFAE | 90.07 | 93.18 | - | What Do Questions Exactly Ask? MFAE: Duplicate Question Identification with Multi-Fusion Asking Emphasis |
| Fine-Tuned LM-Pretrained Transformer | 89.9 | 96.6 | 85M | Improving Language Understanding by Generative Pre-Training |
| 300D DMAN Ensemble | 89.6 | 96.1 | 79M | Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference |
| 150D Multiway Attention Network Ensemble | 89.4 | 95.5 | 58M | Multiway Attention Networks for Modeling Sentence Pairs |
| ESIM + ELMo Ensemble | 89.3 | 92.1 | 40M | Deep contextualized word representations |
| 450D DR-BiLSTM Ensemble | 89.3 | 94.8 | 45M | DR-BiLSTM: Dependent Reading Bidirectional LSTM for Natural Language Inference |
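The headline metric above, % Test Accuracy, is plain classification accuracy over SNLI's three gold labels (entailment, neutral, contradiction), expressed as a percentage. A minimal sketch of how it is computed; the `accuracy` helper and the example labels are illustrative, not taken from the leaderboard:

```python
# % Test Accuracy for 3-way NLI: fraction of examples whose predicted
# label matches the gold label, times 100. Labels follow SNLI's scheme.

def accuracy(predictions, gold):
    """Percentage of positions where predictions and gold agree."""
    if len(predictions) != len(gold):
        raise ValueError("predictions and gold must be the same length")
    correct = sum(p == g for p, g in zip(predictions, gold))
    return 100.0 * correct / len(gold)

# Illustrative (made-up) predictions over four SNLI-style examples.
gold = ["entailment", "neutral", "contradiction", "entailment"]
pred = ["entailment", "neutral", "neutral", "entailment"]
print(f"{accuracy(pred, gold):.1f}")  # 3 of 4 correct -> 75.0
```

The same formula applies to the % Train Accuracy column, just evaluated on the training split; the large train/test gaps for some entries (e.g. 99.1 vs 90.5) indicate overfitting to the training set.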