HyperAI超神经
Question Answering on SQuAD 2.0 (dev)
Evaluation metrics: EM, F1

Results

Performance of each model on this benchmark:
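For reference, EM (exact match) and F1 on SQuAD-style benchmarks are computed per question after normalizing answers (lowercasing, stripping punctuation and articles), with F1 measured as token overlap. A minimal sketch, following the conventions of the official SQuAD evaluation script (function names here are illustrative, not the script's own):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred: str, gold: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    """Token-level F1 between a predicted and a gold answer."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    if not pred_toks or not gold_toks:
        # SQuAD 2.0 unanswerable questions have empty gold answers:
        # F1 is 1 only when both prediction and gold are empty.
        return float(pred_toks == gold_toks)
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)
```

On the full dev set, each prediction is scored against all gold answers for its question (taking the maximum), and the table below reports the averages as percentages.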
| Model | EM | F1 | Paper Title |
| --- | --- | --- | --- |
| ALBERT base | 76.1 | 79.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| RoBERTa (no data aug) | 86.5 | 89.4 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| ALBERT large | 79.0 | 82.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| XLNet (single model) | 87.9 | 90.6 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| RMR + ELMo (Model-III) | 72.3 | 74.8 | Read + Verify: Machine Reading Comprehension with Unanswerable Questions |
| SemBERT large | 80.9 | 83.6 | Semantics-aware BERT for Language Understanding |
| SpanBERT | - | 86.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| SG-Net | 85.1 | 87.9 | SG-Net: Syntax-Guided Machine Reading Comprehension |
| TinyBERT-6 67M | 69.9 | 73.4 | TinyBERT: Distilling BERT for Natural Language Understanding |
| XLNet+DSC | 87.65 | 89.51 | Dice Loss for Data-imbalanced NLP Tasks |
| ALBERT xlarge | 83.1 | 85.9 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| U-Net | 70.3 | 74.0 | U-Net: Machine Reading Comprehension with Unanswerable Questions |
| ALBERT xxlarge | 85.1 | 88.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |