Question Answering on SQuAD 2.0 (Dev)
Evaluation Metrics: EM (exact match), F1
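For reference, the sketch below shows how SQuAD 2.0-style EM and F1 are typically computed per question: the predicted and gold answers are normalized (lowercased, with punctuation, articles, and extra whitespace stripped), EM checks for an exact string match after normalization, and F1 measures token-level overlap. This is a minimal illustration, not the official SQuAD evaluation script; the function names and example strings are assumptions for demonstration only.

```python
# Minimal sketch of SQuAD-style EM and F1 scoring (not the official script).
import re
import string
from collections import Counter


def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold: str) -> int:
    """EM: 1 if the normalized strings are identical, else 0."""
    return int(normalize_answer(prediction) == normalize_answer(gold))


def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    # For unanswerable questions (empty gold) both must be empty to score 1.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    print(exact_match("the Eiffel Tower", "Eiffel Tower"))   # 1
    print(round(f1_score("in Paris, France", "Paris"), 3))   # 0.5
```

Dataset-level EM and F1 are then the averages of these per-question scores (taking the maximum over the available gold answers for each question), which is how the percentages in the table below are reported.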
Evaluation Results: performance of each model on this benchmark.
| Model Name | EM | F1 | Paper Title |
| --- | --- | --- | --- |
| XLNet (single model) | 87.9 | 90.6 | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| XLNet+DSC | 87.65 | 89.51 | Dice Loss for Data-imbalanced NLP Tasks |
| RoBERTa (no data aug) | 86.5 | 89.4 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| ALBERT xxlarge | 85.1 | 88.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| SG-Net | 85.1 | 87.9 | SG-Net: Syntax-Guided Machine Reading Comprehension |
| SpanBERT | - | 86.8 | SpanBERT: Improving Pre-training by Representing and Predicting Spans |
| ALBERT xlarge | 83.1 | 85.9 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| SemBERT large | 80.9 | 83.6 | Semantics-aware BERT for Language Understanding |
| ALBERT large | 79.0 | 82.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| ALBERT base | 76.1 | 79.1 | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| RMR + ELMo (Model-III) | 72.3 | 74.8 | Read + Verify: Machine Reading Comprehension with Unanswerable Questions |
| U-Net | 70.3 | 74.0 | U-Net: Machine Reading Comprehension with Unanswerable Questions |
| TinyBERT-6 67M | 69.9 | 73.4 | TinyBERT: Distilling BERT for Natural Language Understanding |