Question Answering On Quora Question Pairs
Evaluation metric
Accuracy
Evaluation results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| T5-11B | 90.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| 24hBERT | 70.7% | How to Train BERT with an Academic Budget | |
| MLM+ subs+ del-span | 90.3% | CLEAR: Contrastive Learning for Sentence Representation | - |
| ELECTRA | 90.1% | ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators | |
| RoBERTa (ensemble) | 90.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach | |
| T5-Small | 88.0% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| ERNIE 2.0 Large | 90.1% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | |
| BigBird | 88.6% | Big Bird: Transformers for Longer Sequences | |
| T5-Base | 89.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| RE2 | 89.2% | Simple and Effective Text Matching with Richer Alignment Features | |
| SqueezeBERT | 80.3% | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? | |
| DeBERTa (large) | 92.3% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| ALBERT | 90.5% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations | |
| XLNet (single model) | 92.3% | XLNet: Generalized Autoregressive Pretraining for Language Understanding | |
| SWEM-concat | 83.03% | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | |
| T5-3B | 89.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| T5-Large 770M | 89.9% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| DistilBERT 66M | 89.2% | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter | |
| ERNIE 2.0 Base | 89.8% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | |
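Accuracy on this benchmark is the fraction of question pairs classified correctly as duplicate or not duplicate. The sketch below shows one way such a score can be computed on the QQP validation split; it assumes the Hugging Face `datasets` library and GLUE's `qqp` configuration (neither is referenced on this page), and `predict_duplicate` is a hypothetical stand-in for whichever model is being evaluated, shown here as a trivial majority-class baseline.

```python
# Minimal sketch of computing QQP accuracy, assuming the Hugging Face
# `datasets` library and GLUE's "qqp" configuration (an assumption; the
# leaderboard itself does not specify any tooling).
from datasets import load_dataset


def predict_duplicate(question1: str, question2: str) -> int:
    """Hypothetical stand-in for the model under evaluation.

    Returns 1 if the pair is predicted to be duplicates, else 0.
    Here it is a trivial majority-class baseline (always "not duplicate").
    """
    return 0


def qqp_accuracy() -> float:
    # Each example has fields: question1, question2, label (1 = duplicate).
    data = load_dataset("glue", "qqp", split="validation")
    correct = sum(
        int(predict_duplicate(ex["question1"], ex["question2"]) == ex["label"])
        for ex in data
    )
    # Accuracy = fraction of pairs classified correctly.
    return correct / len(data)


if __name__ == "__main__":
    print(f"Accuracy: {qqp_accuracy():.4f}")
```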