
Question Answering on Quora Question Pairs

Evaluation Metric

Accuracy
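
QQP is a binary classification task: each example is a pair of Quora questions, and the model must predict whether they are duplicates (paraphrases). Accuracy is simply the fraction of pairs classified correctly. A minimal sketch of the metric (the toy predictions below are illustrative, not from any listed model):

```python
def accuracy(preds, labels):
    """Fraction of question pairs classified correctly."""
    assert len(preds) == len(labels), "prediction/label count mismatch"
    correct = sum(int(p == y) for p, y in zip(preds, labels))
    return correct / len(labels)

# Toy example: 1 = duplicate, 0 = not duplicate (values are illustrative).
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
print(f"Accuracy: {accuracy(pred, gold):.1%}")  # -> Accuracy: 80.0%
```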

Evaluation Results

Performance of each model on this benchmark.

| Model | Accuracy | Paper |
|---|---|---|
| T5-11B | 90.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| 24hBERT | 70.7% | How to Train BERT with an Academic Budget |
| MLM + subs + del-span | 90.3% | CLEAR: Contrastive Learning for Sentence Representation |
| ELECTRA | 90.1% | ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators |
| RoBERTa (ensemble) | 90.2% | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| T5-Small | 88.0% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ERNIE 2.0 Large | 90.1% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| BigBird | 88.6% | Big Bird: Transformers for Longer Sequences |
| T5-Base | 89.4% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| RE2 | 89.2% | Simple and Effective Text Matching with Richer Alignment Features |
| SqueezeBERT | 80.3% | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| DeBERTa (large) | 92.3% | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| ALBERT | 90.5% | ALBERT: A Lite BERT for Self-supervised Learning of Language Representations |
| XLNet (single model) | 92.3% | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| SWEM-concat | 83.03% | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms |
| T5-3B | 89.7% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| T5-Large 770M | 89.9% | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| DistilBERT 66M | 89.2% | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter |
| ERNIE 2.0 Base | 89.8% | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
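
The results above come from models fine-tuned or evaluated on QQP. As a point of reference for the table, here is a minimal sketch of loading the dataset and scoring a majority-class baseline, assuming the Hugging Face `datasets` library and its GLUE copy of QQP (fields `question1`, `question2`, `label`); the official test split is unlabeled, so the sketch scores on the validation split:

```python
from datasets import load_dataset  # pip install datasets

# QQP as packaged with the GLUE benchmark. The test split's labels are
# withheld (leaderboard numbers require a GLUE server submission), so we
# score on the labeled validation split instead.
qqp = load_dataset("glue", "qqp", split="validation")

# Majority-class baseline: always predict "not duplicate" (label 0).
preds = [0] * len(qqp)
labels = qqp["label"]
acc = sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)
print(f"Majority-class accuracy: {acc:.1%}")  # roughly 63% on the dev split
```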