Semantic Textual Similarity on MRPC
Evaluation metric: F1

Results: performance of each model on this benchmark (MRPC, the Microsoft Research Paraphrase Corpus). A dash ("-") means no F1 score is listed for that entry.
| Model | F1 | Paper |
|---|---|---|
| BigBird | 91.5 | Big Bird: Transformers for Longer Sequences |
| T5-3B | 92.5 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| MobileBERT | - | MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices |
| BERT-Base | - | Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning |
| Charformer-Tall | 91.4 | Charformer: Fast Character Transformers via Gradient-based Subword Tokenization |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 91.0 | Entailment as Few-Shot Learner |
| Nyströmformer | 88.1 | Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention |
| SMART-BERT | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| RoBERTa-large 355M (MLP quantized vector-wise, fine-tuned) | - | LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale |
| FNet-Large | - | FNet: Mixing Tokens with Fourier Transforms |
| SqueezeBERT | - | SqueezeBERT: What can computer vision teach NLP about efficient neural networks? |
| XLNet (single model) | - | XLNet: Generalized Autoregressive Pretraining for Language Understanding |
| SMART | - | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization |
| T5-Large | 92.4 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| TinyBERT-6 67M | - | TinyBERT: Distilling BERT for Natural Language Understanding |
| TinyBERT-4 14.5M | - | TinyBERT: Distilling BERT for Natural Language Understanding |
| DistilBERT 66M | - | DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter |
| ERNIE 2.0 Base | - | ERNIE 2.0: A Continual Pre-training Framework for Language Understanding |
| T5-Small | 89.7 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| Q8BERT (Zafrir et al., 2019) | - | Q8BERT: Quantized 8Bit BERT |
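For reference, the F1 metric reported above is the standard positive-class F1: MRPC is a binary task where each sentence pair is labeled as a paraphrase (1) or not (0), and F1 is the harmonic mean of precision and recall on the paraphrase class. A minimal sketch (toy labels and predictions, not from any submission in the table):

```python
def f1_score(labels, preds):
    """F1 on the positive class: harmonic mean of precision and recall."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)  # true positives
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)  # false positives
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)  # false negatives
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: gold paraphrase labels vs. hypothetical model predictions.
labels = [1, 1, 0, 1, 0, 1]
preds  = [1, 1, 1, 1, 0, 0]
print(f1_score(labels, preds))  # → 0.75
```

Leaderboard scores are this quantity (times 100) computed over the MRPC evaluation set.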