Semantic Textual Similarity on STS Benchmark
Evaluation metric: Spearman Correlation

Evaluation results: the performance of each model on this benchmark.
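Ranking is by the Spearman rank correlation between a model's predicted similarity scores and the human-annotated gold scores (0 to 5) on the STS Benchmark. Below is a minimal sketch of the metric computation, assuming SciPy is available; the function and variable names are illustrative, not part of any leaderboard model's API.

```python
# Minimal sketch of the ranking metric, assuming SciPy is installed.
# `pred_scores` and `gold_scores` are illustrative names.
from scipy.stats import spearmanr

def sts_spearman(pred_scores, gold_scores):
    """Spearman rank correlation between model similarity scores and
    human-annotated gold scores (0-5 on the STS Benchmark)."""
    rho, _p_value = spearmanr(pred_scores, gold_scores)
    return rho

# Only the ranking matters: a monotone rescaling of the predictions
# still yields a perfect score of 1.0.
print(sts_spearman([0.12, 0.45, 0.93], [1.2, 3.4, 4.9]))  # -> 1.0
```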
| Model Name | Spearman Correlation | Paper Title | Repository |
| --- | --- | --- | --- |
| MNet-Sim | 0.931 | MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity | - |
| MT-DNN-SMART | 0.925 | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | |
| StructBERT + RoBERTa ensemble | 0.924 | StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding | - |
| T5-11B | 0.921 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| RealFormer | 0.8988 | RealFormer: Transformer Likes Residual Attention | |
| T5-3B | 0.898 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| AnglE-LLaMA-13B | 0.8969 | AnglE-optimized Text Embeddings | |
| ASA + RoBERTa | 0.892 | Adversarial Self-Attention for Language Understanding | |
| PromptEOL+CSE+LLaMA-30B | 0.8914 | Scaling Sentence Embeddings with Large Language Models | |
| AnglE-LLaMA-7B | 0.8897 | AnglE-optimized Text Embeddings | |
| AnglE-LLaMA-7B-v2 | 0.8897 | AnglE-optimized Text Embeddings | |
| T5-Large 770M | 0.886 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| PromptEOL+CSE+OPT-13B | 0.8856 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-2.7B | 0.8833 | Scaling Sentence Embeddings with Large Language Models | |
| PromCSE-RoBERTa-large (0.355B) | 0.8787 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| BigBird | 0.878 | Big Bird: Transformers for Longer Sequences | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.867 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| SimCSE-RoBERTa-large | 0.867 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Trans-Encoder-RoBERTa-large-bi (unsup.) | 0.8655 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| ASA + BERT-base | 0.865 | Adversarial Self-Attention for Language Understanding | |
(Top 20 of 66 leaderboard entries shown.)
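For reference, the embedding-based entries above (e.g., SimCSE, AnglE, PromptEOL) are typically scored by encoding each sentence pair, taking cosine similarity, and correlating the result with the gold labels. Below is a hedged end-to-end sketch under that assumption, using the Hugging Face datasets and sentence-transformers libraries; the checkpoint name is illustrative and is not one of the leaderboard models, and since GLUE hides the STS-B test labels the sketch scores the validation split.

```python
# End-to-end sketch of STS-B evaluation for a bi-encoder, assuming the
# Hugging Face `datasets` and `sentence-transformers` libraries.
# The checkpoint below is illustrative, not a leaderboard entry.
from datasets import load_dataset
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
data = load_dataset("glue", "stsb", split="validation")

# Encode both sides of every pair, then take pairwise cosine similarity.
emb1 = model.encode(data["sentence1"], convert_to_tensor=True)
emb2 = model.encode(data["sentence2"], convert_to_tensor=True)
pred = util.cos_sim(emb1, emb2).diagonal().cpu().numpy()

rho, _ = spearmanr(pred, data["label"])
print(f"Spearman correlation: {rho:.4f}")
```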