Semantic Textual Similarity on STS14
Evaluation metric: Spearman correlation
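Spearman correlation scores a model by comparing the ranking of its predicted similarities with the ranking of the human gold ratings, so the absolute scale of the model's scores does not matter. A minimal sketch with SciPy; the gold ratings and model scores below are made-up illustrations:

```python
# Minimal Spearman-correlation sketch; the gold ratings and the model
# scores below are made-up illustrations, not STS14 data.
from scipy.stats import spearmanr

gold = [4.8, 1.2, 3.5, 0.4, 2.9]       # human similarity ratings (0-5 scale)
pred = [0.91, 0.20, 0.68, 0.05, 0.55]  # model cosine similarities

# Spearman's rho compares the two rankings rather than the raw values,
# so any monotone rescaling of the model scores leaves it unchanged.
rho, _ = spearmanr(gold, pred)
print(f"Spearman correlation: {rho:.4f}")  # 1.0000 here: rankings agree exactly
```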
Results
Performance of each model on this benchmark. (A sketch of how such scores are typically produced follows the table.)
| Model | Spearman Correlation | Paper | Repository |
|---|---|---|---|
| AnglE-LLaMA-13B | 0.8689 | AnglE-optimized Text Embeddings | |
| PromptEOL+CSE+LLaMA-30B | 0.8585 | Scaling Sentence Embeddings with Large Language Models | |
| AnglE-LLaMA-7B-v2 | 0.8579 | AnglE-optimized Text Embeddings | |
| AnglE-LLaMA-7B | 0.8549 | AnglE-optimized Text Embeddings | |
| PromptEOL+CSE+OPT-13B | 0.8534 | Scaling Sentence Embeddings with Large Language Models | |
| PromptEOL+CSE+OPT-2.7B | 0.8480 | Scaling Sentence Embeddings with Large Language Models | |
| PromCSE-RoBERTa-large (0.355B) | 0.8381 | Improved Universal Sentence Embeddings with Prompt-based Contrastive Learning and Energy-based Learning | |
| SimCSE-RoBERTa-large | 0.8236 | SimCSE: Simple Contrastive Learning of Sentence Embeddings | |
| Trans-Encoder-RoBERTa-large-cross (unsup.) | 0.8194 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-large-bi (unsup.) | 0.8176 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-large-bi (unsup.) | 0.8137 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-RoBERTa-base-cross (unsup.) | 0.7903 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| Trans-Encoder-BERT-base-bi (unsup.) | 0.7790 | Trans-Encoder: Unsupervised sentence-pair modelling through self- and mutual-distillations | |
| DiffCSE-BERT-base | 0.7647 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| DiffCSE-RoBERTa-base | 0.7549 | DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings | |
| SBERT-NLI-large | 0.7490 | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks | |
| Mirror-RoBERTa-base (unsup.) | 0.7320 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Mirror-BERT-base (unsup.) | 0.7130 | Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders | |
| Dino (STSb/x̄) | 0.7125 | Generating Datasets with Pretrained Language Models | |
| BERTlarge-flow (target) | 0.6942 | On the Sentence Embeddings from Pre-trained Language Models | |
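Bi-encoder entries such as SimCSE or SBERT are typically scored by embedding the two sentences of each test pair independently, taking their cosine similarity, and computing the Spearman correlation of those similarities against the gold ratings. A rough sketch of that pipeline, assuming the sentence-transformers package is installed; the model name, sentence pairs, and ratings are placeholders, not the official STS14 protocol:

```python
# Rough STS evaluation pipeline for a bi-encoder; the model name and
# the data below are placeholders, not the official STS14 setup.
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works here

pairs = [
    ("A man is playing a guitar.", "A person plays an instrument."),
    ("A man is playing a guitar.", "A woman is slicing onions."),
    ("Two dogs run on the beach.", "Dogs are running outside."),
]
gold = [4.0, 0.2, 3.6]  # made-up human ratings on the usual 0-5 scale

# Embed each side of every pair, then score pairs by cosine similarity.
a = model.encode([s for s, _ in pairs])
b = model.encode([t for _, t in pairs])
cos = np.sum(a * b, axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))

rho, _ = spearmanr(gold, cos)
print(f"Spearman correlation: {rho:.4f}")
```

Cross-encoder entries (the Trans-Encoder "cross" variants above) instead feed each sentence pair jointly through the model and score it in a single forward pass.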