Word Sense Disambiguation On Words In Context
Evaluation metric
Accuracy
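WiC (Words in Context) frames word sense disambiguation as a binary classification task: given a target word that appears in two sentences, a system must decide whether the word carries the same sense in both. Accuracy is simply the fraction of sentence pairs labelled correctly. A minimal sketch of the metric in Python (the function name is illustrative, not taken from any benchmark toolkit):

```python
def wic_accuracy(predictions, gold_labels):
    """Fraction of WiC sentence pairs whose predicted same-sense/different-sense
    label matches the gold label."""
    if len(predictions) != len(gold_labels):
        raise ValueError("predictions and gold_labels must be the same length")
    correct = sum(int(p == g) for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Example: 2 of 3 pairs classified correctly -> accuracy of about 0.667
print(wic_accuracy([1, 0, 1], [1, 1, 1]))
```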
Evaluation results
Performance of each model on this benchmark
| Model Name | Accuracy | Paper Title | Repository |
|---|---|---|---|
| COSINE + Transductive Learning | 85.3 | Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach | |
| PaLM 540B (fine-tuned) | 78.8 | PaLM: Scaling Language Modeling with Pathways | |
| ST-MoE-32B 269B (fine-tuned) | 77.7 | ST-MoE: Designing Stable and Transferable Sparse Expert Models | |
| DeBERTa-Ensemble | 77.5 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| Vega v2 6B (fine-tuned) | 77.4 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE | - |
| UL2 20B (fine-tuned) | 77.3 | UL2: Unifying Language Learning Paradigms | |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 77.1 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE | - |
| T5-XXL 11B | 76.9 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| DeBERTa-1.5B | 76.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| ST-MoE-L 4.1B (fine-tuned) | 74 | ST-MoE: Designing Stable and Transferable Sparse Expert Models | |
| SenseBERT-large 340M | 72.1 | SenseBERT: Driving Some Sense into BERT | - |
| SenseBERT-base 110M | 70.3 | SenseBERT: Driving Some Sense into BERT | - |
| PaLM 2-L (one-shot) | 66.8 | PaLM 2 Technical Report | |
| BERT-large 340M | 65.5 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | - |
| FLAN-T5-Large 783M | 64.7 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions | |
| LaMini-F-T5 783M | 63.8 | LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions | |
| Context2vec | 59.3 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | - |
| DeConf | 58.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | - |
| SW2V | 58.1 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | - |
| ELMo | 57.7 | WiC: the Word-in-Context Dataset for Evaluating Context-Sensitive Meaning Representations | - |
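To reproduce an accuracy figure of this kind, a model's same-sense predictions are scored against the WiC labels. The sketch below is a hedged example using the Hugging Face `datasets` distribution of SuperGLUE (subset "wic"); `my_same_sense_classifier` is a hypothetical placeholder for whatever model is being evaluated, and the field names (`word`, `sentence1`, `sentence2`, `label`) are assumed from that dataset's schema.

```python
# Hedged sketch: score a classifier on the WiC validation split via the
# Hugging Face `datasets` distribution of SuperGLUE ("wic" subset).
from datasets import load_dataset

def my_same_sense_classifier(word, sentence1, sentence2):
    # Hypothetical placeholder model: always predict "same sense" (label 1).
    return 1

wic_val = load_dataset("super_glue", "wic", split="validation")

correct = 0
for example in wic_val:
    prediction = my_same_sense_classifier(
        example["word"], example["sentence1"], example["sentence2"]
    )
    correct += int(prediction == example["label"])

print(f"WiC validation accuracy: {correct / len(wic_val):.3f}")
```

Note that published leaderboard numbers are generally obtained on the hidden SuperGLUE test set via the official submission server, so local scoring like the above is typically done on the validation split.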