Language Modelling On Lambada
Evaluation Metric
Accuracy
Evaluation Results
Performance of each model on this benchmark is listed below; a minimal sketch of how LAMBADA accuracy is typically computed follows the table.
| Model Name | Accuracy (%) | Paper Title | Repository |
| --- | --- | --- | --- |
| PaLM-540B (Few-Shot) | 89.7 | PaLM: Scaling Language Modeling with Pathways | - |
| PaLM 2-L (One-Shot) | 86.9 | PaLM 2 Technical Report | - |
| GPT-3 175B (Few-Shot) | 86.4 | Language Models are Few-Shot Learners | - |
| LLaMA-65B+CFG (Zero-Shot) | 84.0 | Stay on topic with Classifier-Free Guidance | - |
| LLaMA-30B+CFG (Zero-Shot) | 83.9 | Stay on topic with Classifier-Free Guidance | - |
| PaLM 2-M (One-Shot) | 83.7 | PaLM 2 Technical Report | - |
| Cohere Large | 82.33 | - | - |
| LLaMA-13B+CFG (Zero-Shot) | 82.2 | Stay on topic with Classifier-Free Guidance | - |
| PaLM-540B (One-Shot) | 81.8 | PaLM: Scaling Language Modeling with Pathways | - |
| GLaM 62B/64E (One-Shot) | 80.9 | GLaM: Efficient Scaling of Language Models with Mixture-of-Experts | - |
| PaLM 2-S (One-Shot) | 80.7 | PaLM 2 Technical Report | - |
| GLM-130B (bidirectional attention) | 80.2 | GLM-130B: An Open Bilingual Pre-trained Model | - |
| SparseGPT (175B, 2:4 Sparsity) | 79.47 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | - |
| SparseGPT (175B, 4:8 Sparsity) | 78.77 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | - |
| PaLM-540B (Zero-Shot) | 77.9 | PaLM: Scaling Language Modeling with Pathways | - |
| Chinchilla (Zero-Shot) | 77.7 | Training Compute-Optimal Large Language Models | - |
| SparseGPT (175B, 50% Sparsity) | 76.51 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | - |
| GPT-3 175B (Zero-Shot) | 76.2 | Language Models are Few-Shot Learners | - |
| OPT-175B | 75.59 | SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | - |
| GPT-3 13B (Zero-Shot) | 72.5 | Language Models are Few-Shot Learners | - |
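The Accuracy metric above is LAMBADA's last-word prediction task: the model reads a passage with its final word removed and must produce that word. The sketch below is a minimal, illustrative harness, assuming a Hugging Face causal LM (`gpt2` as a small stand-in for the models in the table) and the public `lambada` dataset; prompt format, detokenization, and shot count all differ across the cited papers, so this is not a reproduction of any reported number.

```python
# Minimal sketch of LAMBADA last-word accuracy with greedy decoding.
# Assumptions (not the exact protocol of any paper above): the Hugging Face
# "lambada" dataset with a "text" field, gpt2 as the model, and simple
# whitespace splitting to isolate the target word.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

dataset = load_dataset("lambada", split="test")

correct = total = 0
for example in dataset.select(range(100)):  # small slice for a quick check
    # LAMBADA asks the model to predict the final word from the full context.
    context, target = example["text"].rsplit(" ", 1)
    inputs = tokenizer(context, return_tensors="pt")
    with torch.no_grad():
        # Greedily generate a few tokens, enough to cover one word.
        output = model.generate(
            **inputs,
            max_new_tokens=5,
            do_sample=False,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens, then take the first word.
    continuation = tokenizer.decode(output[0, inputs["input_ids"].shape[1]:])
    predicted = continuation.strip().split(" ")[0] if continuation.strip() else ""
    correct += predicted == target
    total += 1

print(f"LAMBADA accuracy (greedy, {total} examples): {correct / total:.3f}")
```

Note that evaluation details matter here: few-shot variants prepend worked examples to the context, and papers differ on whether they score the target word's tokens directly under teacher forcing or compare freely generated text, which is one reason the same model family can report different LAMBADA numbers.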
(First 20 of 37 leaderboard entries shown.)