Common Sense Reasoning on ARC-Easy
Evaluation metric: Accuracy
Evaluation results: performance of each model on this benchmark
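ARC-Easy is scored as multiple-choice accuracy: a model is counted correct when it picks the right answer choice for a question. As a rough illustration of how such an evaluation is typically run, the sketch below loads the ARC-Easy test split from the Hugging Face Hub and selects the answer choice with the highest log-likelihood under a causal language model. This is only an assumed setup for illustration: the `gpt2` checkpoint is a placeholder, and the prompt format and scoring details are not taken from any paper in the table.

```python
# Minimal sketch of a zero-shot ARC-Easy accuracy evaluation.
# Assumptions (not from the leaderboard): Hugging Face `datasets`/`transformers`,
# choices scored by total log-likelihood given the question, placeholder model.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder checkpoint; leaderboard models differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def choice_loglikelihood(question: str, choice: str) -> float:
    """Total log-probability of the answer-choice tokens given the question."""
    prompt = f"Question: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + choice, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # Position i predicts token i + 1, so shift logits and targets by one.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    start = prompt_len - 1  # assumes the prompt tokenization is a prefix of full_ids
    return log_probs[start:].gather(1, targets[start:, None]).sum().item()

dataset = load_dataset("ai2_arc", "ARC-Easy", split="test")
correct = 0
for ex in dataset:
    scores = [choice_loglikelihood(ex["question"], c) for c in ex["choices"]["text"]]
    pred = ex["choices"]["label"][max(range(len(scores)), key=scores.__getitem__)]
    correct += int(pred == ex["answerKey"])

print(f"ARC-Easy accuracy: {correct / len(dataset):.3f}")
```

Published numbers differ in prompt templates, shot count (the table mixes 0-shot, 1-shot, and few-shot rows), and whether choice log-likelihoods are length-normalized, so scores from different papers are not always directly comparable.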
| Model Name | Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| ST-MoE-32B 269B (fine-tuned) | 95.2 | ST-MoE: Designing Stable and Transferable Sparse Expert Models | |
| LLaMA 3 8B+MoSLoRA (fine-tuned) | 90.5 | Mixture-of-Subspaces in Low-Rank Adaptation | |
| PaLM 2-L (1-shot) | 89.7 | PaLM 2 Technical Report | |
| PaLM 2-M (1-shot) | 88.0 | PaLM 2 Technical Report | |
| LLaMA-3 8B + MixLoRA | 86.5 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | |
| Camelidae-8×34B | 86.2 | Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks | |
| PaLM 2-S (1-shot) | 85.6 | PaLM 2 Technical Report | |
| LLaMA 65B + CFG (0-shot) | 84.2 | Stay on topic with Classifier-Free Guidance | - |
| GAL 120B (0-shot) | 83.8 | Galactica: A Large Language Model for Science | |
| LLaMA-2 13B + MixLoRA | 83.5 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | |
| LLaMA 30B + CFG (0-shot) | 83.2 | Stay on topic with Classifier-Free Guidance | - |
| Mixtral 8x7B (0-shot) | 83.1 | Mixtral of Experts | |
| FLAN 137B (few-shot, k=14) | 80.7 | Finetuned Language Models Are Zero-Shot Learners | |
| Mistral 7B (0-shot) | 80.5 | Mixtral of Experts | |
| LLaMA 33B (0-shot) | 80.0 | LLaMA: Open and Efficient Foundation Language Models | |
| Mistral 7B (0-shot) | 80.0 | Mistral 7B | |
| FLAN 137B (0-shot) | 79.6 | Finetuned Language Models Are Zero-Shot Learners | |
| LLaMA 13B + CFG (0-shot) | 79.1 | Stay on topic with Classifier-Free Guidance | - |
| LLaMA 65B (0-shot) | 78.9 | LLaMA: Open and Efficient Foundation Language Models | |
| LLaMA-2 7B + MixLoRA | 77.7 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | |
Showing 20 of 47 entries.