Question Answering on OpenBookQA
Evaluation metric: Accuracy
Evaluation results: performance of each model on this benchmark
| Model name | Accuracy | Paper Title | Repository |
| --- | --- | --- | --- |
| GPT-4 + knowledge base | 95.9 | - | - |
| MVP-Tuning (ensemble) | 95.2 | - | - |
| PaLM 540B (Self Improvement, Self Consistency) | 94.4 | Large Language Models Can Self-Improve | - |
| X-Reasoner | 94.2 | - | - |
| PaLM 540B (Self Improvement, CoT Prompting) | 93 | Large Language Models Can Self-Improve | - |
| PaLM 540B (Self Improvement, Standard-Prompting) | 92 | Large Language Models Can Self-Improve | - |
| DeBERTa-xxlarge 1.5B + MVP-Tuning | 91.3 | - | - |
| GrapeQA: PEGA+CANP | 90 | GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering | - |
| PaLM 540B (Self Consistency) | 90 | Large Language Models Can Self-Improve | - |
| GenMC 11B | 89.8 | Clues Before Answers: Generation-Enhanced Multiple-Choice QA | - |
| AristoRoBERTa + MVP-Tuning | 87.6 | - | - |
| AristoRoBERTa + Graph Soft Counter | 87.4 | GNN is a Counter? Revisiting GNN for Question Answering | - |
| UnifiedQA 11B | 87.2 | UnifiedQA: Crossing Format Boundaries With a Single QA System | - |
| LLaMA-3 8B+MoSLoRA | 86.8 | Mixture-of-Subspaces in Low-Rank Adaptation | - |
| PaLM 540B (CoT Prompting) | 86.4 | Large Language Models Can Self-Improve | - |
| LLaMA-3 8B + MixLoRA | 84.8 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | - |
| PaLM 540B (Standard-Prompting) | 84.4 | Large Language Models Can Self-Improve | - |
| TTTTT 3B | 83.2 | Fusing Context Into Knowledge Graph for Commonsense Question Answering | - |
| LLaMA-2 13B + MixLoRA | 83 | MixLoRA: Enhancing Large Language Models Fine-Tuning with LoRA-based Mixture of Experts | - |
| QA-GNN | 82.8 | QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering | - |
The table above shows 20 of the 45 results reported for this benchmark.
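The Accuracy figures above are the fraction of OpenBookQA's 4-way multiple-choice questions answered correctly. A minimal sketch of how that number can be computed is below; the `predict` stub and the use of the Hugging Face dataset `allenai/openbookqa` are illustrative assumptions, not part of this leaderboard, and each listed paper evaluates with its own pipeline.

```python
# Minimal sketch: multiple-choice accuracy on the OpenBookQA test split.
# Assumes the Hugging Face dataset "allenai/openbookqa"; `predict` is a
# placeholder for whatever model is being evaluated.
import random
from datasets import load_dataset

def predict(question: str, choices: list[str]) -> int:
    """Placeholder model: return the index of the chosen answer option."""
    return random.randrange(len(choices))  # random guessing, ~25% baseline

ds = load_dataset("allenai/openbookqa", "main", split="test")  # 500 questions

correct = 0
for ex in ds:
    option_texts = ex["choices"]["text"]    # the four answer options
    option_labels = ex["choices"]["label"]  # e.g. ["A", "B", "C", "D"]
    pred_idx = predict(ex["question_stem"], option_texts)
    if option_labels[pred_idx] == ex["answerKey"]:
        correct += 1

print(f"Accuracy: {100.0 * correct / len(ds):.1f}")
```

Replacing the `predict` stub with a real model (and multiplying by 100) yields a score directly comparable to the Accuracy column above.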