Semantic Parsing On Wikitablequestions
Evaluation metrics
Accuracy (Dev)
Accuracy (Test)
Evaluation results
Performance of each model on this benchmark
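WikiTableQuestions scores a prediction as correct when the predicted denotation (the set of answer values) matches the gold answer set, and accuracy is the fraction of questions answered correctly. A minimal sketch of that metric, assuming list-of-answers predictions; the official evaluator additionally normalizes numbers, dates, and strings before comparing:

```python
def denotation_accuracy(predictions, golds):
    """Fraction of examples whose predicted answer set equals the gold set.

    Simplified illustration only: exact set match, no value normalization.
    """
    correct = sum(
        set(pred) == set(gold) for pred, gold in zip(predictions, golds)
    )
    return correct / len(golds)

# Hypothetical example: 2 of 3 questions answered correctly.
preds = [["Greece"], ["4"], ["Thailand"]]
golds = [["Greece"], ["5"], ["Thailand"]]
print(denotation_accuracy(preds, golds))  # → 0.6666666666666666
```

The Dev and Test columns below report this accuracy on the development and test splits, respectively.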
| Model | Accuracy (Dev) | Accuracy (Test) | Paper Title |
| --- | --- | --- | --- |
| ARTEMIS-DA | - | 80.8 | ARTEMIS-DA: An Advanced Reasoning and Transformation Engine for Multi-Step Insight Synthesis in Data Analytics |
| TabLaP | - | 76.6 | Accurate and Regret-aware Numerical Problem Solver for Tabular Question Answering |
| SynTQA (GPT) | - | 74.4 | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA |
| Mix SC | - | 73.6 | Rethinking Tabular Data Understanding with Large Language Models |
| SynTQA (RF) | - | 71.6 | SynTQA: Synergistic Table-based Question Answering via Mixture of Text-to-SQL and E2E TQA |
| CABINET | - | 69.1 | CABINET: Content Relevance based Noise Reduction for Table Question Answering |
| Chain-of-Table | - | 67.31 | Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding |
| Tab-PoT | - | 66.78 | Efficient Prompting for LLM-based Generative Internet of Things |
| Dater | 64.8 | 65.9 | Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning |
| LEVER | 64.6 | 65.8 | LEVER: Learning to Verify Language-to-Code Generation with Execution |
| TabSQLify (col+row) | - | 64.7 | TabSQLify: Enhancing Reasoning Capabilities of LLMs Through Table Decomposition |
| Binder | 65.0 | 64.6 | Binding Language Models in Symbolic Languages |
| OmniTab-Large | 62.5 | 63.3 | OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering |
| NormTab (Targeted) + SQL | - | 61.20 | NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data Normalization |
| ReasTAP-Large | 59.7 | 58.7 | ReasTAP: Injecting Table Reasoning Skills During Pre-training via Synthetic Reasoning Examples |
| TAPEX-Large | 57.0 | 57.5 | TAPEX: Table Pre-training via Learning a Neural SQL Executor |
| MAPO + TaBERT-Large (K = 3) | 52.2 | 51.8 | TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data |
| T5-3b (UnifiedSKG) | 50.65 | 49.29 | UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models |
| TAPAS-Large (pre-trained on SQA) | - | 48.8 | TAPAS: Weakly Supervised Table Parsing via Pre-training |
| Structured Attention | 43.7 | 44.5 | Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs |