HyperAI
HyperAI超神经
Data-to-Text Generation on WebNLG
Metric: BLEU

Performance results of various models on this benchmark:

Model Name                          | BLEU  | Paper Title
----------------------------------- | ----- | -----------
Control Prefixes (A1, T5-large)     | 67.32 | Control Prefixes for Parameter-Efficient Text Generation
Control Prefixes (A1, A2, T5-large) | 67.15 | Control Prefixes for Parameter-Efficient Text Generation
JointGT Baseline                    | 67.08 | FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation
T5-B Baseline                       | 67.04 | FactSpotter: Evaluating the Factual Faithfulness of Graph-to-Text Generation
T5-large + Wiki + Position          | 66.07 | Stage-wise Fine-tuning for Graph-to-Text Generation
HTLM (fine-tuning)                  | 65.4  | HTLM: Hyper-Text Pre-Training and Prompting of Language Models
T5-small                            | 65.05 | Investigating Pretrained Language Models for Graph-to-Text Generation
TrICy (trK = trk* = 0.24)           | 64.73 | TrICy: Trigger-guided Data-to-text Generation with Intent aware Attention-Copy
T5-Base                             | 64.7  | Text-to-Text Pre-Training for Data-to-Text Tasks
TrICy (trK = 0)                     | 64.08 | TrICy: Trigger-guided Data-to-text Generation with Intent aware Attention-Copy
CGE-LW (Levi Graph)                 | 63.69 | Modeling Global and Local Node Contexts for Text Generation from Knowledge Graphs
Multiview-G2S                       | 62.89 | Structural Information Preserving for Graph-to-Text Generation
Graformer                           | 61.15 | Modeling Graph Structure via Relative Position for Text Generation from Knowledge Graphs
GTR-LSTM (entity masking)           | 58.6  | GTR-LSTM: A Triple Encoder for Sentence Generation from RDF Data
E2E GRU                             | 57.20 | Neural data-to-text generation: A comparison between pipeline and end-to-end architectures
GCN EC                              | 55.9  | Deep Graph Convolutional Encoders for Structured Data to Text Generation
BestPlan                            | 47.4  | Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation
BART (TextBox 2.0)                  | -     | TextBox 2.0: A Text Generation Library with Pre-trained Language Models
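The scores above are corpus-level BLEU: the geometric mean of clipped n-gram precisions (typically orders 1-4) between model outputs and reference texts, scaled by a brevity penalty and usually reported on a 0-100 scale. As a rough illustration of how such a score is computed, here is a minimal single-reference sketch; the official WebNLG evaluation uses multi-reference BLEU via standard scripts, so this simplified version will not reproduce the table's numbers exactly.

```python
# Minimal corpus-level BLEU sketch: uniform weights over 1..4-gram
# clipped precisions plus a brevity penalty. Single reference per
# hypothesis for simplicity (WebNLG scoring is multi-reference).
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(hypotheses, references, max_n=4):
    """hypotheses, references: parallel lists of token lists.
    Returns BLEU on a 0-100 scale."""
    clipped = [0] * max_n  # clipped n-gram matches, per order
    totals = [0] * max_n   # total hypothesis n-grams, per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        hyp_len += len(hyp)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            hyp_counts = Counter(ngrams(hyp, n))
            ref_counts = Counter(ngrams(ref, n))
            # Clip each hypothesis n-gram count by its reference count.
            clipped[n - 1] += sum(min(c, ref_counts[g])
                                  for g, c in hyp_counts.items())
            totals[n - 1] += sum(hyp_counts.values())
    if min(clipped) == 0:
        return 0.0  # some order has zero matches; geometric mean is 0
    log_prec = sum(math.log(c / t) for c, t in zip(clipped, totals)) / max_n
    # Brevity penalty: punish hypotheses shorter than the references.
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)
```

For example, a hypothesis identical to its reference scores 100, while a hypothesis sharing no unigrams with its reference scores 0; real system outputs fall in between, which is the scale the table's 47-67 range reflects.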