Structural Information Preserving for Graph-to-Text Generation

Linfeng Song Ante Wang Jinsong Su Yue Zhang Kun Xu Yubin Ge Dong Yu

Abstract

The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs. A crucial defect of current state-of-the-art models is that they may garble or even drop the core structural information of input graphs when generating outputs. We propose to tackle this problem by leveraging richer training signals that guide our model to preserve input information. In particular, we introduce two types of autoencoding losses, each focusing on a different aspect (a.k.a. view) of the input graphs. The losses are then back-propagated to better calibrate our model via multi-task training. Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline. Our code is available at http://github.com/Soistesimmer/AMR-multiview.
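The multi-task training described in the abstract combines the main generation loss with the two auxiliary autoencoding losses before back-propagation. A minimal sketch of such a weighted combination is shown below; the function name, the use of a plain weighted sum, and the example coefficients are illustrative assumptions, not the paper's actual implementation.

```python
def multitask_loss(gen_loss, ae_losses, weights):
    """Combine the main generation loss with auxiliary autoencoding
    losses via a weighted sum (hypothetical weighting scheme; the
    paper's exact coefficients are not specified here).

    gen_loss  -- scalar loss of the graph-to-text generation objective
    ae_losses -- one scalar loss per autoencoding view of the input graph
    weights   -- one coefficient per autoencoding loss
    """
    assert len(ae_losses) == len(weights)
    return gen_loss + sum(w * l for w, l in zip(weights, ae_losses))

# Example: a generation loss plus two view-specific autoencoding losses,
# each down-weighted so they act as auxiliary signals.
total = multitask_loss(2.0, [0.5, 0.8], [0.1, 0.1])
```

In a typical deep-learning framework the scalar returned here would be the quantity on which back-propagation is run, so gradients from all three objectives jointly calibrate the shared model parameters.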

Benchmarks

Benchmark                          | Methodology   | Metrics
data-to-text-generation-on-webnlg  | Multiview-G2S | BLEU: 62.89
