Higher-Order Syntactic Attention Network for Longer Sentence Compression

Hidetaka Kamigaito, Katsuhiko Hayashi, Masaaki Nagata, Tsutomu Hirao

Abstract

A sentence compression method using LSTMs can generate fluent compressed sentences. However, its performance degrades significantly on longer sentences because it does not explicitly handle syntactic features. To solve this problem, we propose a higher-order syntactic attention network (HiSAN) that handles higher-order dependency features as an attention distribution over LSTM hidden states. Furthermore, to avoid the influence of incorrect parse results, we trained HiSAN by jointly maximizing the probability of a correct output together with the attention distribution. Experimental results on the Google sentence compression dataset showed that our method achieved the best performance in F1 as well as ROUGE-1, ROUGE-2, and ROUGE-L scores: 83.2, 82.9, 75.8, and 82.7, respectively. In a human evaluation, our method also outperformed baseline methods in both readability and informativeness.
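The abstract names two concrete mechanisms: a dependency-head attention distribution computed over LSTM hidden states, and a joint objective that maximizes the probability of the correct compression together with the attention's agreement with gold parse heads. The PyTorch sketch below illustrates a first-order version of this idea only; the class name SyntacticAttentionTagger, the bilinear head scorer, and the equal weighting of the two loss terms are illustrative assumptions, not the authors' implementation, and HiSAN's higher-order features (e.g., grandparent and sibling dependencies) are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntacticAttentionTagger(nn.Module):
    """First-order sketch of a HiSAN-style model: a BiLSTM keep/delete
    tagger whose attention over hidden states is trained to match gold
    dependency heads, jointly with the compression objective."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Bilinear scorer: score[b, i, j] = plausibility of token j
        # being the syntactic head of token i.
        self.head_scorer = nn.Parameter(torch.empty(hidden_dim, hidden_dim))
        nn.init.xavier_uniform_(self.head_scorer)
        # Keep/delete classifier over [token state; attended head context].
        self.classifier = nn.Linear(hidden_dim * 2, 2)

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))            # (B, T, H)
        scores = h @ self.head_scorer @ h.transpose(1, 2)  # (B, T, T)
        log_attn = F.log_softmax(scores, dim=-1)           # parent attention
        context = log_attn.exp() @ h                       # head context
        logits = self.classifier(torch.cat([h, context], dim=-1))
        return logits, log_attn

def joint_loss(logits, log_attn, labels, heads):
    """Jointly maximize p(keep/delete labels) and p(gold heads), as the
    abstract describes; the 1:1 weighting here is an assumption."""
    tag_loss = F.cross_entropy(logits.reshape(-1, 2), labels.reshape(-1))
    head_loss = F.nll_loss(log_attn.reshape(-1, log_attn.size(-1)),
                           heads.reshape(-1))
    return tag_loss + head_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SyntacticAttentionTagger(vocab_size=100)
    tokens = torch.randint(100, (2, 6))   # batch of 2 sentences, 6 tokens
    labels = torch.randint(2, (2, 6))     # 1 = keep, 0 = delete
    heads = torch.randint(6, (2, 6))      # gold head index for each token
    logits, log_attn = model(tokens)
    loss = joint_loss(logits, log_attn, labels, heads)
    loss.backward()
    print(float(loss))
```

Supervising the attention with gold heads, rather than feeding parser output in as a hard feature, is consistent with the abstract's stated motivation: at test time the head distribution is predicted by the model itself, so an external parser's mistakes cannot propagate directly into the compression decision.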

Benchmarks

Benchmark: sentence-compression-on-google-dataset
Methodology: Higher-Order Syntactic Attention Network
Metrics: F1: 0.835
