Constituency Parsing with a Self-Attentive Encoder

Nikita Kitaev; Dan Klein

Abstract

We demonstrate that replacing an LSTM encoder with a self-attentive architecture can lead to improvements to a state-of-the-art discriminative constituency parser. The use of attention makes explicit the manner in which information is propagated between different locations in the sentence, which we use to both analyze our model and propose potential improvements. For example, we find that separating positional and content information in the encoder can lead to improved parsing accuracy. Additionally, we evaluate different approaches for lexical representation. Our parser achieves new state-of-the-art results for single models trained on the Penn Treebank: 93.55 F1 without the use of any external data, and 95.13 F1 when using pre-trained word representations. Our parser also outperforms the previous best-published accuracy figures on 8 of the 9 languages in the SPMRL dataset.
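
The abstract's observation that separating positional and content information in the encoder can improve parsing accuracy can be illustrated with a short sketch. The code below is a minimal, illustrative PyTorch variant of factored self-attention, not the authors' implementation (see nikitakit/self-attentive-parser below for that); the module name, parameter names, and the single-head simplification are assumptions made for this example.

```python
# Minimal sketch (assumptions labeled): self-attention factored into separate
# content and position streams. Illustrative only; names such as
# FactoredSelfAttention, d_content, and d_position are invented for this example.
import math
import torch
import torch.nn as nn


class FactoredSelfAttention(nn.Module):
    """Single-head self-attention with separate content and position streams."""

    def __init__(self, d_content: int, d_position: int):
        super().__init__()
        # Separate projections for the content stream ...
        self.q_c = nn.Linear(d_content, d_content, bias=False)
        self.k_c = nn.Linear(d_content, d_content, bias=False)
        self.v_c = nn.Linear(d_content, d_content, bias=False)
        # ... and for the position stream.
        self.q_p = nn.Linear(d_position, d_position, bias=False)
        self.k_p = nn.Linear(d_position, d_position, bias=False)
        self.v_p = nn.Linear(d_position, d_position, bias=False)
        self.scale_c = math.sqrt(d_content)
        self.scale_p = math.sqrt(d_position)

    def forward(self, content: torch.Tensor, position: torch.Tensor) -> torch.Tensor:
        # content:  (batch, seq_len, d_content)   word/character features
        # position: (batch, seq_len, d_position)  positional embeddings
        attn_c = torch.softmax(
            self.q_c(content) @ self.k_c(content).transpose(-2, -1) / self.scale_c,
            dim=-1,
        )
        attn_p = torch.softmax(
            self.q_p(position) @ self.k_p(position).transpose(-2, -1) / self.scale_p,
            dim=-1,
        )
        # Each stream attends with its own weights; the outputs are concatenated,
        # so content and position never mix inside the attention layer itself.
        return torch.cat(
            [attn_c @ self.v_c(content), attn_p @ self.v_p(position)], dim=-1
        )


# Example usage (shapes only):
# attn = FactoredSelfAttention(d_content=512, d_position=512)
# out = attn(torch.randn(2, 10, 512), torch.randn(2, 10, 512))  # (2, 10, 1024)
```

Because each stream has its own projections and its own attention distribution, content features cannot leak into positional reasoning (or vice versa) inside the attention layer, which is one way to realize the separation the abstract describes.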

Code Repositories

nikitakit/self-attentive-parser (Official, TensorFlow)
napakalas/NLIMED (TensorFlow)
ringos/nfc-parser (PyTorch)
asadovsky/nn (TensorFlow)

Benchmarks

Benchmark | Methodology | Metric
Constituency Parsing on CTB5 | Kitaev et al. 2018 | F1 score: 87.43
Constituency Parsing on Penn Treebank | Self-attentive encoder + ELMo | F1 score: 95.13
