An Empirical Study of Building a Strong Baseline for Constituency Parsing

Makoto Morishita, Sho Takase, Masaaki Nagata, Jun Suzuki, Hidetaka Kamigaito


Abstract

This paper investigates the construction of a strong baseline for constituency parsing based on general-purpose sequence-to-sequence models. We incorporate several techniques that were mainly developed for natural language generation tasks, e.g., machine translation and summarization, and demonstrate that the sequence-to-sequence model achieves the performance of the current top-notch parsers (almost) without requiring any explicit task-specific knowledge or architecture for constituency parsing.
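The key idea behind applying a general-purpose sequence-to-sequence model to constituency parsing is to linearize the parse tree into a flat token sequence that the decoder can generate. Below is a minimal sketch (not the authors' code) of one common linearization convention, in which terminal words are replaced by an "XX" placeholder; the paper may use a different variant, and the tree representation here is purely illustrative.

```python
# Sketch: linearizing a constituency tree so that parsing becomes a
# sequence-to-sequence task (input: the sentence, target: this token sequence).

def linearize(tree):
    """Turn a nested (label, children) tuple into a flat bracket token sequence."""
    label, children = tree
    if isinstance(children, str):            # pre-terminal node: (POS tag, word)
        return ["(" + label, "XX", ")" + label]
    tokens = ["(" + label]
    for child in children:                   # recurse over constituents
        tokens.extend(linearize(child))
    tokens.append(")" + label)
    return tokens

if __name__ == "__main__":
    # Tree for "I like parsing": (S (NP (PRP I)) (VP (VBP like) (NP (NN parsing))))
    tree = ("S", [("NP", [("PRP", "I")]),
                  ("VP", [("VBP", "like"), ("NP", [("NN", "parsing")])])])
    print(" ".join(linearize(tree)))
    # (S (NP (PRP XX )PRP )NP (VP (VBP XX )VBP (NP (NN XX )NN )NP )VP )S
```

At decoding time, the generated bracket sequence is deterministically mapped back to a tree, with the placeholder tokens re-aligned to the input words.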

Benchmarks

Benchmark | Methodology | Metrics
constituency-parsing-on-penn-treebank | LSTM Encoder-Decoder + LSTM-LM | F1 score: 94.32
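The F1 score above is the standard evalb-style labeled bracketing F1 on the Penn Treebank test set. A minimal sketch of how this metric is computed is shown below; representing each parse as a set of labeled spans and the specific evalb settings (e.g., which labels are ignored) are assumptions for illustration, not the official scorer.

```python
# Sketch: labeled bracketing F1 between a gold and a predicted parse,
# each given as a set of (label, start, end) spans.

def bracket_f1(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    matched = len(gold & pred)                      # spans identical in label and extent
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: the prediction recovers S and the first NP but mislabels the VP span.
gold = [("S", 0, 3), ("NP", 0, 1), ("VP", 1, 3)]
pred = [("S", 0, 3), ("NP", 0, 1), ("NP", 2, 3)]
print(round(bracket_f1(gold, pred) * 100, 2))  # 66.67
```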
