Improving Neural Parsing by Disentangling Model Combination and Reranking Effects

Daniel Fried, Mitchell Stern, Dan Klein

Abstract

Recent work has proposed several generative neural models for constituency parsing that achieve state-of-the-art results. Since direct search in these generative models is difficult, they have primarily been used to rescore candidate outputs from base parsers in which decoding is more straightforward. We first present an algorithm for direct search in these generative models. We then demonstrate that the rescoring results are at least partly due to implicit model combination rather than reranking effects. Finally, we show that explicit model combination can improve performance even further, resulting in new state-of-the-art numbers on the PTB of 94.25 F1 when training only on gold data and 94.66 F1 when using external data.
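The "explicit model combination" the abstract mentions is commonly realized as reranking a candidate list by an interpolation of the log-probabilities assigned by two models. The sketch below illustrates that general idea only; the function name, the candidate tuple layout, and the interpolation weight `lam` are all hypothetical and not taken from the paper.

```python
def combine_and_rerank(candidates, lam=0.5):
    """Pick the candidate parse maximizing a weighted combination of scores.

    candidates: list of (tree, base_logprob, generative_logprob) tuples.
    lam: hypothetical interpolation weight between the base parser's
         score (lam) and the generative model's score (1 - lam).
    """
    def score(c):
        _tree, base_lp, gen_lp = c
        return lam * base_lp + (1 - lam) * gen_lp
    # Return the tree of the highest-scoring candidate.
    return max(candidates, key=score)[0]

# Toy example: two candidate trees scored by both models.
cands = [("(S (NP ...) (VP ...))", -12.0, -10.5),
         ("(S (NP ...) (VP ...) (PP ...))", -11.0, -13.0)]
best = combine_and_rerank(cands, lam=0.5)
```

Setting `lam=1.0` reduces this to the base parser alone, and `lam=0.0` to rescoring purely with the generative model, so a single knob spans the pure-reranking and pure-combination regimes the paper disentangles.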

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| constituency-parsing-on-penn-treebank | Model combination | F1 score: 94.66 |
