Structured Training for Neural Network Transition-Based Parsing
David Weiss; Chris Alberti; Michael Collins; Slav Petrov

Abstract
We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
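The abstract outlines the training recipe at a high level: a pretrained neural network supplies a fixed representation of each parser state, and only a final layer is learned with the structured perceptron under beam-search decoding. The sketch below illustrates that recipe on toy data, using early updates in the style of Collins and Roark. It is a minimal illustration, not the paper's implementation: the names `phi`, `train_step`, and `update`, the beam width, the three-action set, the deterministic pseudo-embedding, and the synthetic gold sequences are all assumptions made for the example.

```python
import numpy as np

BEAM_SIZE = 4      # assumed beam width; in practice tuned on dev data
N_ACTIONS = 3      # e.g. SHIFT / LEFT-ARC / RIGHT-ARC in arc-standard parsing
HIDDEN_DIM = 16    # size of the fixed, pretrained hidden representation

W = np.zeros((N_ACTIONS, HIDDEN_DIM))  # final layer learned by the perceptron


def phi(sentence, history):
    """Stand-in for the FIXED network representation of a parser state,
    keyed deterministically on (sentence, action history). In the paper
    this would be the hidden layer of a pretrained feed-forward net."""
    seed = abs(hash((tuple(sentence), tuple(history)))) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(HIDDEN_DIM)


def train_step(sentence, gold):
    """One structured-perceptron step with beam search and early update:
    stop and update as soon as the gold action prefix falls off the beam."""
    beam = [((), 0.0)]  # each item: (action history, cumulative score)
    for t in range(len(gold)):
        expanded = []
        for hist, score in beam:
            h = phi(sentence, hist)
            for a in range(N_ACTIONS):
                expanded.append((hist + (a,), score + W[a] @ h))
        beam = sorted(expanded, key=lambda x: -x[1])[:BEAM_SIZE]

        gold_prefix = tuple(gold[: t + 1])
        if all(hist != gold_prefix for hist, _ in beam):
            update(sentence, gold_prefix, beam[0][0])   # early update
            return
    best = beam[0][0]
    if best != tuple(gold):
        update(sentence, tuple(gold), best)             # full-sequence update


def update(sentence, gold_hist, pred_hist):
    """Perceptron update: reward features along the gold path, penalize
    the predicted path. Only W changes; the representation phi is fixed."""
    for t, (g, p) in enumerate(zip(gold_hist, pred_hist)):
        W[g] += phi(sentence, gold_hist[:t])
        W[p] -= phi(sentence, pred_hist[:t])


# Toy usage: random "sentences" paired with synthetic gold action sequences.
rng = np.random.default_rng(1)
data = [(list(rng.integers(0, 100, size=5)),
         list(rng.integers(0, N_ACTIONS, size=8))) for _ in range(20)]
for epoch in range(3):
    for sent, gold in data:
        train_step(sent, gold)
```

The design point the sketch mirrors is that `phi` never changes during perceptron training: the network representation is frozen after pretraining, and beam search with early updates fits only the final scoring layer `W` to whole action sequences rather than to isolated decisions.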
Benchmarks
| Benchmark | Methodology | UAS | LAS | POS |
|---|---|---|---|---|
| Dependency Parsing on Penn Treebank | Weiss et al. | 94.01 | 92.06 | 97.3 |