AMR Parsing as Sequence-to-Graph Transduction
Sheng Zhang; Xutai Ma; Kevin Duh; Benjamin Van Durme

Abstract
We propose an attention-based model that treats AMR parsing as sequence-to-graph transduction. Unlike most AMR parsers, which rely on pre-trained aligners, external semantic resources, or data augmentation, our proposed parser is aligner-free and can be effectively trained with limited amounts of labeled AMR data. Our parser outperforms all previously reported SMATCH scores on both AMR 2.0 (76.3% F1 on LDC2017T10) and AMR 1.0 (70.2% F1 on LDC2014T12).
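To make the "sequence-to-graph" idea concrete: after a decoder has produced node representations, edges between node pairs can be scored with an attention mechanism. The sketch below uses biaffine attention, a standard choice in neural graph parsers; it is a hypothetical PyTorch illustration, not the authors' exact architecture, and all names and dimensions (`BiaffineEdgeScorer`, `node_dim`, `hidden_dim`) are assumptions.

```python
import torch
import torch.nn as nn

class BiaffineEdgeScorer(nn.Module):
    """Score every (head, dependent) pair of decoded nodes.

    Hypothetical sketch: names and dimensions are assumptions,
    not the paper's exact architecture.
    """

    def __init__(self, node_dim: int, hidden_dim: int = 256):
        super().__init__()
        # Separate projections let a node behave differently as head vs. dependent.
        self.head_mlp = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ELU())
        self.dep_mlp = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ELU())
        # Bilinear weight; the extra row acts as a bias term on the head side.
        self.U = nn.Parameter(torch.empty(hidden_dim + 1, hidden_dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (batch, num_nodes, node_dim) representations from the decoder.
        head = self.head_mlp(nodes)                 # (B, N, H)
        dep = self.dep_mlp(nodes)                   # (B, N, H)
        ones = head.new_ones(*head.shape[:-1], 1)
        head = torch.cat([head, ones], dim=-1)      # (B, N, H+1)
        # scores[b, i, j]: score of an edge with node i as head, node j as dependent.
        return head @ self.U @ dep.transpose(1, 2)  # (B, N, N)

# Usage: 2 sentences, 10 decoded nodes each, 512-dim node states.
scores = BiaffineEdgeScorer(node_dim=512)(torch.randn(2, 10, 512))
print(scores.shape)  # torch.Size([2, 10, 10])
```

Scoring all node pairs at once this way keeps edge prediction a single batched matrix product, which is why biaffine scorers are a common fit for graph-structured decoding.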
Code Repositories
sheng-z/stog (official implementation, PyTorch)
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| AMR parsing on LDC2014T12 | Sequence-to-Graph Transduction | F1 (Full): 70.0, F1 (Newswire): 75.0 |
| AMR parsing on LDC2014T12 | Two-stage Sequence-to-Graph Transducer | F1 (Full): 70.2 |
| AMR parsing on LDC2017T10 | Sequence-to-Graph Transduction | SMATCH: 76.3 |
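For context on the metric: SMATCH converts both the predicted and gold AMR graphs into triples, searches for the variable mapping that maximizes their overlap (the reference smatch tool uses hill-climbing for this), and reports the triple-level F1. The final arithmetic is simple; the helper below is a minimal sketch with hypothetical counts, assuming the alignment step has already produced the matched-triple count.

```python
def smatch_f1(matched: int, predicted: int, gold: int) -> tuple:
    """Precision/recall/F1 over AMR triples, as SMATCH reports them.

    `matched` is the number of triples shared by the predicted and gold
    graphs under the best variable mapping found by the smatch tool;
    this helper only performs the final scoring arithmetic.
    """
    precision = matched / predicted if predicted else 0.0
    recall = matched / gold if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical example: 763 of 1000 predicted triples match 1000 gold triples.
print(smatch_f1(763, 1000, 1000))  # (0.763, 0.763, 0.763) -> 76.3 SMATCH
```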