Chunchuan Lyu; Ivan Titov

Abstract
Abstract meaning representations (AMRs) are broad-coverage sentence-level semantic representations. AMRs represent sentences as rooted labeled directed acyclic graphs. AMR parsing is challenging partly due to the lack of annotated alignments between nodes in the graphs and words in the corresponding sentences. We introduce a neural parser which treats alignments as latent variables within a joint probabilistic model of concepts, relations and alignments. As exact inference requires marginalizing over alignments and is infeasible, we use the variational auto-encoding framework and a continuous relaxation of the discrete alignments. We show that joint modeling is preferable to using a pipeline of align and parse. The parser achieves the best reported results on the standard benchmark (74.4% on LDC2016E25).
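The core technical idea mentioned in the abstract is a continuous relaxation of the discrete concept-to-word alignments, which keeps the variational objective differentiable. Below is a minimal sketch of one common way to do this, a Gumbel-Softmax style relaxation of a categorical alignment per concept; the function and tensor names are illustrative assumptions and this is not necessarily the paper's exact relaxation.

```python
import torch
import torch.nn.functional as F

def relaxed_alignment(scores, temperature=1.0):
    """Sample a continuous relaxation of a discrete alignment.

    scores: (num_concepts, num_words) unnormalized alignment scores.
    Returns a (num_concepts, num_words) matrix whose rows are soft
    one-hot vectors; as temperature -> 0 they approach hard alignments.
    """
    # Gumbel(0, 1) noise makes sampling reparameterizable,
    # so gradients can flow through the alignment during training.
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-20) + 1e-20)
    return F.softmax((scores + gumbel) / temperature, dim=-1)

# Toy example: 3 concept nodes aligned over 5 words.
scores = torch.randn(3, 5, requires_grad=True)
soft_align = relaxed_alignment(scores, temperature=0.5)
# soft_align can stand in for a hard alignment inside the concept and
# relation likelihoods, keeping the whole objective differentiable.
print(soft_align)
```

In a joint model of this kind, the relaxed alignment matrix is consumed directly by the downstream concept and relation scorers, so alignment, concept identification, and relation prediction are trained together rather than in an align-then-parse pipeline.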
Benchmarks
| Benchmark | Methodology | Smatch (%) |
|---|---|---|
| AMR Parsing on LDC2015E86 | Joint model | 73.7 |
| AMR Parsing on LDC2017T10 | Joint model | 74.4 |