Pavlo Vasylenko; Pere-Lluís Huguet Cabot; Abelardo Carlos Martínez Lorenzo; Roberto Navigli

Abstract
Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill.
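The approach combines two ingredients named in the abstract: a sequence-to-sequence parser trained with teacher forcing to emit a linearized (PENMAN) form of the AMR graph, and a self-knowledge-distillation objective in which a graph-informed ("teacher") forward pass of the same model supervises a plain-text ("student") pass. The sketch below illustrates what such a combined loss could look like in PyTorch; the function name, hyperparameters, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch.nn.functional as F

# Minimal sketch of a self-knowledge-distillation loss of the kind the
# abstract describes. Assumption (not from the paper): the same model is
# run twice per batch -- a "teacher" pass whose encoder sees leaked graph
# structure via structural adapters and word-to-node alignment, and a
# "student" pass that sees the plain sentence only. Both passes decode a
# linearized AMR, e.g. for "The boy wants to go":
#   (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))

def leak_distill_loss(student_logits, teacher_logits, gold_ids,
                      temperature=2.0, alpha=0.5):
    """student/teacher_logits: (batch, seq, vocab); gold_ids: (batch, seq)."""
    # Teacher-forcing cross-entropy on the gold linearized graph tokens.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        gold_ids.view(-1),
        ignore_index=-100,  # padding positions
    )
    # KL distillation term: the teacher distribution is detached because
    # teacher and student share parameters (self-distillation).
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits.detach() / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return alpha * ce + (1.0 - alpha) * kl
```

Since the graph information is leaked only into the training-time teacher pass, inference uses the plain-text path alone and requires no gold graph or alignment.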
Code Repositories
- sapienzanlp/LeakDistill (official): http://www.github.com/sapienzanlp/LeakDistill
Benchmarks
| Benchmark | Model | Smatch |
|---|---|---|
| AMR parsing on LDC2017T10 (AMR 2.0) | LeakDistill | 86.1 |
| AMR parsing on LDC2017T10 (AMR 2.0) | LeakDistill (base) | 84.7 |
| AMR parsing on LDC2020T02 (AMR 3.0) | LeakDistill | 84.6 |
| AMR parsing on LDC2020T02 (AMR 3.0) | LeakDistill (base) | 83.5 |
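Smatch, the metric reported above, scores a predicted AMR against the gold AMR by searching for the variable mapping that maximizes the number of matching triples and reporting the resulting F1. The snippet below is a usage sketch with the reference `smatch` package, assuming the `get_amr_match`/`compute_f` helpers from that implementation; the toy graphs are illustrative, not drawn from the datasets.

```python
import smatch  # pip install smatch

# Two AMRs in PENMAN notation: a gold graph and a prediction that
# misses the reentrant :ARG0 edge inside go-02.
gold = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
pred = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02))"

# get_amr_match returns (best_match_num, test_triple_num, gold_triple_num);
# compute_f turns those counts into precision, recall, and F1.
best, test_total, gold_total = smatch.get_amr_match(pred, gold)
precision, recall, f1 = smatch.compute_f(best, test_total, gold_total)
print(f"Smatch F1: {f1:.3f}")
```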