Graph Pre-training for AMR Parsing and Generation
Xuefeng Bai, Yulong Chen, Yue Zhang

Abstract
Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have respectively advanced AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are therefore sub-optimal for modeling structural knowledge. To address this, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks that integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs.
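The graph auto-encoding objective described in the abstract can be pictured as a denoising task over linearized AMR graphs: parts of the graph are corrupted and the model is trained to reconstruct the original. The snippet below is a minimal illustrative sketch only, assuming a PENMAN-style linearization and a BART-style `<mask>` token; the paper's actual masking strategies and its joint text–graph pre-training tasks are not reproduced here.

```python
import random

MASK = "<mask>"  # BART-style mask token (assumed for illustration)

def mask_linearized_amr(tokens, mask_prob=0.15, seed=0):
    """Toy graph auto-encoding corruption: randomly mask node/relation tokens
    of a linearized AMR graph; a seq2seq PLM would then be trained to
    reconstruct the original graph from the corrupted input."""
    rng = random.Random(seed)
    corrupted = []
    for tok in tokens:
        # Keep the structural brackets intact; mask concepts and relations
        # with probability mask_prob.
        if tok in ("(", ")"):
            corrupted.append(tok)
        elif rng.random() < mask_prob:
            corrupted.append(MASK)
        else:
            corrupted.append(tok)
    return corrupted

# Linearized AMR for "The boy wants to go" (PENMAN-style tokens).
graph = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )".split()
print(" ".join(mask_linearized_amr(graph, mask_prob=0.3)))
```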
Benchmarks
| Benchmark | Model | Smatch |
|---|---|---|
| AMR Parsing on Bio | AMRBART-large | 63.2 |
| AMR Parsing on LDC2017T10 (AMR 2.0) | AMRBART-large | 85.4 |
| AMR Parsing on LDC2020T02 (AMR 3.0) | AMRBART-large | 84.2 |
| AMR Parsing on New3 | AMRBART-large | 76.9 |
| AMR Parsing on The Little Prince | AMRBART-large | 79.8 |