Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining
Grigorii Guz, Patrick Huber, Giuseppe Carenini

Abstract
RST-based discourse parsing is an important NLP task with numerous downstream applications, such as summarization, machine translation and opinion mining. In this paper, we demonstrate a simple, yet highly accurate discourse parser, incorporating recent contextual language models. Our parser establishes the new state-of-the-art (SOTA) performance for predicting structure and nuclearity on two key RST datasets, RST-DT and Instr-DT. We further demonstrate that pretraining our parser on the recently available large-scale "silver-standard" discourse treebank MEGA-DT provides even larger performance benefits, suggesting a novel and promising research direction in the field of discourse analysis.
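The paper's parser is not reproduced here, but its central ingredient, feeding contextual language model representations of discourse units into a parsing model, can be illustrated. Below is a minimal sketch using the HuggingFace `transformers` library; the `bert-base-uncased` checkpoint, the mean-pooling step, and the example EDUs are illustrative assumptions, not the paper's exact configuration. It encodes each elementary discourse unit (EDU) into a single dense vector that a downstream structure/nuclearity classifier could consume.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Illustrative encoder choice; the paper's actual contextual LM may differ.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# Two toy EDUs from a hypothetical document.
edus = ["Although the weather was cold,", "the hikers pressed on."]

with torch.no_grad():
    enc = tokenizer(edus, padding=True, return_tensors="pt")
    # last_hidden_state: (num_edus, seq_len, hidden_dim)
    hidden = model(**enc).last_hidden_state
    # Mean-pool over real tokens (mask out padding) -> one vector per EDU.
    mask = enc["attention_mask"].unsqueeze(-1)
    edu_vecs = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

print(edu_vecs.shape)  # torch.Size([2, 768]): one 768-d vector per EDU
```

These per-EDU vectors would then be composed bottom-up (or consumed by a transition system) to predict the tree structure and nuclearity labels reported in the benchmarks below.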
Benchmarks
| Benchmark | Methodology | Standard Parseval (Span) | Standard Parseval (Nuclearity) |
|---|---|---|---|
| Instructional-DT (Instr-DT) | Guz et al. (2020) | 64.55 | 44.41 |
| Instructional-DT (Instr-DT) | Guz et al. (2020), pretrained on MEGA-DT | 65.41 | 46.59 |
| RST-DT | Guz et al. (2020) | 72.43 | 61.38 |
| RST-DT | Guz et al. (2020), pretrained on MEGA-DT | 72.94 | 61.86 |
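For reference, the Standard Parseval metrics in the table score a predicted tree's constituents against the gold tree's constituents. The sketch below is a hypothetical micro-averaged Parseval F1, assuming each tree is represented as a set of hashable constituent tuples: `(start_edu, end_edu)` for the Span metric, with a nuclearity label appended for the Nuclearity metric. The official evaluation tooling handles further details (e.g., treatment of the root span) not shown here.

```python
def parseval_f1(gold_spans, pred_spans):
    """Micro-averaged Parseval F1 over tree constituents.

    Each constituent is a hashable tuple, e.g. (start_edu, end_edu)
    for Span, or (start_edu, end_edu, nuclearity) for Nuclearity.
    """
    gold, pred = set(gold_spans), set(pred_spans)
    matched = len(gold & pred)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy example: a 3-EDU tree where the parser gets one constituent wrong.
gold = [(0, 2), (0, 1)]
pred = [(0, 2), (1, 2)]
print(parseval_f1(gold, pred))  # 0.5

# Nuclearity variant: the same spans, now also labeled (N = nucleus, S = satellite).
gold_nuc = [(0, 2, "NS"), (0, 1, "NN")]
pred_nuc = [(0, 2, "NS"), (1, 2, "NS")]
print(parseval_f1(gold_nuc, pred_nuc))  # 0.5
```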