Top-Down Discourse Parsing via Sequence Labelling
Fajri Koto, Jey Han Lau, Timothy Baldwin

Abstract
We introduce a top-down approach to discourse parsing that is conceptually simpler than its predecessors (Kobayashi et al., 2020; Zhang et al., 2020). By framing the task as a sequence labelling problem where the goal is to iteratively segment a document into individual discourse units, we are able to eliminate the decoder and reduce the search space for splitting points. We explore both traditional recurrent models and modern pre-trained transformer models for the task, and additionally introduce a novel dynamic oracle for top-down parsing. Based on the Full metric, our proposed LSTM model sets a new state-of-the-art for RST parsing.
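To make the decoder-free formulation concrete, below is a minimal sketch of greedy top-down splitting, assuming a scorer over candidate boundaries inside the current span. The `score_splits` callable, the `Tree` encoding, and the dummy scorer are illustrative stand-ins, not the authors' implementation: at each step the model scores every boundary in the current span, the argmax becomes the split point, and the parser recurses until each span is a single elementary discourse unit (EDU).

```python
from typing import Callable, List, Optional, Tuple, Union

# A leaf is an EDU index; an internal node is a (left, right) pair.
Tree = Union[int, Tuple["Tree", "Tree"]]


def parse_top_down(
    edus: List[str],
    score_splits: Callable[[List[str], int, int], List[float]],
    lo: int = 0,
    hi: Optional[int] = None,
) -> Tree:
    """Greedily parse edus[lo:hi] into a binary discourse tree.

    `score_splits(edus, lo, hi)` is a hypothetical stand-in for the
    sequence-labelling model: it returns one score per boundary position
    lo+1 .. hi-1, and the argmax is taken as the split point. Restricting
    candidates to the current span is what shrinks the search space; taking
    a plain argmax is what removes the need for a decoder.
    """
    if hi is None:
        hi = len(edus)
    if hi - lo == 1:  # a single EDU: return a leaf
        return lo
    scores = score_splits(edus, lo, hi)  # one score per candidate boundary
    split = lo + 1 + max(range(len(scores)), key=scores.__getitem__)
    left = parse_top_down(edus, score_splits, lo, split)
    right = parse_top_down(edus, score_splits, split, hi)
    return (left, right)


if __name__ == "__main__":
    # Toy usage with a dummy scorer that prefers the middle boundary.
    edus = ["EDU0", "EDU1", "EDU2", "EDU3"]
    dummy = lambda e, lo, hi: [-abs((lo + hi) / 2 - k) for k in range(lo + 1, hi)]
    print(parse_top_down(edus, dummy))  # ((0, 1), (2, 3))
```

The sketch covers inference only; at training time, the dynamic oracle mentioned in the abstract would supply the best achievable split for the spans the model actually visits, including spans produced by its own earlier mistakes.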
Benchmarks
All scores are Standard Parseval; "n/a" marks a value not reported in the source.

| Benchmark | Methodology | Span | Nuclearity | Relation | Full |
|---|---|---|---|---|---|
| discourse-parsing-on-rst-dt | LSTM (dynamic) | 73.1 | 62.3 | 51.5 | 50.3 |
| discourse-parsing-on-rst-dt | Transformer (dynamic) | 70.2 | 60.1 | n/a | 49.2 |
| discourse-parsing-on-rst-dt | Transformer (static) | 70.6 | 59.9 | 50.6 | 49.0 |
| discourse-parsing-on-rst-dt | LSTM (static) | 72.7 | 61.7 | 50.5 | 49.4 |