Xiang Deng, Yu Su, Alyssa Lees, You Wu, Cong Yu, Huan Sun

Abstract
We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts. Unlike existing pre-training methods that only harvest learning signals from local contexts of naturally occurring texts, we propose a generalized notion of distant supervision to automatically connect multiple pieces of text and tables to create pre-training examples that require long-range reasoning. Different types of reasoning are simulated, including intersecting multiple pieces of evidence, bridging from one piece of evidence to another, and detecting unanswerable cases. We conduct a comprehensive evaluation on a variety of extractive question answering datasets ranging from single-hop to multi-hop and from text-only to table-only to hybrid that require various reasoning capabilities and show that ReasonBert achieves remarkable improvement over an array of strong baselines. Few-shot experiments further demonstrate that our pre-training method substantially improves sample efficiency.
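To make the distant-supervision idea concrete, the sketch below shows one plausible way such pre-training examples could be assembled: a query sentence linking two entities has its answer entity masked, and evidence pieces (sentences or linearized table rows) are gathered by shared entity mentions. This is a minimal illustration, not the authors' actual pipeline; the `Evidence` and `PretrainExample` structures, the `make_example` helper, and the `[QUESTION]` placeholder token are all assumed names for exposition.

```python
from dataclasses import dataclass
from typing import List, Optional

MASK = "[QUESTION]"  # placeholder for the masked answer span (assumed token name)

@dataclass
class Evidence:
    text: str            # a sentence or a linearized table row
    entities: List[str]  # entity mentions detected in this piece of evidence

@dataclass
class PretrainExample:
    query: str             # query sentence with the answer entity masked
    evidence: List[str]    # supporting contexts (text and/or tables)
    answer: Optional[str]  # span to extract, or None for unanswerable cases

def make_example(query_sent: str, anchor: str, answer: str,
                 corpus: List[Evidence],
                 drop_answer_evidence: bool = False) -> PretrainExample:
    """Build a distantly supervised example: mask `answer` in a sentence that
    links it to `anchor`, then gather evidence that mentions either entity."""
    query = query_sent.replace(answer, MASK)
    # Evidence that directly contains the answer (single-evidence case).
    direct = [e.text for e in corpus if answer in e.entities]
    # Evidence that shares only the anchor entity; pairing it with an
    # answer-bearing piece simulates bridging across two contexts.
    bridge = [e.text for e in corpus if anchor in e.entities and answer not in e.entities]
    if drop_answer_evidence:
        # Keep only evidence without the answer to simulate the unanswerable case.
        return PretrainExample(query, bridge, answer=None)
    return PretrainExample(query, direct + bridge, answer=answer)
```

Under this toy scheme, combining a bridge piece with an answer-bearing piece mimics the bridging reasoning described in the abstract, while dropping the answer-bearing evidence yields an unanswerable example.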
Benchmarks
| Benchmark | Model | F1 |
|---|---|---|
| Question Answering on TriviaQA | ReasonBERT-B | 37.2 |
| Question Answering on TriviaQA | ReasonBERT-R | 45.5 |
| Semantic Parsing on GraphQuestions | ReasonBERT-R | 41.3 |