Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering

Shuohang Wang; Mo Yu; Jing Jiang; Wei Zhang; Xiaoxiao Guo; Shiyu Chang; Zhiguo Wang; Tim Klinger; Gerald Tesauro; Murray Campbell

Abstract

A popular recent approach to answering open-domain questions is to first search for question-related passages and then apply reading comprehension models to extract answers. Existing methods usually extract answers from single passages independently, but some questions require combining evidence from multiple sources to answer correctly. In this paper, we propose two models that make use of multiple passages to generate their answers. Both use an answer re-ranking approach that reorders the answer candidates generated by an existing state-of-the-art QA model. We propose two methods, namely strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer. Our models achieve state-of-the-art results on three public open-domain QA datasets: Quasar-T, SearchQA, and the open-domain version of TriviaQA, with about 8 percentage points of improvement on the first two datasets.
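
The strength-based re-ranker described in the abstract promotes candidates that are supported by many passages rather than by a single high-scoring one. Below is a minimal Python sketch of that idea, assuming the upstream reader emits (answer, score) pairs per passage; the `strength_based_rerank` helper, the score format, and the sum-of-scores aggregation are illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

def strength_based_rerank(candidates):
    """Re-rank answer candidates by aggregating evidence strength across passages.

    `candidates` is a list of (answer_text, reader_score) pairs, one per
    passage in which the reader extracted that answer. Candidates whose
    normalized text recurs across passages accumulate strength.
    """
    strength = defaultdict(float)
    for answer, score in candidates:
        # Normalize lightly so surface variants of the same answer merge.
        key = answer.strip().lower()
        # Sum per-passage scores; counting occurrences is another variant.
        strength[key] += score
    return sorted(strength.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical reader output: the same answer extracted from several passages.
cands = [("Barack Obama", 0.42), ("barack obama", 0.31),
         ("George Bush", 0.55), ("Barack Obama", 0.28)]
print(strength_based_rerank(cands))
# "barack obama" wins (0.42 + 0.31 + 0.28 = 1.01) despite having a lower
# best single-passage score than "George Bush" (0.55).
```

The coverage-based re-ranker takes the complementary view: rather than counting repeated support, it checks how well the union of passages containing a candidate covers the different aspects of the question.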

Code Repositories

shuohangwang/mprc (official, PyTorch)

Benchmarks

Benchmark: Open-Domain Question Answering on Quasar-T
Methodology: Evidence Aggregation via R^3 Re-Ranking
Metrics: EM (Quasar-T): 42.3; F1 (Quasar-T): 49.6
