Making Neural QA as Simple as Possible but not Simpler

Dirk Weissenborn; Georg Wiese; Laura Seiffe

Abstract

The recent development of large-scale question answering (QA) datasets has triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that two ingredients are necessary for building a high-performing neural QA system: first, awareness of the question words while processing the context, and second, a composition function, such as a recurrent neural network, that goes beyond simple bag-of-words modeling. Our results show that FastQA, a system that meets these two requirements, achieves very competitive performance compared with existing models. We argue that this surprising finding puts the results of previous systems and the complexity of recent QA datasets into perspective.
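
To make the two ingredients concrete, here is a minimal PyTorch sketch. It is not the authors' implementation (their TensorFlow code lives in the jack repositories listed below); the class name, layer sizes, and the simplified answer layer are illustrative assumptions. The sketch appends a binary word-in-question feature to each context embedding (ingredient one), composes the context with a BiLSTM (ingredient two), and scores answer-span start and end positions with two linear layers. The paper's full model additionally uses a weighted word-in-question feature and a question-conditioned answer layer, both omitted here.

```python
import torch
import torch.nn as nn

class FastQASketch(nn.Module):
    """Illustrative sketch of the two ingredients named in the abstract."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Ingredient 2: a composition function beyond bag-of-words --
        # a bidirectional LSTM over the feature-augmented context.
        # The +1 input dimension carries the word-in-question flag.
        self.encoder = nn.LSTM(embed_dim + 1, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.start_scorer = nn.Linear(2 * hidden_dim, 1)
        self.end_scorer = nn.Linear(2 * hidden_dim, 1)

    def forward(self, context_ids, question_ids):
        # Ingredient 1: question awareness -- flag each context token
        # that also occurs in the question (a binary word-in-question
        # feature; the paper additionally uses a weighted variant).
        wiq = (context_ids.unsqueeze(2) == question_ids.unsqueeze(1)) \
            .any(dim=2).float().unsqueeze(-1)                  # (B, T, 1)
        x = torch.cat([self.embed(context_ids), wiq], dim=-1)  # (B, T, E+1)
        h, _ = self.encoder(x)                                 # (B, T, 2H)
        # Token-level logits for the answer-span start and end positions.
        return self.start_scorer(h).squeeze(-1), self.end_scorer(h).squeeze(-1)

# Toy usage: one context of 50 tokens, one question of 10 tokens.
model = FastQASketch(vocab_size=10000)
ctx = torch.randint(0, 10000, (1, 50))
q = torch.randint(0, 10000, (1, 10))
start_logits, end_logits = model(ctx, q)  # each of shape (1, 50)
```

With the two ingredients in place, extracting an answer reduces to picking the highest-scoring start and end positions, which is all the extractive task requires; this simplicity is exactly the point the paper makes against more elaborate architectures.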

Code Repositories

uclnlp/jack (TensorFlow)
newmast/QA-Deep-Learning
uclmr/jack (TensorFlow)

Benchmarks

Benchmark                           Methodology               EM      F1
question-answering-on-newsqa        FastQAExt                 43.7    56.1
question-answering-on-squad11       FastQAExt                 70.849  78.857
question-answering-on-squad11       FastQA                    68.436  77.070
question-answering-on-squad11-dev   FastQAExt (beam-size 5)   70.3    78.5
