Retrospective Reader for Machine Reading Comprehension

Zhuosheng Zhang Junjie Yang Hai Zhao

Abstract

Machine reading comprehension (MRC) is an AI challenge that requires a machine to determine the correct answers to questions based on a given passage. MRC systems must not only answer questions when necessary but also distinguish when no answer is available according to the given passage and then tactfully abstain from answering. When unanswerable questions are involved in the MRC task, an essential verification module, called a verifier, is required in addition to the encoder, though the latest practice in MRC modeling still benefits most from adopting well pre-trained language models as the encoder block while focusing only on the "reading". This paper devotes itself to exploring better verifier design for the MRC task with unanswerable questions. Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading, which briefly investigates the overall interactions of passage and question and yields an initial judgment; 2) intensive reading, which verifies the answer and gives the final prediction. The proposed reader is evaluated on two benchmark MRC challenge datasets, SQuAD2.0 and NewsQA, achieving new state-of-the-art results. Significance tests show that our model is significantly better than the strong ELECTRA and ALBERT baselines. A series of analyses is also conducted to interpret the effectiveness of the proposed reader.
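The two-stage strategy in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the score conventions (higher score = more likely unanswerable), and the simple additive combination with a threshold are all assumptions made for clarity.

```python
# Hypothetical sketch of a two-stage read-then-verify decision:
# a sketchy-reading pass gives an initial answerability score, an
# intensive-reading pass compares a "no answer" score against the
# best candidate span, and the two verdicts are combined.

def sketchy_verdict(ext_score: float) -> float:
    """Sketchy reading: an initial judgment of unanswerability
    (higher means more likely unanswerable)."""
    return ext_score

def intensive_verdict(null_score: float, best_span_score: float) -> float:
    """Intensive reading: verify the answer by comparing the
    'no answer' score with the best candidate span's score."""
    return null_score - best_span_score

def final_decision(ext_score: float, null_score: float,
                   best_span_score: float, threshold: float = 0.0) -> str:
    """Combine both verdicts; abstain only when the combined
    unanswerability score exceeds the threshold."""
    combined = sketchy_verdict(ext_score) + intensive_verdict(
        null_score, best_span_score)
    return "no answer" if combined > threshold else "answer"

# A confident span (1.2) outweighs weak unanswerability signals:
print(final_decision(ext_score=-0.8, null_score=0.1, best_span_score=1.2))
# -> answer
```

The threshold would in practice be tuned on a development set, which is how null-answer thresholds are commonly calibrated for SQuAD2.0-style systems.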

Code Repositories

cooelf/AwesomeMRC (official, PyTorch)

Benchmarks

Benchmark                        Methodology                                EM       F1
question-answering-on-squad20    Retro-Reader (ensemble)                    90.578   92.978
question-answering-on-squad20    Retro-Reader on ELECTRA (single model)     89.562   92.052
question-answering-on-squad20    Retro-Reader on ALBERT (ensemble)          90.115   92.580
question-answering-on-squad20    Retro-Reader on ALBERT (single model)      88.107   91.419
