| Model | EM | F1 | Paper / Source | Code |
| ----- | -- | -- | -------------- | ---- |
| BERT-Large-uncased-PruneOFA (90% unstructured sparse, QAT Int8) | 83.22 | 90.02 | Prune Once for All: Sparse Pre-Trained Language Models | |
| BERT-Base-uncased-PruneOFA (85% unstructured sparse) | 81.1 | 88.42 | Prune Once for All: Sparse Pre-Trained Language Models | |
| BiDAF + Self Attention + ELMo | - | 85.6 | Deep contextualized word representations | |
| DistilBERT-uncased-PruneOFA (90% unstructured sparse, QAT Int8) | 75.62 | 83.87 | Prune Once for All: Sparse Pre-Trained Language Models | |
| Match-LSTM with Bi-Ans-Ptr (Boundary + Search + b) | 64.1 | 64.7 | Machine Comprehension Using Match-LSTM and Answer Pointer | |