Question Answering on Quasar-T
Evaluation Metric
EM
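EM (exact match) scores a prediction as 1 if, after simple normalization, it matches any gold answer, and 0 otherwise; the benchmark score is the mean over all questions. Below is a minimal sketch of the standard SQuAD-style EM computation (an assumption for illustration; the exact normalization used by each paper may differ):

```python
import re
import string


def normalize_answer(s: str) -> str:
    """Lower-case, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())


def exact_match(prediction: str, gold_answers: list[str]) -> float:
    """1.0 if the normalized prediction equals any normalized gold answer."""
    return float(any(normalize_answer(prediction) == normalize_answer(g)
                     for g in gold_answers))


# Dataset-level EM is the mean of exact_match over all questions, e.g.:
# exact_match("the Eiffel Tower", ["Eiffel Tower"])  -> 1.0
```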
Evaluation Results
Performance of the models on this benchmark:
Model | EM | Paper Title | Repository |
---|---|---|---|
Multi-passage BERT | 51.1 | Multi-passage BERT: A Globally Normalized BERT Model for Open-domain Question Answering | - |
Sparse Attention | 52.1 | Generating Long Sequences with Sparse Transformers | - |
Locality-Sensitive Hashing | 53.2 | Reformer: The Efficient Transformer | - |
DECAPROP | 38.6 | Densely Connected Attention Propagation for Reading Comprehension | - |
Denoising QA | 42.2 | Denoising Distantly Supervised Open-Domain Question Answering | - |
Cluster-Former (#C=512) | 54.0 | Cluster-Former: Clustering-based Sparse Transformer for Long-Range Dependency Encoding | - |
DrQA | 37.7 | Reading Wikipedia to Answer Open-Domain Questions | - |