Open Domain Question Answering on ELI5
Metrics
Rouge-1
Rouge-2
Rouge-L
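
The three metrics above score a generated answer against a reference: ROUGE-1 and ROUGE-2 by unigram and bigram overlap, ROUGE-L by longest common subsequence. A minimal sketch of how they are computed (not the official ROUGE toolkit; function names are illustrative, and ROUGE-L here uses the balanced F1 rather than the recall-weighted F-measure of the original paper):

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_f1(candidate, reference, n):
    # Clipped n-gram overlap: each n-gram counts at most as often
    # as it appears in the reference.
    cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(candidate), lcs / len(reference)
    return 2 * precision * recall / (precision + recall)

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
print(rouge_n_f1(cand, ref, 1))  # unigram overlap F1
print(rouge_n_f1(cand, ref, 2))  # bigram overlap F1
print(rouge_l_f1(cand, ref))     # LCS-based F1
```

Leaderboard numbers are reported as percentages (the F1 above times 100), typically averaged over the test set against the reference answers.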
Results
Performance results of various models on this benchmark
Model Name | Rouge-1 | Rouge-2 | Rouge-L | Paper Title | Repository |
---|---|---|---|---|---|
QG | 29.15 | 10.36 | 26.40 | Closed-book Question Generation via Contrastive Learning | |
E-MCA | 30.0 | 5.8 | 24.0 | Using Local Knowledge Graph Construction to Scale Seq2Seq Models to Multi-Document Inputs | |
Multi-Interleave | 23.32 | 4.79 | 14.63 | Improving Conditioning in Context-Aware Sequence to Sequence Models | - |
Fourier Transformer | - | - | 26.9 | Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator | |
Transformer Multitask + LayerDrop | 29.4 | 5.5 | 23.4 | Reducing Transformer Depth on Demand with Structured Dropout | |
BART | 30.6 | 6.2 | 24.3 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension | |