| Model | MAP | MRR | Paper | Code |
| --- | --- | --- | --- | --- |
| Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection | - |
| RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference | - |
| TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers | - |