Question Answering on TrecQA
Evaluation Metrics
MAP (Mean Average Precision)
MRR (Mean Reciprocal Rank)
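Both metrics are computed per question over the model's ranked list of candidate answer sentences and then averaged across questions: MAP averages the precision at the rank of each correct answer, while MRR uses only the rank of the first correct answer. The sketch below shows that computation from binary relevance labels already sorted by model score; the function and variable names are illustrative, and published TrecQA numbers are usually produced with the official trec_eval tool (often on the "clean" test split, which drops questions whose candidates are all correct or all incorrect), so this is a minimal illustration rather than the exact evaluation pipeline.

```python
# Minimal sketch: MAP and MRR for answer sentence selection.
# Assumption: for each question we have binary labels (1 = correct
# answer sentence) ordered by descending model score.
from typing import Dict, List, Tuple


def average_precision(labels_in_rank_order: List[int]) -> float:
    """Average precision for one question (labels sorted by model score)."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(labels_in_rank_order, start=1):
        if label == 1:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / hits if hits else 0.0


def reciprocal_rank(labels_in_rank_order: List[int]) -> float:
    """Reciprocal rank of the first correct answer for one question."""
    for rank, label in enumerate(labels_in_rank_order, start=1):
        if label == 1:
            return 1.0 / rank
    return 0.0


def map_mrr(ranked_labels: Dict[str, List[int]]) -> Tuple[float, float]:
    """Mean Average Precision and Mean Reciprocal Rank over all questions."""
    n = len(ranked_labels)
    mean_ap = sum(average_precision(v) for v in ranked_labels.values()) / n
    mean_rr = sum(reciprocal_rank(v) for v in ranked_labels.values()) / n
    return mean_ap, mean_rr


# Toy example: two questions with candidates already sorted by model score.
example = {
    "q1": [0, 1, 0, 1],  # correct answers at ranks 2 and 4
    "q2": [1, 0, 0],     # correct answer at rank 1
}
print(map_mrr(example))  # (0.75, 0.75)
```

In the toy example, q1 has AP = (1/2 + 2/4) / 2 = 0.5 and RR = 1/2, while q2 has AP = RR = 1, giving MAP = MRR = 0.75.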
Results
Performance of each model on this benchmark.
Model | MAP | MRR | Paper Title
TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers
TANDA-RoBERTa (ASNQ, TREC-QA) | 0.943 | 0.974 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection
DeBERTa-V3-Large + SSP | 0.923 | 0.946 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection
Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection
RLAS-BIABC | 0.913 | 0.998 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm
RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference
RoBERTa-Base + PSD | 0.903 | 0.951 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection
Comp-Clip + LM + LC | 0.868 | 0.928 | A Compare-Aggregate Model with Latent Clustering for Answer Selection
NLP-Capsule | 0.7773 | 0.7416 | Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications
HyperQA | 0.770 | 0.825 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering
PWIN | 0.7588 | 0.8219 | -
aNMM | 0.750 | 0.811 | aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model
CNN | 0.711 | 0.785 | Deep Learning for Answer Sentence Selection
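The systems in this table are answer sentence selection models: given a question and a pool of candidate sentences, each (question, candidate) pair is scored and the resulting ranking is evaluated with MAP and MRR as above. The sketch below shows that scoring pattern with a generic off-the-shelf cross-encoder from the Hugging Face hub; the checkpoint name and example data are assumptions for illustration and do not correspond to any specific entry in the leaderboard.

```python
# Illustrative sketch: ranking candidate answer sentences for one question
# with a generic cross-encoder (assumed checkpoint, not a leaderboard model).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

question = "Who composed the opera Carmen?"
candidates = [
    "Georges Bizet composed the opera Carmen.",
    "Carmen is set in Seville, Spain.",
    "The opera premiered in Paris in 1875.",
]

# Score every (question, candidate) pair in one batch.
inputs = tokenizer(
    [question] * len(candidates), candidates,
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Sort candidates by score; this ranked list is what MAP/MRR are computed over.
for sentence, score in sorted(zip(candidates, scores.tolist()),
                              key=lambda pair: pair[1], reverse=True):
    print(f"{score:7.3f}  {sentence}")
```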