Machine Translation on IWSLT2014 German-English
Evaluation metric
BLEU score
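For reference, a corpus-level BLEU score can be computed with the sacrebleu library, as in the minimal sketch below; the two example sentences are placeholders, not IWSLT data. Note that results on this benchmark are traditionally reported as tokenized, lowercased BLEU (e.g., via fairseq's scoring scripts), so scores computed under other settings are not necessarily directly comparable to the table.

```python
# Minimal sketch: corpus-level BLEU with sacrebleu.
# The sentences are placeholder examples, not IWSLT14 data.
import sacrebleu

hypotheses = ["the cat sat on the mat ."]    # one system output per line
references = [["the cat sat on the mat ."]]  # one or more reference streams

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")  # 100.00 for an exact match
```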
Evaluation results
Performance of each model on this benchmark (20 of the 34 leaderboard entries shown):
| Model | BLEU score | Paper Title | Repository |
|---|---|---|---|
| PiNMT | 40.43 | Integrating Pre-trained Language Model into Neural Machine Translation | - |
| BiBERT | 38.61 | BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation | |
| Bi-SimCut | 38.37 | Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation | |
| Cutoff + Relaxed Attention + LM | 37.96 | Relaxed Attention for Transformer Models | |
| DRDA | 37.95 | Deterministic Reversible Data Augmentation for Neural Machine Translation | |
| Transformer + R-Drop + Cutoff | 37.90 | R-Drop: Regularized Dropout for Neural Networks | |
| SimCut | 37.81 | Bi-SimCut: A Simple Strategy for Boosting Neural Machine Translation | |
| Cutoff+Knee | 37.78 | Wide-minima Density Hypothesis and the Explore-Exploit Learning Rate Schedule | |
| Cutoff | 37.6 | A Simple but Tough-to-Beat Data Augmentation Approach for Natural Language Understanding and Generation | |
| CipherDAug | 37.53 | CipherDAug: Ciphertext based Data Augmentation for Neural Machine Translation | |
| Transformer + R-Drop | 37.25 | R-Drop: Regularized Dropout for Neural Networks | |
| Data Diversification | 37.2 | Data Diversification: A Simple Strategy For Neural Machine Translation | |
| UniDrop | 36.88 | UniDrop: A Simple yet Effective Technique to Improve Transformer without Extra Cost | - |
| MixedRepresentations | 36.41 | Sequence Generation with Mixed Representations | - |
| Mask Attention Network (small) | 36.3 | Mask Attention Networks: Rethinking and Strengthen Transformer | |
| MUSE (Parallel Multi-scale Attention) | 36.3 | MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning | |
| MAT | 36.22 | Multi-branch Attentive Transformer | |
| Transformer+Rep(Sim)+WDrop | 36.22 | Rethinking Perturbations in Encoder-Decoders for Fast Training | |
| TransformerBase + AutoDropout | 35.8 | AutoDropout: Learning Dropout Patterns to Regularize Deep Networks | |
| Local Joint Self-attention | 35.7 | Joint Source-Target Self Attention with Locality Constraints | |
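Two of the entries above (Transformer + R-Drop and Transformer + R-Drop + Cutoff) use R-Drop, which runs each batch through the model twice so that dropout produces two different predictions, then penalizes their divergence. The PyTorch sketch below is an illustrative reading of that objective, not the authors' implementation; `model`, the batch tensors, and the `alpha` weight are assumed placeholders.

```python
# Illustrative R-Drop objective: cross-entropy plus a symmetric KL
# penalty between two dropout-perturbed forward passes.
import torch.nn.functional as F

def r_drop_loss(model, src, tgt_in, tgt_out, alpha=5.0):
    # Two passes over the same batch; dropout makes the logits differ.
    logits1 = model(src, tgt_in)  # (batch, seq, vocab)
    logits2 = model(src, tgt_in)

    # Ordinary token-level cross-entropy, averaged over both passes.
    ce = 0.5 * (F.cross_entropy(logits1.transpose(1, 2), tgt_out)
                + F.cross_entropy(logits2.transpose(1, 2), tgt_out))

    # Symmetric KL between the two predictive distributions.
    logp1 = F.log_softmax(logits1, dim=-1)
    logp2 = F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(logp1, logp2, log_target=True, reduction="batchmean")
                + F.kl_div(logp2, logp1, log_target=True, reduction="batchmean"))

    # alpha weights the consistency term; treat it as a tunable hyperparameter.
    return ce + alpha * kl
```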