Machine Translation on WMT 2016 English-German
Evaluation metric: BLEU score

Results: performance of each model on this benchmark
| Model | BLEU score | Paper Title | Repository |
|---|---|---|---|
| MADL | 40.68 | Multi-Agent Dual Learning | - |
| Attentional encoder-decoder + BPE | 34.2 | Edinburgh Neural Machine Translation Systems for WMT 16 | - |
| Linguistic Input Features | 28.4 | Linguistic Input Features Improve Neural Machine Translation | - |
| DeLighT | 28.0 | DeLighT: Deep and Light-weight Transformer | - |
| FLAN 137B (zero-shot) | 27.0 | Finetuned Language Models Are Zero-Shot Learners | - |
| Transformer | 26.7 | On the adequacy of untuned warmup for adaptive optimization | - |
| FLAN 137B (few-shot, k=11) | 26.1 | Finetuned Language Models Are Zero-Shot Learners | - |
| BiRNN + GCN (Syn + Sem) | 24.9 | Exploiting Semantics in Neural Machine Translation with Graph Convolutional Networks | - |
| SMT + iterative backtranslation (unsupervised) | 18.23 | Unsupervised Statistical Machine Translation | - |
| Unsupervised NMT + weight-sharing | 10.86 | Unsupervised Neural Machine Translation with Weight Sharing | - |
| Unsupervised S2S with attention | 9.64 | Unsupervised Machine Translation Using Monolingual Corpora Only | - |
| Exploiting Mono at Scale (single) | - | Exploiting Monolingual Data at Scale for Neural Machine Translation | - |
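The scores above are corpus-level BLEU. As a rough illustration of how the metric is computed, here is a minimal pure-Python sketch of corpus BLEU (single reference per sentence, whitespace tokenization, no smoothing); the papers listed each use their own evaluation setup (e.g. specific tokenization or sacreBLEU configurations), so this sketch will not reproduce the table's numbers exactly.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Counter of n-grams (as tuples) in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus-level BLEU with clipped n-gram precision and brevity penalty.

    hypotheses, references: lists of sentence strings (one reference each).
    Returns a score in [0, 100].
    """
    matches = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # candidate n-gram counts per order
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len += len(h)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            hc, rc = ngrams(h, n), ngrams(r, n)
            # clip each n-gram's count by its count in the reference
            matches[n - 1] += sum(min(c, rc[g]) for g, c in hc.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if min(matches) == 0:
        return 0.0  # some n-gram order has no matches at all
    # geometric mean of the n-gram precisions
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    # brevity penalty: penalize hypotheses shorter than the reference
    bp = 1.0 if hyp_len > ref_len else math.exp(1 - ref_len / max(hyp_len, 1))
    return 100 * bp * math.exp(log_prec)
```

A perfect match scores 100; any missing n-gram order (common for short or very poor hypotheses, since there is no smoothing here) collapses the score to 0, which is one reason production evaluations use a standardized tool rather than an ad-hoc implementation.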