HyperAI超神经
Molecule Captioning on ChEBI-20
Evaluation metrics: BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
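The BLEU-2 column scores a generated caption by clipped unigram and bigram precision against the reference caption, with a brevity penalty. As an illustrative sketch only (the leaderboard's numbers are presumably computed corpus-level, typically with NLTK's `corpus_bleu`; this is a minimal sentence-level variant):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(candidate, reference):
    """Sentence-level BLEU-2: geometric mean of clipped unigram and
    bigram precision, multiplied by a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        precisions.append(overlap / max(sum(cand_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)
```

A perfect match scores 1.0; leaderboard values are reported on a 0-100 scale, i.e. multiplied by 100.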
Evaluation Results

Performance of each model on this benchmark:
Higher is better for all metrics; "-" marks values not reported. (The Repository column is omitted: no entry lists one.)

| Model | BLEU-2 | BLEU-4 | METEOR | ROUGE-1 | ROUGE-2 | ROUGE-L | Text2Mol | Paper |
|---|---|---|---|---|---|---|---|---|
| Mol-LLM (Mistral-Instruct-v0.2) | 73.2 | - | - | - | - | - | - | - |
| Mol-LLM (LLaMA2-Chat) | 72.7 | - | - | - | - | - | - | - |
| MolReFlect | 67.6 | 60.8 | 68.0 | 70.3 | 57.1 | 64.4 | - | MolReFlect: Towards Fine-grained In-Context Alignment between Molecules and Texts |
| BioT5+ | 66.6 | 59.1 | 68.1 | 71.0 | 58.4 | 65.0 | - | BioT5+: Towards Generalized Biological Understanding with IUPAC Integration and Multi-task Tuning |
| BioT5 | 63.5 | 55.6 | 65.6 | 69.2 | 55.9 | 63.3 | 60.3 | BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations |
| Text+Chem T5-augm-Base | 62.5 | 54.2 | 64.8 | 68.2 | 54.3 | 62.2 | - | Unifying Molecular and Textual Representations via Multi-task Language Modelling |
| MolCA, Galac1.3B | 62.0 | 53.1 | 65.1 | 68.1 | 53.7 | 61.8 | - | MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter |
| MolCA, Galac125M | 61.6 | 52.9 | 63.9 | 67.4 | 53.3 | 61.5 | - | MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter |
| Mol2Lang-VLM | 61.2 | 52.7 | 63.3 | 67.4 | 53.2 | 61.4 | 59.8 | Mol2Lang-VLM: Vision- and Text-Guided Generative Pre-trained Language Models for Advancing Molecule Captioning through Multimodal Fusion |
| MolReGPT (GPT-4-0314) | 60.7 | 52.5 | 61.0 | 63.4 | 47.6 | 56.2 | 58.5 | Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective |
| LaMolT5-Large | 60.2 | 52.1 | 63.4 | 65.5 | 51.2 | 59.8 | 59.7 | Automatic Annotation Augmentation Boosts Translation between Molecules and Natural Language |
| MoMu+MolT5-Large | 59.9 | 51.5 | 59.7 | - | - | - | 58.2 | A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language |
| PEIT-GEN | 59.8 | 53.4 | 67.6 | 70.0 | 58.2 | 65.3 | - | Property Enhanced Instruction Tuning for Multi-task Molecule Generation with Large Language Models |
| MolT5-Large | 59.4 | 50.8 | 61.4 | 65.4 | 51.0 | 59.4 | 58.2 | Translation between Molecules and Natural Language |
| MolXPT | 59.4 | 50.5 | 62.6 | 66.0 | 51.1 | 59.7 | 59.4 | MolXPT: Wrapping Molecules with Text for Generative Pre-training |
| Mol-LLM (SELFIES) | 58.7 | 51.5 | 61.7 | 62.7 | 48.7 | 57.1 | - | Mol-LLM: Multimodal Generalist Molecular LLM with Improved Graph Utilization |
| MolFM-Base | 58.5 | 49.8 | 60.7 | 65.3 | 50.8 | 59.4 | 57.6 | MolFM: A Multimodal Molecular Foundation Model |
| Text+Chem T5-Base | 58.0 | 49.0 | 60.4 | 64.7 | 49.8 | 58.6 | - | Unifying Molecular and Textual Representations via Multi-task Language Modelling |
| LaMolT5-Base | 57.4 | 48.5 | 59.6 | 63.4 | 47.8 | 56.4 | 59.9 | Automatic Annotation Augmentation Boosts Translation between Molecules and Natural Language |
| MolReGPT (GPT-3.5-turbo) | 56.5 | 48.2 | 58.5 | 62.3 | 45.0 | 54.3 | 56.0 | Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective |
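The ROUGE-L column in the table above measures overlap via the longest common subsequence (LCS) between the generated and reference captions. A minimal pure-Python sketch (official ROUGE tooling adds stemming and aggregation; `beta=1.2`, weighting recall over precision, follows a common implementation default and is an assumption here):

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists,
    computed by standard dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(candidate, reference, beta=1.2):
    """ROUGE-L F-measure: combines LCS-based precision (vs. candidate
    length) and recall (vs. reference length)."""
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_len(cand, ref)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(cand), lcs / len(ref)
    return (1 + beta ** 2) * prec * rec / (rec + beta ** 2 * prec)
```

Unlike BLEU's contiguous n-grams, the LCS rewards in-order matches even when they are not adjacent, which makes ROUGE-L less sensitive to inserted words.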
Showing 20 of 32 entries.