
BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine

Yizhen Luo, Jiahuan Zhang, Siqi Fan, Kai Yang, Yushuai Wu, Mu Qiao, Zaiqing Nie


Abstract

Foundation models (FMs) have exhibited remarkable performance across a wide range of downstream tasks in many domains. Nevertheless, general-purpose FMs often face challenges when confronted with domain-specific problems, due to their limited access to proprietary training data in a particular domain. In biomedicine, there are various biological modalities, such as molecules, proteins, and cells, which are encoded by the language of life and exhibit significant modality gaps with human natural language. In this paper, we introduce BioMedGPT, an open multimodal generative pre-trained transformer (GPT) for biomedicine, to bridge the gap between the language of life and human natural language. BioMedGPT allows users to easily "communicate" with diverse biological modalities through free text, and is the first system of its kind. BioMedGPT aligns different biological modalities with natural language via a large generative language model, namely, BioMedGPT-LM. We publish BioMedGPT-10B, which unifies the feature spaces of molecules, proteins, and natural language via encoding and alignment. Through fine-tuning, BioMedGPT-10B matches or outperforms human performance and significantly larger general-purpose foundation models on biomedical QA tasks. It also demonstrates promising performance on molecule QA and protein QA, which could greatly accelerate the discovery of new drugs and therapeutic targets. In addition, BioMedGPT-LM-7B is the first large generative language model in the biomedical domain based on Llama2, and is therefore commercially friendly. Both BioMedGPT-10B and BioMedGPT-LM-7B are open-sourced to the research community. We also publish two datasets meticulously curated for multimodal alignment: PubChemQA and UniProtQA. All models, code, and datasets are available at https://github.com/PharMolix/OpenBioMed.
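As a rough illustration of the encode-and-align design described above, the sketch below shows prefix-style multimodal alignment in PyTorch (the official repository is PyTorch-based): modality encoders produce feature vectors that are linearly projected into the LM's embedding space and prepended to the text embeddings as soft tokens. All dimensions, class names, and the simple linear projector are illustrative assumptions, not the actual BioMedGPT-10B implementation.

    import torch
    import torch.nn as nn

    class ModalityProjector(nn.Module):
        # Maps a modality encoder's output into the LM embedding space.
        def __init__(self, enc_dim: int, lm_dim: int):
            super().__init__()
            self.proj = nn.Linear(enc_dim, lm_dim)

        def forward(self, feats: torch.Tensor) -> torch.Tensor:
            return self.proj(feats)

    class MultimodalPrefixLM(nn.Module):
        # Prepends projected molecule/protein features as soft-prompt tokens
        # in front of the text token embeddings before running the LM decoder.
        # Dimensions below are illustrative assumptions, not BioMedGPT's.
        def __init__(self, lm_embed: nn.Embedding,
                     mol_dim=300, prot_dim=1280, lm_dim=4096):
            super().__init__()
            self.lm_embed = lm_embed  # stand-in for the LM's token embedding table
            self.mol_proj = ModalityProjector(mol_dim, lm_dim)
            self.prot_proj = ModalityProjector(prot_dim, lm_dim)

        def forward(self, mol_feats, prot_feats, input_ids):
            text_emb = self.lm_embed(input_ids)               # (B, T, lm_dim)
            mol_tok = self.mol_proj(mol_feats).unsqueeze(1)   # (B, 1, lm_dim)
            prot_tok = self.prot_proj(prot_feats).unsqueeze(1)
            # The concatenated sequence would be fed to the LM decoder.
            return torch.cat([mol_tok, prot_tok, text_emb], dim=1)

    # Toy usage with random tensors standing in for encoder outputs.
    model = MultimodalPrefixLM(nn.Embedding(32000, 4096))
    out = model(torch.randn(2, 300), torch.randn(2, 1280),
                torch.randint(0, 32000, (2, 16)))
    print(out.shape)  # torch.Size([2, 18, 4096])

In the full model, the concatenated sequence would continue through the BioMedGPT-LM decoder and be trained with the usual next-token objective on the alignment datasets.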

Code Repositories

pharmolix/openbiomed (official, PyTorch)

Benchmarks

Benchmark                                      Methodology                Metrics
few-shot-learning-on-medconceptsqa             PharMolix/BioMedGPT-LM-7B  Accuracy: 24.924
multiple-choice-question-answering-mcqa-on-21  BioMedGPT-10B              Test Set (Acc-%): 0.514
multiple-choice-question-answering-mcqa-on-25  BioMedGPT-LM-7B            Accuracy: 51.1
question-answering-on-medqa-usmle              BioMedGPT-10B              Accuracy: 50.4
question-answering-on-pubchemqa                BioMedGPT-10B              BLEU-2: 0.234, BLEU-4: 0.141, METEOR: 0.308, ROUGE-1: 0.386, ROUGE-2: 0.206, ROUGE-L: 0.332
question-answering-on-pubmedqa                 BioMedGPT-10B              Accuracy: 76.1
question-answering-on-uniprotqa                BioMedGPT-10B              BLEU-2: 0.571, BLEU-4: 0.535, METEOR: 0.754, ROUGE-1: 0.743, ROUGE-2: 0.759, ROUGE-L: 0.622
zero-shot-learning-on-medconceptsqa            PharMolix/BioMedGPT-LM-7B  Accuracy: 24.747
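The BLEU, METEOR, and ROUGE numbers reported for PubChemQA and UniProtQA are standard text-generation metrics and can be computed with off-the-shelf tooling. Below is a minimal sketch using nltk and rouge-score, with a made-up reference/prediction pair; the exact tokenization and smoothing settings behind the reported numbers are not specified on this page, so treat it as illustrative.

    # pip install nltk rouge-score
    import nltk
    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
    from nltk.translate.meteor_score import meteor_score
    from rouge_score import rouge_scorer

    nltk.download("wordnet", quiet=True)  # required by METEOR

    reference = "the molecule is a monocarboxylic acid anion".split()
    prediction = "the molecule is a carboxylic acid anion".split()

    smooth = SmoothingFunction().method1  # avoids zero scores on short texts
    bleu2 = sentence_bleu([reference], prediction, weights=(0.5, 0.5),
                          smoothing_function=smooth)
    bleu4 = sentence_bleu([reference], prediction,
                          weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=smooth)
    meteor = meteor_score([reference], prediction)

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                      use_stemmer=True)
    rouge = scorer.score(" ".join(reference), " ".join(prediction))

    print(f"BLEU-2: {bleu2:.3f}  BLEU-4: {bleu4:.3f}  METEOR: {meteor:.3f}")
    print({k: round(v.fmeasure, 3) for k, v in rouge.items()})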
