
Abstract
We introduce CogVLM, a powerful open-source visual language foundation model. Unlike the popular shallow alignment method, which maps image features into the input space of the language model, CogVLM bridges the gap between the frozen pretrained language model and the image encoder with a trainable visual expert module in the attention and FFN layers. As a result, CogVLM enables deep fusion of vision and language features without sacrificing any performance on NLP tasks. CogVLM-17B achieves state-of-the-art performance on 10 classic cross-modal benchmarks, including NoCaps, Flickr30k captioning, RefCOCO, RefCOCO+, RefCOCOg, Visual7W, GQA, ScienceQA, VizWiz VQA and TDIUC, and ranks 2nd on VQAv2, OKVQA, TextVQA, COCO captioning, etc., surpassing or matching PaLI-X 55B. Code and checkpoints are available at https://github.com/THUDM/CogVLM.
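The visual expert described in the abstract can be pictured as per-token routing of the transformer weights: image tokens pass through newly trained projections while text tokens keep using the frozen language-model projections, so the two modalities are fused inside every layer rather than only at the input. Below is a minimal PyTorch sketch of that idea for the attention path; the class and parameter names are illustrative and this is not the official CogVLM implementation.

```python
# Minimal sketch of a "visual expert" attention layer: image tokens get their
# own trainable QKV/output projections, text tokens use the frozen LM ones.
# Illustrative only; not the official CogVLM code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualExpertAttention(nn.Module):
    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        # Text path: in practice these would be loaded from the pretrained LM
        # (loading is elided here) and kept frozen.
        self.text_qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.text_out = nn.Linear(hidden_size, hidden_size)
        for p in (*self.text_qkv.parameters(), *self.text_out.parameters()):
            p.requires_grad = False
        # Vision path: trainable visual-expert projections for image tokens.
        self.vision_qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.vision_out = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor, vision_mask: torch.Tensor):
        # hidden_states: (batch, seq, hidden); vision_mask: (batch, seq) bool,
        # True where the token comes from the image encoder.
        b, s, h = hidden_states.shape
        mask = vision_mask.unsqueeze(-1)
        # Route each token through the projection matching its modality.
        qkv = torch.where(mask,
                          self.vision_qkv(hidden_states),
                          self.text_qkv(hidden_states))
        q, k, v = qkv.chunk(3, dim=-1)

        def split_heads(x: torch.Tensor) -> torch.Tensor:
            return x.reshape(b, s, self.num_heads, self.head_dim).transpose(1, 2)

        attn = F.scaled_dot_product_attention(
            split_heads(q), split_heads(k), split_heads(v), is_causal=True
        )
        attn = attn.transpose(1, 2).reshape(b, s, h)
        # Output projection is likewise routed per token type.
        # The FFN path would be routed analogously with a separate vision FFN.
        return torch.where(mask, self.vision_out(attn), self.text_out(attn))
```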
Code Repositories
- https://github.com/THUDM/CogVLM
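For trying the released checkpoints, the sketch below follows the common Hugging Face `trust_remote_code` pattern. The model ID `THUDM/cogvlm-chat-hf`, the Vicuna tokenizer, and the `build_conversation_input_ids` helper are assumptions based on the project's Hugging Face model card; treat the linked repository as the authoritative usage.

```python
# Hedged inference sketch: model ID, tokenizer, and the
# build_conversation_input_ids helper are assumed from the project's
# Hugging Face model card; see https://github.com/THUDM/CogVLM for the
# authoritative instructions.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")
model = AutoModelForCausalLM.from_pretrained(
    "THUDM/cogvlm-chat-hf",          # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).to("cuda").eval()

image = Image.open("example.jpg").convert("RGB")  # any local test image
inputs = model.build_conversation_input_ids(      # helper from the remote code
    tokenizer, query="Describe this image.", images=[image]
)
inputs = {
    "input_ids": inputs["input_ids"].unsqueeze(0).to("cuda"),
    "token_type_ids": inputs["token_type_ids"].unsqueeze(0).to("cuda"),
    "attention_mask": inputs["attention_mask"].unsqueeze(0).to("cuda"),
    "images": [[inputs["images"][0].to("cuda").to(torch.bfloat16)]],
}

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Strip the prompt tokens and decode only the generated answer.
    print(tokenizer.decode(outputs[0, inputs["input_ids"].shape[1]:]))
```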
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| FS-MEVQA on SME | GLM-4V | #Learning Samples (N): 16; ACC: 34.23; BLEU-4: 14.45; CIDEr: 127.37; Detection: 0.89; METEOR: 17.53; ROUGE-L: 24.28; SPICE: 17.70 |
| Long-Context Understanding on MMNeedle | CogVLM2-Llama-3 | Exact accuracy, 1 image: 7.3 (2×2 stitching), 0.9 (4×4), 0.1 (8×8); 10 images: 0 (1×1), 0 (2×2), 0 (4×4), 0 (8×8) |
| Long-Context Understanding on MMNeedle | CogVLM-17B | Exact accuracy, 1 image: 0 (2×2 stitching), 0.1 (4×4), 0.3 (8×8); 10 images: 0 (1×1), 0 (2×2), 0 (4×4), 0 (8×8) |
| Visual Question Answering on MM-Vet | GLM4 Vision | GPT-4 score: 63.9 |
| Visual Question Answering on MM-Vet | CogVLM (Vicuna-7B) | GPT-4 score: 52.8; Params: 17B |
| Visual Question Answering on MM-Vet v2 | CogVLM-Chat | GPT-4 score: 45.1±0.2 |
| Visual Question Answering (VQA) on CORE-MM | CogVLM-Chat | Abductive: 47.88; Analogical: 28.75; Deductive: 36.75; Overall score: 37.16; Params: 17B |