Abstract
This report introduces a new family of multimodal models, Gemini, that exhibit remarkable capabilities across image, audio, video, and text understanding. The Gemini family consists of Ultra, Pro, and Nano sizes, suitable for applications ranging from complex reasoning tasks to on-device memory-constrained use cases. Evaluation on a broad range of benchmarks shows that our most capable Gemini Ultra model advances the state of the art in 30 of 32 of these benchmarks, notably being the first model to achieve human-expert performance on the well-studied exam benchmark MMLU, and improving the state of the art in every one of the 20 multimodal benchmarks we examined. We believe that the new capabilities of the Gemini family in cross-modal reasoning and language understanding will enable a wide variety of use cases. We discuss our approach toward post-training and deploying Gemini models responsibly to users through services including Gemini, Gemini Advanced, Google AI Studio, and Cloud Vertex AI.
Benchmarks
| Benchmark | Model (evaluation setting) | Metric: score |
|---|---|---|
| arithmetic-reasoning-on-gsm8k | Gemini Pro (maj1@32) | Accuracy: 86.5 |
| chart-question-answering-on-chartqa | Gemini Ultra | 1:1 Accuracy: 80.8 |
| long-context-understanding-on-mmneedle | Gemini Pro 1.0 | Exact Accuracy: 1 image, 2×2 stitching: 29.53; 1 image, 4×4: 24.78; 1 image, 8×8: 2.11; 10 images, 1×1: 16.25; 10 images, 2×2: 4.82; 10 images, 4×4: 0.4; 10 images, 8×8: 0 |
| math-word-problem-solving-on-math | Gemini Pro (4-shot) | Accuracy: 32.6 |
| math-word-problem-solving-on-math | Gemini Ultra (4-shot) | Accuracy: 53.2 |
| temporal-casual-qa-on-next-qa | Gemini Ultra (zero-shot) | WUPS: 29.9 |
| temporal-casual-qa-on-next-qa | Gemini Pro (zero-shot) | WUPS: 28.0 |
| visual-question-answering-on-mm-vet | Gemini 1.0 Pro Vision (gemini-pro-vision) | GPT-4 score: 64.3±0.4 |
| visual-question-answering-on-mm-vet-v2 | Gemini Pro Vision | GPT-4 score: 57.2±0.2 |
| visual-question-answering-vqa-on | Gemini Ultra (pixel only) | ANLS: 80.3 |
| visual-question-answering-vqa-on-ai2d | Gemini Ultra | EM: 79.5 |
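The GSM8K row above reports a maj1@32 score, i.e. majority voting: sample 32 reasoning chains per problem and score only the single most frequent final answer against the reference. As a minimal sketch of that scoring scheme (the function names and data layout here are illustrative, not from the report):

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent final answer among sampled completions."""
    return Counter(answers).most_common(1)[0][0]

def maj_at_k_accuracy(samples_per_problem, references):
    """Fraction of problems where the majority answer matches the gold answer.

    samples_per_problem: list of lists, each inner list holding the k
    sampled final answers for one problem (k = 32 for maj1@32).
    references: list of gold answers, one per problem.
    """
    correct = sum(
        majority_vote(samples) == ref
        for samples, ref in zip(samples_per_problem, references)
    )
    return correct / len(references)
```

In practice each sampled completion is first reduced to its extracted final answer (e.g. the number after "The answer is"); the vote is over those extracted strings, not the full chains of thought.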