Vibe-Eval
Vibe-Eval is a hard evaluation suite for measuring progress of multimodal language models. It consists of 269 visual understanding prompts with gold-standard responses authored by experts. The benchmark has dual objectives: vibe checking multimodal chat models on day-to-day tasks and rigorously testing frontier models, with a hard subset in which more than 50% of the questions are answered incorrectly by all frontier models.
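Scoring is reference-graded: each model response is compared against the expert-written gold answer by an LLM judge rather than by exact matching, and per-prompt ratings are averaged into a leaderboard score. The sketch below shows the general shape of such a run; the record field names, the query_model / judge_response helpers, and the 1-5 to 0-1 rescaling are illustrative assumptions, not Reka's exact harness.

```python
import json
import statistics
from typing import Callable

def run_vibe_eval(
    records_path: str,
    query_model: Callable[[str, str], str],          # (image_url, prompt) -> model response (hypothetical helper)
    judge_response: Callable[[str, str, str], int],  # (prompt, reference, response) -> 1-5 rating (hypothetical helper)
) -> float:
    """Reference-graded evaluation over (image, prompt, reference) records."""
    ratings = []
    with open(records_path) as f:
        for line in f:
            record = json.loads(line)        # assumed JSONL layout and field names
            prompt = record["prompt"]
            reference = record["reference"]
            image_url = record["media_url"]

            response = query_model(image_url, prompt)
            rating = judge_response(prompt, reference, response)  # judge returns an integer in [1, 5]
            ratings.append((rating - 1) / 4)  # rescale the 1-5 rubric onto the 0-1 leaderboard scale

    return statistics.mean(ratings)           # e.g. 0.672 for the current leader
```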
Gemini 2.5 Pro Preview 06-05 from Google currently leads the Vibe-Eval leaderboard with a score of 67.2% across the 8 evaluated models, followed by Gemini 2.5 Pro at 65.6% and Gemini 2.5 Flash at 65.4%.
Progress Over Time
Interactive timeline showing model performance evolution on Vibe-Eval
Vibe-Eval Leaderboard
| Rank | Model | Organization | Params | Context | Cost (input / output per 1M tokens) |
|---|---|---|---|---|---|
| 1 | Gemini 2.5 Pro Preview 06-05 | Google | — | 1.0M | $1.25 / $10.00 |
| 2 | Gemini 2.5 Pro | Google | — | 1.0M | $1.25 / $10.00 |
| 3 | Gemini 2.5 Flash | Google | — | 1.0M | $0.30 / $2.50 |
| 4 | — | Google | — | 1.0M | $0.10 / $0.40 |
| 5 | — | Google | — | 2.1M | $2.50 / $10.00 |
| 6 | — | Google | — | 1.0M | $0.10 / $0.40 |
| 7 | — | Google | — | 1.0M | $0.15 / $0.60 |
| 8 | — | Google | 8B | 1.0M | $0.07 / $0.30 |
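The Cost column lists per-million-token prices for input and output. As a rough, back-of-the-envelope illustration of what one pass over the 269 prompts might cost at those rates, here is a small sketch; the per-prompt token counts are assumptions chosen for illustration, not measured values.

```python
def estimate_run_cost(
    n_prompts: int,
    input_price_per_m: float,       # USD per 1M input tokens (from the Cost column)
    output_price_per_m: float,      # USD per 1M output tokens (from the Cost column)
    avg_input_tokens: int = 1_000,  # assumed: prompt text plus image tokens
    avg_output_tokens: int = 400,   # assumed: typical response length
) -> float:
    """Rough cost in USD of a single benchmark run at per-million-token pricing."""
    input_cost = n_prompts * avg_input_tokens / 1_000_000 * input_price_per_m
    output_cost = n_prompts * avg_output_tokens / 1_000_000 * output_price_per_m
    return input_cost + output_cost

# Example: the rank-1 row at $1.25 / $10.00 per 1M tokens over all 269 prompts
print(f"${estimate_run_cost(269, 1.25, 10.00):.2f}")  # about $1.41 under these assumptions
```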
More evaluations to explore
Related benchmarks in the same category
GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching only 65% accuracy.
MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding the multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
MMLU: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, the humanities, the social sciences, and professional domains.
Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions
LiveCodeBench is a holistic and contamination-free evaluation benchmark for large language models for code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff.
IFEval: Instruction-Following Evaluation benchmark for large language models focused on verifiable instructions; it covers 25 instruction types across roughly 500 prompts, each containing one or more verifiable constraints.