MLVU-M
Leaderboard for MLVU-M, the multiple-choice track of the Multi-task Long Video Understanding (MLVU) benchmark for long-video reasoning.
Qwen3 VL 32B Instruct from Alibaba Cloud / Qwen Team currently leads the MLVU-M leaderboard with a score of 82.1% among the 8 models evaluated. It is followed by Qwen3 VL 30B A3B Instruct at 81.3% and Qwen3 VL 30B A3B Thinking at 78.9%.
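For context, MLVU-M results are reported as multiple-choice accuracy; one common convention (MLVU's M-Avg) macro-averages per-task accuracies rather than pooling all questions. A minimal sketch under that assumption, with hypothetical record fields `task`, `prediction`, and `answer`:

```python
from collections import defaultdict

def mlvu_m_score(records):
    """Multiple-choice score in the style of MLVU's M-Avg:
    per-task accuracy, macro-averaged across tasks.

    `records` holds dicts with hypothetical keys 'task',
    'prediction', and 'answer' (e.g. option letters 'A'-'D').
    """
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["task"]] += 1
        correct[r["task"]] += int(r["prediction"] == r["answer"])
    per_task = {t: correct[t] / total[t] for t in total}
    return sum(per_task.values()) / len(per_task), per_task

score, per_task = mlvu_m_score([
    {"task": "needle_qa", "prediction": "B", "answer": "B"},
    {"task": "needle_qa", "prediction": "A", "answer": "C"},
    {"task": "plot_qa", "prediction": "D", "answer": "D"},
])
print(f"M-Avg: {score:.3f}")  # 0.750 on this toy input
```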
Progress Over Time
Interactive timeline showing model performance evolution on MLVU-M
MLVU-M Leaderboard
| # | Model | Organization | Params | Context | Cost (input / output) |
|---|---|---|---|---|---|
| 1 | Qwen3 VL 32B Instruct | Alibaba Cloud / Qwen Team | 33B | — | — |
| 2 | Qwen3 VL 30B A3B Instruct | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 |
| 3 | Qwen3 VL 30B A3B Thinking | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 |
| 4 | — | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 |
| 5 | — | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 |
| 6 | — | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 |
| 7 | — | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 |
| 8 | — | Alibaba Cloud / Qwen Team | 72B | — | — |
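The Cost column lists input / output prices; assuming the common per-1M-token convention (the page does not state the unit), a rough per-request estimate can be computed like this, with illustrative token counts:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 price_in: float, price_out: float) -> float:
    """Estimate the USD cost of one request, assuming the listed
    prices are per 1M input/output tokens (an assumption; the
    leaderboard does not state the unit)."""
    return input_tokens / 1e6 * price_in + output_tokens / 1e6 * price_out

# Example: a long-video prompt near the 262K context limit at the
# rank-2 model's listed rates ($0.20 input / $0.70 output).
print(f"${request_cost(250_000, 1_000, 0.20, 0.70):.4f}")  # $0.0507
```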
More evaluations to explore
Related benchmarks in the same category
- GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult; PhD experts reach about 65% accuracy.
- MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
- MMLU: The Massive Multitask Language Understanding benchmark, testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.
- LiveCodeBench: A holistic, contamination-free code evaluation benchmark for large language models. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on problems released after a model's training cutoff.
- IFEval: The Instruction-Following Evaluation benchmark for large language models, focusing on verifiable instructions, with 25 instruction types and around 500 prompts that each contain one or more verifiable constraints.
- MMMU: The Massive Multi-discipline Multimodal Understanding benchmark, designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) across 30 subjects and 183 subfields.