SimpleVQA

SimpleVQA is a visual question answering benchmark focused on simple queries.
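
The page does not specify SimpleVQA's exact scoring protocol. As a minimal sketch, short-answer VQA benchmarks are commonly scored with normalized exact-match accuracy; the predictions and references below are hypothetical:

```python
def normalize(answer: str) -> str:
    """Lowercase and strip punctuation/extra whitespace for lenient matching."""
    return "".join(ch for ch in answer.lower().strip() if ch.isalnum() or ch == " ")

def vqa_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of questions where the model's answer matches the reference."""
    matches = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return matches / len(references)

# Hypothetical items: (image, question) pairs reduced to their answer strings.
preds = ["two", "a red bus", "Yes"]
refs  = ["two", "a red bus", "no"]
print(vqa_accuracy(preds, refs))  # 0.666...
```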

GLM-5V-Turbo from Zhipu AI currently leads the SimpleVQA leaderboard with a score of 0.782 among the 10 evaluated AI models.

Zhipu AI's GLM-5V-Turbo leads with a score of 0.782, followed by MetaMuse Spark and Moonshot AI's Kimi K2.5, both scoring around 0.7.

Progress Over Time

[Interactive timeline showing model performance evolution on SimpleVQA, with a state-of-the-art frontier and markers distinguishing open from proprietary models]

SimpleVQA Leaderboard

10 models
| Rank | Organization | Model | Score | Params | Context | Cost (input / output) |
|------|--------------|-------|-------|--------|---------|------------------------|
| 1 | Zhipu AI | GLM-5V-Turbo | 0.782 | | | |
| 2 | | MetaMuse Spark | | | | |
| 3 | Moonshot AI | Kimi K2.5 | | 1.0T | 262K | $0.60 / $3.00 |
| 4 | Alibaba Cloud / Qwen Team | | | | 1.0M | $0.50 / $3.00 |
| 5 | Alibaba Cloud / Qwen Team | | | 122B | 262K | $0.40 / $3.20 |
| 6 | Alibaba Cloud / Qwen Team | | | 236B | 262K | $0.45 / $3.49 |
| 7 | Alibaba Cloud / Qwen Team | | | 35B | | |
| 8 | Alibaba Cloud / Qwen Team | | | 35B | 262K | $0.25 / $2.00 |
| 9 | Alibaba Cloud / Qwen Team | | | 28B | 262K | $0.60 / $3.60 |
| 10 | Alibaba Cloud / Qwen Team | | | 27B | 262K | $0.30 / $2.40 |
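
The cost column pairs an input price with an output price. A minimal sketch of estimating a single request's cost, under the assumption that prices are USD per million tokens (the unit is not stated on this page):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price: float, output_price: float) -> float:
    """Estimate request cost, assuming prices are USD per 1M tokens.

    The per-unit basis is an assumption; the table lists only the raw
    "$in / $out" pair for each model.
    """
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000

# Example: Kimi K2.5 at $0.60 / $3.00 (input / output)
cost = estimate_cost(input_tokens=50_000, output_tokens=2_000,
                     input_price=0.60, output_price=3.00)
print(f"${cost:.4f}")  # -> $0.0360
```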

FAQ

Common questions about SimpleVQA.

What is the SimpleVQA benchmark?

SimpleVQA is a visual question answering benchmark focused on simple queries.

What is the SimpleVQA leaderboard?

The SimpleVQA leaderboard ranks 10 AI models based on their performance on this benchmark. Currently, GLM-5V-Turbo by Zhipu AI leads with a score of 0.782. The average score across all models is 0.640.
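
A minimal sketch of how these headline figures (leader and mean) follow from the per-model scores; every value below except GLM-5V-Turbo's 0.782 is a placeholder:

```python
# Hypothetical score table; only GLM-5V-Turbo's 0.782 is reported on this page.
scores = {
    "GLM-5V-Turbo": 0.782,
    "Kimi K2.5": 0.70,   # placeholder value
    # ... eight more models
}

leader = max(scores, key=scores.get)          # -> "GLM-5V-Turbo"
average = sum(scores.values()) / len(scores)  # with all 10 real scores, the page reports 0.640
print(leader, round(average, 3))
```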

What is the highest SimpleVQA score?

The highest SimpleVQA score is 0.782, achieved by GLM-5V-Turbo from Zhipu AI.

How many models are evaluated on SimpleVQA?

10 models have been evaluated on the SimpleVQA benchmark; all 10 results are self-reported, with none independently verified.

What categories does SimpleVQA cover?

SimpleVQA is categorized under general, image-to-text, multimodal, and vision. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

GPQA

A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.

general
213 models
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

general
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

general
99 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision, multimodal
74 models
LiveCodeBench

LiveCodeBench is a holistic and contamination-free evaluation benchmark for large language models for code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff.

general
71 models
IFEval

Instruction-Following Evaluation (IFEval) benchmark for large language models, focusing on verifiable instructions: 25 instruction types across around 500 prompts, each containing one or more verifiable constraints.

general
63 models