ScreenSpot

ScreenSpot is the first realistic GUI grounding benchmark spanning mobile, desktop, and web environments. The dataset comprises over 1,200 instructions collected from iOS, Android, macOS, Windows, and web interfaces, each annotated with the target element's type (text or icon/widget), and is designed to evaluate visual GUI agents' ability to accurately locate screen elements from natural-language instructions.
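
Scoring on ScreenSpot is a point-in-box check: a prediction counts as correct when the model's predicted click point falls inside the ground-truth bounding box of the target element, and accuracy is reported per element type. A minimal sketch of that scoring rule follows; the `bbox` and `data_type` field names are assumptions for illustration, not the dataset's exact schema.

```python
from collections import defaultdict

def point_in_bbox(point, bbox):
    """Return True if the (x, y) click point lies inside the
    (left, top, right, bottom) bounding box, in pixels."""
    x, y = point
    left, top, right, bottom = bbox
    return left <= x <= right and top <= y <= bottom

def screenspot_accuracy(samples, predictions):
    """Grounding accuracy overall and per element type.

    `samples` are dicts with a ground-truth "bbox" and a "data_type"
    ("text" or "icon"); `predictions` are (x, y) click points.
    Field names are illustrative, not the official schema.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for sample, point in zip(samples, predictions):
        correct = point_in_bbox(point, sample["bbox"])
        for key in ("overall", sample["data_type"]):
            totals[key] += 1
            hits[key] += int(correct)
    return {key: hits[key] / totals[key] for key in totals}
```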

Qwen3 VL 32B Instruct from Alibaba Cloud / Qwen Team currently leads the 13-model ScreenSpot leaderboard with a score of 0.958 (95.8%).

Paper: https://arxiv.org/abs/2401.10935

Qwen3 VL 32B Instruct (Alibaba Cloud / Qwen Team) leads with 95.8%, followed by Qwen3 VL 32B Thinking at 95.7% and Qwen3 VL 235B A22B Instruct at 95.4%.

Progress Over Time

[Interactive timeline showing model performance evolution on ScreenSpot; the chart marks the state-of-the-art frontier and distinguishes open from proprietary models.]

ScreenSpot Leaderboard

13 models, all from Alibaba Cloud / Qwen Team. Pricing is shown as input / output per 1M tokens; "n/a" marks values not listed.

| Rank | Model | Params | Context | Price (in / out per 1M tokens) | Score |
|------|-------|--------|---------|--------------------------------|-------|
| 1 | Qwen3 VL 32B Instruct | 33B | n/a | n/a | 95.8% |
| 2 | Qwen3 VL 32B Thinking | 33B | n/a | n/a | 95.7% |
| 3 | Qwen3 VL 235B A22B Instruct | 236B | 262K | $0.30 / $1.49 | 95.4% |
| 3 | n/a | 236B | 262K | $0.45 / $3.49 | 95.4% |
| 5 | n/a | 31B | 262K | $0.20 / $0.70 | n/a |
| 5 | n/a | 31B | 262K | $0.20 / $1.00 | n/a |
| 7 | n/a | 9B | 262K | $0.08 / $0.50 | n/a |
| 8 | n/a | 4B | 262K | $0.10 / $0.60 | n/a |
| 9 | n/a | 9B | 262K | $0.18 / $2.09 | n/a |
| 10 | n/a | 4B | 262K | $0.10 / $1.00 | n/a |
| 11 | n/a | 34B | n/a | n/a | n/a |
| 12 | n/a | 72B | n/a | n/a | n/a |
| 13 | n/a | 8B | n/a | n/a | n/a |
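
In practice, these models are applied to ScreenSpot-style grounding by sending a screenshot together with the instruction and parsing a coordinate from the reply. Below is a minimal sketch using an OpenAI-compatible chat endpoint of the kind many Qwen3-VL providers expose; the endpoint URL, the model id, and the expectation that the model answers with an "(x, y)" pair are all assumptions for illustration.

```python
import base64
import re

from openai import OpenAI

# Endpoint and API key are assumptions; substitute your provider's values.
client = OpenAI(base_url="https://example-provider/v1", api_key="YOUR_KEY")

def ground(screenshot_path: str, instruction: str):
    """Ask a vision-language model for a click point on a screenshot."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="qwen3-vl-32b-instruct",  # assumed model id
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text",
                 "text": f'Reply with the click point "(x, y)" in pixels for: {instruction}'},
            ],
        }],
    )
    # Parse the first "(x, y)" pair; the reply format is an assumption.
    match = re.search(r"\((\d+)\s*,\s*(\d+)\)", resp.choices[0].message.content)
    return (int(match.group(1)), int(match.group(2))) if match else None
```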

FAQ

Common questions about ScreenSpot.

What is the ScreenSpot benchmark?

ScreenSpot is the first realistic GUI grounding benchmark spanning mobile, desktop, and web environments. The dataset comprises over 1,200 instructions collected from iOS, Android, macOS, Windows, and web interfaces, each annotated with the target element's type (text or icon/widget), and is designed to evaluate visual GUI agents' ability to accurately locate screen elements from natural-language instructions.

What is the ScreenSpot leaderboard?

The ScreenSpot leaderboard ranks 13 AI models based on their performance on this benchmark. Currently, Qwen3 VL 32B Instruct by Alibaba Cloud / Qwen Team leads with a score of 0.958. The average score across all models is 0.928.

What is the highest ScreenSpot score?

The highest ScreenSpot score is 0.958, achieved by Qwen3 VL 32B Instruct from Alibaba Cloud / Qwen Team.

How many models are evaluated on ScreenSpot?

13 models have been evaluated on the ScreenSpot benchmark; all 13 results are self-reported, and none have been independently verified.

Where can I find the ScreenSpot paper?

The ScreenSpot paper is available at https://arxiv.org/abs/2401.10935. The paper details the methodology, dataset construction, and evaluation criteria.
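
The dataset itself is publicly released alongside the paper. A minimal loading sketch with the Hugging Face `datasets` library is shown below; the `rootsautomation/ScreenSpot` hub path, the `test` split, and the field names are assumptions that should be checked against the official release.

```python
from datasets import load_dataset

# Hub path and split are assumptions; the paper's repository links the
# canonical release of the data.
ds = load_dataset("rootsautomation/ScreenSpot", split="test")

example = ds[0]
print(example["instruction"])  # natural-language description of the target (assumed field)
print(example["bbox"])         # ground-truth element bounding box (assumed field)
```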

What categories does ScreenSpot cover?

ScreenSpot is categorized under grounding, multimodal, spatial reasoning, and vision. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision · multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that strengthens MMMU through a three-step process: filtering out questions answerable from text alone, augmenting the candidate options, and introducing a vision-only input setting. Model accuracy drops by 16.8-26.9 points relative to the original MMMU, yielding a more rigorous evaluation that more closely mimics real-world scenarios.

multimodal
48 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models
CharXiv-R

CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.

multimodal
35 models
AI2D

AI2D is a dataset of 4,903 illustrative diagrams from grade school natural sciences (such as food webs, human physiology, and life cycles) with over 15,000 multiple choice questions and answers. The benchmark evaluates diagram understanding and visual reasoning capabilities, requiring models to interpret diagrammatic elements, relationships, and structure to answer questions about scientific concepts represented in visual form.

multimodal
32 models