VisualWebBench

A multimodal benchmark designed to assess the capabilities of multimodal large language models (MLLMs) on web page understanding and grounding tasks. It comprises 7 tasks (captioning, webpage QA, heading OCR, element OCR, element grounding, action prediction, and action grounding) with 1.5K human-curated instances drawn from 139 real websites spanning 87 sub-domains.
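For orientation, here is a minimal sketch of how one might load a single VisualWebBench task for evaluation. It assumes the dataset is published on the Hugging Face Hub as "visualwebbench/VisualWebBench" with one configuration per task; the config names below are guesses mapped from the task list above, not confirmed by this page.

```python
# Minimal sketch: load one VisualWebBench task split.
# Assumptions (not confirmed here): the dataset is hosted on the Hugging
# Face Hub as "visualwebbench/VisualWebBench", and each of the 7 tasks is
# exposed as its own configuration under the names below.
from datasets import load_dataset

TASK_CONFIGS = [
    "web_caption",        # captioning
    "web_qa",             # webpage QA
    "heading_ocr",        # heading OCR
    "element_ocr",        # element OCR
    "element_ground",     # element grounding
    "action_prediction",  # action prediction
    "action_ground",      # action grounding
]

ds = load_dataset("visualwebbench/VisualWebBench", TASK_CONFIGS[0], split="test")
print(len(ds))       # number of human-curated instances in this task
print(ds[0].keys())  # available fields, e.g. a page screenshot plus a prompt
```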

Amazon's Nova Pro currently leads the VisualWebBench leaderboard with a score of 0.797, out of 2 evaluated models.

Paper: https://arxiv.org/abs/2404.05955

Amazon's Nova Pro leads with 79.7%, followed by Amazon's Nova Lite at 77.7%.

Progress Over Time

[Interactive timeline showing model performance evolution on VisualWebBench; legend: state-of-the-art frontier, open vs. proprietary models]

VisualWebBench Leaderboard

2 models
Rank  Model      Organization  Score  Context  Cost (input / output per 1M tokens)  License
1     Nova Pro   Amazon        79.7%  300K     $0.80 / $3.20                         Proprietary
2     Nova Lite  Amazon        77.7%  300K     $0.06 / $0.24                         Proprietary
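To read the Cost column: prices are in USD per million tokens, input / output. The sketch below shows what that implies for a single request; the token counts are made-up illustration values, not benchmark statistics.

```python
# Hypothetical illustration of the Cost column (USD per 1M tokens).
# The request sizes are invented for illustration only.
PRICE_PER_M = {"Nova Pro": (0.80, 3.20), "Nova Lite": (0.06, 0.24)}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one request at the listed per-1M-token prices."""
    price_in, price_out = PRICE_PER_M[model]
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

print(f"${request_cost('Nova Pro', 2_000, 500):.5f}")  # -> $0.00320
```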

FAQ

Common questions about VisualWebBench.

What is the VisualWebBench benchmark?

A multimodal benchmark designed to assess the capabilities of multimodal large language models (MLLMs) on web page understanding and grounding tasks. It comprises 7 tasks (captioning, webpage QA, heading OCR, element OCR, element grounding, action prediction, and action grounding) with 1.5K human-curated instances drawn from 139 real websites spanning 87 sub-domains.

What is the VisualWebBench leaderboard?

The VisualWebBench leaderboard ranks 2 AI models based on their performance on this benchmark. Currently, Nova Pro by Amazon leads with a score of 0.797. The average score across all models is 0.787.
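As a sanity check, both aggregates quoted in this answer follow directly from the two self-reported scores:

```python
# Reproduce the leaderboard aggregates from the two reported scores.
scores = {"Nova Pro": 0.797, "Nova Lite": 0.777}

best = max(scores.values())
average = sum(scores.values()) / len(scores)
print(f"best = {best:.3f}, average = {average:.3f}")
# -> best = 0.797, average = 0.787
```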

What is the highest VisualWebBench score?

The highest VisualWebBench score is 0.797, achieved by Nova Pro from Amazon.

How many models are evaluated on VisualWebBench?

2 models have been evaluated on the VisualWebBench benchmark, with 0 verified results and 2 self-reported results.

Where can I find the VisualWebBench paper?

The VisualWebBench paper is available at https://arxiv.org/abs/2404.05955. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does VisualWebBench cover?

VisualWebBench is categorized under frontend development, multimodal, and vision. It is intended for evaluating multimodal (vision-language) models.

More evaluations to explore

Related benchmarks in the same category

SWE-Bench Verified

A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators. It evaluates language models' ability to resolve real-world issues by generating patches for Python codebases.

frontend development
89 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multimodal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision, multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering out questions answerable from text alone, augmenting candidate options, and introducing a vision-only input setting. Model performance drops substantially (by 16.8-26.9%) relative to the original MMMU, yielding a more rigorous evaluation that more closely mimics real-world scenarios.

multimodal
47 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models
CharXiv-R

CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.

multimodal
34 models