QwenWebBench

QwenWebBench is an internal front-end code generation benchmark from the Qwen team. It is bilingual (EN/CN) and spans 7 categories (Web Design, Web Apps, Games, SVG, Data Visualization, Animation, and 3D). Generated pages are rendered automatically and scored by a multimodal judge for code and visual correctness, and scores are reported as Bradley-Terry (BT)/Elo ratings.
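For intuition, BT/Elo scores come from pairwise comparisons: each head-to-head judgment nudges the latent strength of the two models involved, and the fitted strengths are then mapped onto an Elo-style scale. Below is a minimal Python sketch of such a fit; the gradient-ascent loop, learning rate, and scale constants are illustrative assumptions, not QwenWebBench's actual scoring pipeline (production leaderboards typically fit Bradley-Terry via logistic regression with tie handling).

```python
import math
from collections import defaultdict

def fit_bt_elo(battles, iters=1000, lr=0.1, base=1000.0, scale=400.0):
    """Fit Bradley-Terry log-strengths from (winner, loser) pairs by
    gradient ascent on the log-likelihood, then map to an Elo scale."""
    models = {m for pair in battles for m in pair}
    r = {m: 0.0 for m in models}  # latent log-strengths, start equal
    for _ in range(iters):
        grad = defaultdict(float)
        for winner, loser in battles:
            # P(winner beats loser) under the current strengths
            p = 1.0 / (1.0 + math.exp(r[loser] - r[winner]))
            grad[winner] += 1.0 - p   # push the winner up ...
            grad[loser] -= 1.0 - p    # ... and the loser down
        for m in models:
            r[m] += lr * grad[m]
    mean = sum(r.values()) / len(r)   # anchor ratings around the base
    # Natural-log strengths -> familiar base-10 / 400 Elo convention
    return {m: base + scale * (r[m] - mean) / math.log(10) for m in models}

# Toy usage: A beats B twice, A beats C, B beats C
print(fit_bt_elo([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]))
```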

Qwen3.6-27B from Alibaba Cloud / Qwen Team currently leads the QwenWebBench leaderboard with a score of 1487.000; it is the only model evaluated so far.


Progress Over Time

[Interactive timeline showing model performance evolution on QwenWebBench]


QwenWebBench Leaderboard

1 model

| # | Model | Organization | Params | Context | Cost | License |
|---|-------|--------------|--------|---------|------|---------|
| 1 | Qwen3.6-27B | Alibaba Cloud / Qwen Team | 28B | 262K | $0.60 / $3.60 | |

FAQ

Common questions about QwenWebBench.

What is the QwenWebBench benchmark?

QwenWebBench is an internal front-end code generation benchmark from the Qwen team. It is bilingual (EN/CN) and spans 7 categories (Web Design, Web Apps, Games, SVG, Data Visualization, Animation, and 3D). Generated pages are rendered automatically and scored by a multimodal judge for code and visual correctness, and scores are reported as Bradley-Terry (BT)/Elo ratings.
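As a sketch of what "auto-render plus a multimodal judge" can look like in practice, the snippet below renders generated HTML in a headless browser (assuming Playwright) and hands the screenshot to a judge. `judge_screenshot` is a hypothetical placeholder, since QwenWebBench's actual judge and prompts are internal.

```python
import base64
from playwright.sync_api import sync_playwright

def render_to_png(html: str, path: str = "render.png") -> bytes:
    """Render front-end code in headless Chromium and capture a screenshot."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1280, "height": 800})
        page.set_content(html, wait_until="networkidle")  # let scripts/assets settle
        png = page.screenshot(path=path, full_page=True)
        browser.close()
    return png

def judge_screenshot(task: str, html: str, png: bytes) -> float:
    """Hypothetical multimodal judge: send the task, the code, and a
    base64 screenshot to a vision-language model and parse a score."""
    image_b64 = base64.b64encode(png).decode()  # most VLM APIs accept base64 images
    raise NotImplementedError("wire this to your judge model of choice")

html = "<h1 style='color: steelblue'>Hello, QwenWebBench</h1>"
png = render_to_png(html)
# score = judge_screenshot("Render a styled greeting", html, png)
```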

What is the QwenWebBench leaderboard?

The QwenWebBench leaderboard ranks the 1 AI model evaluated so far. Currently, Qwen3.6-27B by Alibaba Cloud / Qwen Team leads with a score of 1487.000; with a single entry, the average score is also 1487.000.

What is the highest QwenWebBench score?

The highest QwenWebBench score is 1487.000, achieved by Qwen3.6-27B from Alibaba Cloud / Qwen Team.

How many models are evaluated on QwenWebBench?

1 model has been evaluated on the QwenWebBench benchmark, with 0 verified results and 1 self-reported result.

What categories does QwenWebBench cover?

On this site, QwenWebBench is categorized under agents, coding, and multimodal. The benchmark itself spans seven task categories (Web Design, Web Apps, Games, SVG, Data Visualization, Animation, and 3D) and evaluates multimodal models with bilingual (EN/CN) support.

More evaluations to explore

Related benchmarks in the same category

MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that strengthens MMMU through a three-step process: filtering out questions answerable from text alone, augmenting the candidate options, and introducing a vision-only input setting. Model performance is substantially lower (a drop of 16.8-26.9%) than on the original MMMU, providing a more rigorous evaluation that more closely mimics real-world scenarios.

multimodal
47 models
BrowseComp

BrowseComp is a benchmark comprising 1,266 questions that challenge AI agents to persistently navigate the internet in search of hard-to-find, entangled information. It measures agents' ability to persist in information gathering, navigate the web creatively, and find concise, verifiable answers. Despite the difficulty of the questions, BrowseComp is simple and easy to use: predicted answers are short and easily verifiable against reference answers.

agents
45 models
Terminal-Bench 2.0

Terminal-Bench 2.0 is an updated benchmark that tests AI agents' ability to use tools and operate a computer via the terminal. It evaluates how well models handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, security tasks, data-science workflows, and cybersecurity vulnerabilities.

agents
39 models
MathVista

MathVista evaluates the mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples drawn from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models
CharXiv-R

CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.

multimodal
34 models