Design2Code

Design2Code evaluates the ability to generate code (HTML/CSS/JS) from visual designs.
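
To make the task format concrete, here is a minimal Python sketch of how a Design2Code-style example could be scored: the model receives a webpage screenshot, must return a single self-contained HTML/CSS file, and the rendered result is compared against the original design. This is an illustrative assumption, not the official evaluation harness; query_vlm, render_to_png, and visual_similarity are hypothetical placeholders.

# Minimal sketch of a Design2Code-style scoring loop (illustrative only).
# query_vlm, render_to_png, and visual_similarity are hypothetical placeholders,
# not functions from the official benchmark harness.

from pathlib import Path

PROMPT = (
    "Here is a screenshot of a webpage. Reproduce it as a single "
    "self-contained HTML file with inline CSS and no external assets."
)

def query_vlm(screenshot_png: bytes, prompt: str) -> str:
    """Placeholder: call the vision-language model under test, return its HTML."""
    raise NotImplementedError

def render_to_png(html_path: Path) -> bytes:
    """Placeholder: render the HTML (e.g. with a headless browser) to a screenshot."""
    raise NotImplementedError

def visual_similarity(reference_png: bytes, candidate_png: bytes) -> float:
    """Placeholder: compare two renders and return a score in [0, 1],
    e.g. combining element-level matching with embedding similarity."""
    raise NotImplementedError

def score_example(example_dir: Path) -> float:
    # Each example pairs a design screenshot with the model's generated page.
    reference_png = (example_dir / "design.png").read_bytes()
    generated_html = query_vlm(reference_png, PROMPT)
    out_path = example_dir / "generated.html"
    out_path.write_text(generated_html)
    return visual_similarity(reference_png, render_to_png(out_path))

A leaderboard score such as the 0.948 below would then typically be an average of these per-example similarity scores over the benchmark's test set.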

Among the 2 AI models evaluated, GLM-5V-Turbo from Zhipu AI currently leads the Design2Code leaderboard with a score of 0.948.

Zhipu AI's GLM-5V-Turbo leads with a score of 0.948, followed by Alibaba Cloud / Qwen Team's Qwen3 VL 235B A22B Thinking in second place.

Progress Over Time

Interactive timeline showing model performance evolution on Design2Code (state-of-the-art frontier; open vs. proprietary models)

Design2Code Leaderboard

2 models

1. GLM-5V-Turbo (Zhipu AI)
2. Qwen3 VL 235B A22B Thinking (Alibaba Cloud / Qwen Team): 236B parameters, 262K context, cost $0.45 / $3.49

FAQ

Common questions about Design2Code.

What is the Design2Code benchmark?

Design2Code evaluates the ability to generate code (HTML/CSS/JS) from visual designs.

What is the Design2Code leaderboard?

The Design2Code leaderboard ranks 2 AI models based on their performance on this benchmark. Currently, GLM-5V-Turbo by Zhipu AI leads with a score of 0.948. The average score across all models is 0.941.

What is the highest Design2Code score?

The highest Design2Code score is 0.948, achieved by GLM-5V-Turbo from Zhipu AI.

How many models are evaluated on Design2Code?

2 models have been evaluated on the Design2Code benchmark, with 0 verified results and 2 self-reported results.

What categories does Design2Code cover?

Design2Code is categorized under code, multimodal, and vision. The benchmark evaluates vision (image-input) models.

More evaluations to explore

Related benchmarks in the same category

SWE-Bench Verified

A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.

code
89 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision, multimodal
74 models
LiveCodeBench

LiveCodeBench is a holistic and contamination-free evaluation benchmark for large language models for code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff.

code
71 models
HumanEval

A benchmark that measures functional correctness for synthesizing programs from docstrings, consisting of 164 original programming problems assessing language comprehension, algorithms, and simple mathematics.

code
66 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Achieves significantly lower model performance (16.8-26.9%) compared to original MMMU, providing more rigorous evaluation that closely mimics real-world scenarios.

multimodal
47 models