LongText-Bench

LongText-Bench evaluates text-to-image models on their ability to accurately render long text passages within generated images. It includes English (EN) and Chinese (ZH) subsets to assess multilingual text rendering capabilities.
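
The page does not document the scoring pipeline, but benchmarks of this kind typically OCR the generated image and compare the recognized text against the prompt's target passage. Below is a minimal sketch of such a metric, assuming an upstream OCR step (e.g., pytesseract's image_to_string, not shown) and a normalized character-level similarity; the function names are illustrative, not LongText-Bench's actual protocol.

    import difflib
    import unicodedata

    def normalize(text: str) -> str:
        """Normalize unicode and whitespace so OCR quirks don't dominate the score."""
        text = unicodedata.normalize("NFKC", text)
        return " ".join(text.split()).lower()

    def text_render_score(target: str, ocr_output: str) -> float:
        """Character-level similarity in [0, 1] between the prompt's target text
        and the text recovered (via OCR) from the generated image.
        This is an assumed stand-in for the benchmark's real scorer."""
        a, b = normalize(target), normalize(ocr_output)
        return difflib.SequenceMatcher(None, a, b).ratio()

    # Example: a near-perfect rendering with one OCR confusion ("0" read as "O").
    prompt_text = "The quick brown fox jumps over the lazy dog 0123456789"
    recognized = "The quick brown fox jumps over the lazy dog O123456789"
    print(f"score = {text_render_score(prompt_text, recognized):.3f}")

A leaderboard score like 0.966 would then be the mean of such per-image scores over the EN and ZH prompt sets, under the same assumption.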

GLM-Image from Zhipu AI currently leads the LongText-Bench leaderboard with a score of 0.966; it is the only model evaluated so far.

Progress Over Time

[Interactive timeline of model performance on LongText-Bench over time, with a state-of-the-art frontier line and a legend distinguishing open from proprietary models.]

LongText-Bench Leaderboard

1 model evaluated.

Rank 1: GLM-Image (Zhipu AI) — score 0.966, 16B parameters, 4K context (cost and license not listed).

FAQ

Common questions about LongText-Bench.

What is the LongText-Bench benchmark?

LongText-Bench evaluates text-to-image models on their ability to accurately render long text passages within generated images. It includes English (EN) and Chinese (ZH) subsets to assess multilingual text rendering capabilities.

What is the LongText-Bench leaderboard?

The LongText-Bench leaderboard ranks AI models by their performance on this benchmark. It currently holds a single entry: GLM-Image by Zhipu AI, with a score of 0.966, which (with only one model) is also the average score.

What is the highest LongText-Bench score?

The highest LongText-Bench score is 0.966, achieved by GLM-Image from Zhipu AI.

How many models are evaluated on LongText-Bench?

One model has been evaluated on the LongText-Bench benchmark, with 0 verified results and 1 self-reported result.

What categories does LongText-Bench cover?

LongText-Bench is categorized under image-generation, language, and vision. The benchmark evaluates image-generation models and offers multilingual (English and Chinese) coverage.

More evaluations to explore

Related benchmarks in the same category

View all image-generation benchmarks
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

language
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

language
99 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision, multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

vision, multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Achieves significantly lower model performance (16.8-26.9%) compared to original MMMU, providing more rigorous evaluation that closely mimics real-world scenarios.

vision, multimodal
47 models
MMLU-Redux

An improved version of the MMLU benchmark featuring manually re-annotated questions to identify and correct errors in the original dataset. Provides more reliable evaluation metrics for language models by addressing dataset quality issues found in the original MMLU.

language
45 models