WildBench
WildBench is an automated evaluation framework that benchmarks large language models using 1,024 challenging, real-world tasks selected from over one million human-chatbot conversation logs. It introduces two evaluation metrics (WB-Reward and WB-Score) that achieve high correlation with human preferences and uses task-specific checklists for systematic evaluation.
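To make the two metrics concrete, the snippet below is a minimal Python sketch of how checklist-guided judge outputs might be aggregated. It is not the official WildBench code: the 1-10 rating scale, the rescaling to 0-100, and the pairwise verdict labels and reward values are assumptions made here for illustration; producing the per-task ratings and verdicts is the job of the LLM judge and the task-specific checklists described above.

```python
# Minimal sketch of WildBench-style metric aggregation. This is NOT the official
# implementation: the 1-10 rating scale, the 0-100 rescaling, and the pairwise
# verdict labels/reward values below are illustrative assumptions.
from statistics import mean


def wb_score(judge_ratings: list[int]) -> float:
    """Aggregate per-task judge ratings (assumed 1-10 scale) into one 0-100 score."""
    if not all(1 <= r <= 10 for r in judge_ratings):
        raise ValueError("expected ratings on a 1-10 scale")
    return mean((r - 1) / 9 * 100 for r in judge_ratings)


# Assumed verdict labels for a pairwise comparison against a baseline model.
PAIRWISE_REWARD = {
    "much_better": 1.0,
    "slightly_better": 0.5,
    "tie": 0.0,
    "slightly_worse": -0.5,
    "much_worse": -1.0,
}


def wb_reward(judge_verdicts: list[str]) -> float:
    """Average pairwise verdicts into a single reward in [-1, 1]."""
    return mean(PAIRWISE_REWARD[v] for v in judge_verdicts)


if __name__ == "__main__":
    print(round(wb_score([8, 6, 7]), 1))                                  # 66.7
    print(round(wb_reward(["much_better", "tie", "slightly_worse"]), 3))  # 0.167
```

Only the aggregation step is sketched here; the judging itself is automated by a strong evaluator model working through each task's checklist.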
Mistral Large 3 from Mistral AI currently leads the WildBench leaderboard with a score of 0.685 among the 8 evaluated models. It shares the top score of 68.5% with Ministral 3 (14B Instruct 2512), followed by Ministral 3 (8B Instruct 2512) at 66.8%.
WildBench Leaderboard
| Rank | Model | Organization | Parameters | Context | Cost (input / output per 1M tokens) | License |
|---|---|---|---|---|---|---|
| 1 | Mistral Large 3 | Mistral AI | 675B | 128K | $2.00 / $5.00 | — |
| 1 | Ministral 3 (14B Instruct 2512) | Mistral AI | 14B | — | — | — |
| 3 | Ministral 3 (8B Instruct 2512) | Mistral AI | 8B | — | — | — |
| 4 | — | Mistral AI | 24B | — | — | — |
| 5 | — | Mistral AI | 3B | — | — | — |
| 6 | — | Mistral AI | 24B | 32K | $0.07 / $0.14 | — |
| 7 | — | AI21 Labs | 398B | 256K | $2.00 / $8.00 | — |
| 8 | — | AI21 Labs | 52B | 256K | $0.20 / $0.40 | — |
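As a reading aid for the Cost column, which lists separate input and output prices per million tokens, the short snippet below shows how those two prices combine into the cost of a single request. The prices are taken from the first row; the function and variable names are hypothetical, not any provider's billing API.

```python
# Illustrative arithmetic only: how the input/output prices in the Cost column
# (USD per 1M tokens) translate into the price of a single request. The prices
# below are the row-1 values from the table above.
INPUT_PRICE_PER_M = 2.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 5.00  # USD per 1M output tokens


def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000


# Example: a 2,000-token prompt with a 500-token completion.
print(f"${request_cost(2_000, 500):.4f}")  # $0.0065
```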
More evaluations to explore
Related benchmarks in the same category
- **GPQA**: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.
- **MMLU-Pro**: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
- **AIME 2025**: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deduction and structured symbolic reasoning.
- **MMLU**: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.
- **SWE-bench Verified**: A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators, for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.
- **Humanity's Last Exam (HLE)**: A multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.