AlpacaEval 2.0
AlpacaEval 2.0 is a length-controlled automatic evaluator for instruction-following language models that uses GPT-4 Turbo to judge model responses against a baseline. It evaluates models on 805 diverse instruction-following tasks spanning creative writing, classification, programming, and general-knowledge questions. The benchmark achieves 0.98 Spearman correlation with Chatbot Arena while remaining fast (under 3 minutes) and cheap (under $10 in OpenAI credits). It addresses the length bias of automatic evaluation by reporting length-controlled win rates, and it weights each comparison by the annotator's preference probabilities rather than counting simple binary wins.
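To make the length-controlled idea concrete, the sketch below is a minimal, unofficial approximation (not the benchmark's actual implementation; the example preferences and length differences are made up). It fits a logistic regression of per-example preference on the length difference between the model's and the baseline's responses, then reads off the win rate predicted at zero length difference:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-example data: annotator preference for the model over the
# baseline (1 = model preferred, 0 = baseline preferred) and the standardized
# difference in response length (model tokens minus baseline tokens).
pref = np.array([1, 0, 1, 1, 0, 1, 0, 1])
length_delta = np.array([0.8, -0.2, 1.5, 0.3, -1.1, 2.0, -0.5, 0.9]).reshape(-1, 1)

# Fit a logistic regression of preference on length difference, mimicking the
# idea behind a length-controlled win rate (a toy approximation only).
model = LogisticRegression().fit(length_delta, pref)

# Raw win rate: the fraction of comparisons the model wins outright.
raw_win_rate = pref.mean()

# Length-controlled win rate: the predicted win probability when the length
# difference is zero, i.e. with the length advantage regressed out.
lc_win_rate = model.predict_proba(np.array([[0.0]]))[0, 1]

print(f"raw win rate: {raw_win_rate:.3f}")
print(f"length-controlled win rate: {lc_win_rate:.3f}")
```

The benchmark's own regression is richer (it uses continuous preferences derived from the annotator's token probabilities and additional terms), but the reported number is similarly a win rate with the length advantage regressed out.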
Granite 3.3 8B Base from IBM currently leads the AlpacaEval 2.0 leaderboard with a score of 62.7% across the 4 evaluated models, followed by Granite 3.3 8B Instruct, also at 62.7%, and DeepSeek-V2.5 at 50.5%.
Progress Over Time
Interactive timeline showing model performance evolution on AlpacaEval 2.0
AlpacaEval 2.0 Leaderboard
| Rank | Model | Score | Params | Context | Cost (input / output) | License |
|---|---|---|---|---|---|---|
| 1 | Granite 3.3 8B Base | 62.7% | 8B | — | — | — |
| 1 | Granite 3.3 8B Instruct | 62.7% | 8B | 128K | $0.50 / $0.50 | — |
| 3 | DeepSeek-V2.5 | 50.5% | 236B | 8K | $0.14 / $0.28 | — |
| 4 | — | — | 7B | — | — | — |
More evaluations to explore
Related benchmarks in the same category
GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD-level experts reaching about 65% accuracy.
MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
AIME 2025: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deductions and structured symbolic reasoning.
MMLU: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.
SWE-bench Verified: A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators, for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.
Humanity's Last Exam (HLE): A multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.