HumanEval
A benchmark that measures functional correctness for synthesizing programs from docstrings. It consists of 164 original programming problems assessing language comprehension, algorithms, and simple mathematics.
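Each problem supplies a Python function signature and docstring; the model must complete the function body, and the completion is credited only if it passes the problem's held-out unit tests. The sketch below shows a hypothetical problem in the same style (the function name, completion, and check function are illustrative, not taken from the dataset):

```python
def count_vowels(text: str) -> int:
    """Return the number of vowels (a, e, i, o, u) in the given string,
    case-insensitively.
    >>> count_vowels("HumanEval")
    4
    """
    # A model-generated completion would start here.
    return sum(1 for ch in text.lower() if ch in "aeiou")


def check(candidate):
    # Functional-correctness check: the completion is credited only if
    # every assertion passes.
    assert candidate("") == 0
    assert candidate("HumanEval") == 4
    assert candidate("rhythm") == 0
    assert candidate("AEIOU") == 5


check(count_vowels)
```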
MiniCPM-SALA from OpenBMB currently leads the HumanEval leaderboard with a score of 95.1% across 66 evaluated AI models, followed by Kimi K2 0905 at 94.5% and Claude 3.5 Sonnet at 93.7%.
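HumanEval results are usually reported as pass@k (leaderboards like the one below typically use pass@1): for each problem, n completions are sampled, c of them pass the tests, and pass@k estimates the probability that at least one of k sampled completions is correct. A minimal sketch of the unbiased estimator from the original HumanEval paper (Chen et al., 2021), assuming per-problem (n, c) counts are already available:

```python
import math
from typing import List, Tuple


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper:
    1 - C(n - c, k) / C(n, k), computed as a numerically stable product.
    n = samples drawn for a problem, c = samples passing all tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))


def humaneval_score(per_problem: List[Tuple[int, int]], k: int = 1) -> float:
    """Average pass@k over (n, c) pairs, one pair per problem
    (164 problems for the full benchmark)."""
    return sum(pass_at_k(n, c, k) for n, c in per_problem) / len(per_problem)


# Hypothetical example: three problems, 20 samples each,
# with 20, 18, and 0 passing samples respectively.
print(humaneval_score([(20, 20), (20, 18), (20, 0)], k=1))  # ~0.633
```

With a single sample per problem (n = k = 1), this reduces to the plain fraction of problems solved on the first attempt.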
Progress Over Time
Interactive timeline showing model performance evolution on HumanEval
HumanEval Leaderboard
| Rank | Organization | Parameters | Context | Cost (input / output, per 1M tokens) |
|---|---|---|---|---|
| 1 | OpenBMB | 9B | — | — |
| 2 | Moonshot AI | 1.0T | 262K | $0.60 / $2.50 |
| 3 | Anthropic | — | 200K | $3.00 / $15.00 |
| 4 | OpenAI | — | — | — |
| 5 | Moonshot AI | 1.0T | 200K | $0.50 / $0.50 |
| 6 | Alibaba Cloud / Qwen Team | 32B | 128K | $0.09 / $0.09 |
| 7 | OpenAI | — | 128K | $3.00 / $12.00 |
| 8 | Sarvam AI | 30B | — | — |
| 9 | Mistral AI | 123B | 128K | $2.00 / $6.00 |
| 9 | Anthropic | — | 200K | $3.00 / $15.00 |
| 11 | Alibaba Cloud / Qwen Team | 34B | — | — |
| 12 | OpenAI | — | 128K | $2.50 / $10.00 |
| 13 | — | 8B | 128K | $0.50 / $0.50 |
| 13 | — | 8B | — | — |
| 15 | Google | — | — | — |
| 16 | — | 405B | 128K | $0.89 / $0.89 |
| 16 | DeepSeek | 236B | 8K | $0.14 / $0.28 |
| 16 | Amazon | — | 300K | $0.80 / $3.20 |
| 19 | Meituan | 560B | 128K | $0.30 / $1.20 |
| 19 | Mistral AI | 24B | — | — |
| 21 | Alibaba Cloud / Qwen Team | 7B | — | — |
| 21 | xAI | — | 128K | $2.00 / $10.00 |
| 21 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 21 | — | 70B | 128K | $0.20 / $0.20 |
| 25 | Anthropic | — | 200K | $0.80 / $4.00 |
| 25 | OpenAI | — | 200K | $15.00 / $60.00 |
| 27 | OpenAI | — | 128K | $75.00 / $150.00 |
| 28 | Google | 27B | 131K | $0.10 / $0.20 |
| 29 | OpenAI | — | 128K | $0.15 / $0.60 |
| 30 | OpenAI | — | 128K | $10.00 / $30.00 |
| 31 | Alibaba Cloud / Qwen Team | 73B | 131K | $0.35 / $0.40 |
| 32 | Alibaba Cloud / Qwen Team | 72B | — | — |
| 33 | xAI | — | — | — |
| 34 | Google | 12B | 131K | $0.05 / $0.10 |
| 34 | Amazon | — | 300K | $0.06 / $0.24 |
| 36 | Anthropic | — | 200K | $15.00 / $75.00 |
| 37 | Mistral AI | 24B | 32K | $0.07 / $0.14 |
| 37 | Alibaba Cloud / Qwen Team | 8B | 131K | $0.30 / $0.30 |
| 39 | Google | — | 2.1M | $2.50 / $10.00 |
| 40 | Alibaba Cloud / Qwen Team | 15B | — | — |
| 41 | Microsoft | 15B | 16K | $0.07 / $0.14 |
| 42 | — | 7B | — | — |
| 43 | Amazon | — | 128K | $0.03 / $0.14 |
| 43 | Mistral AI | 22B | — | — |
| 45 | — | 70B | 128K | $0.20 / $0.20 |
| 46 | Alibaba Cloud / Qwen Team | 8B | — | — |
| 47 | Alibaba Cloud / Qwen Team | 7B | — | — |
| 48 | Anthropic | — | 200K | $0.25 / $1.25 |
| 49 | — | 2B | — | — |
| 49 | Google | 8B | 32K | $20.00 / $40.00 |
More evaluations to explore
Related benchmarks in the same category
- GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.
- MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
- AIME 2025: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deduction and structured symbolic reasoning.
- MMLU: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.
- SWE-bench Verified: A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators, for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.
- Humanity's Last Exam (HLE): A multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.