AIME 2024
American Invitational Mathematics Examination 2024, consisting of 30 challenging mathematical reasoning problems from the AIME I and AIME II competitions. Each problem requires an integer answer between 0 and 999 and tests advanced mathematical reasoning across algebra, geometry, combinatorics, and number theory. The set is used as a benchmark for evaluating the mathematical reasoning capabilities of large language models at Olympiad-level difficulty.
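Because every answer is a bare integer from 0 to 999, scoring reduces to extracting a single number from the model's completion and comparing it to the reference by exact match. The sketch below illustrates the idea in Python; the `extract_aime_answer` and `score` helpers, the `\boxed{}` convention, and the regex heuristics are illustrative assumptions, not the grading code used by any particular leaderboard.

```python
import re

def extract_aime_answer(completion: str) -> int | None:
    """Pull a 0-999 integer answer out of a model completion.

    Prefer an explicit \\boxed{...} answer if present, otherwise fall
    back to the last standalone 1-3 digit number in the text.
    """
    boxed = re.findall(r"\\boxed\{(\d{1,3})\}", completion)
    candidates = boxed or re.findall(r"\b\d{1,3}\b", completion)
    if not candidates:
        return None
    value = int(candidates[-1])
    return value if 0 <= value <= 999 else None

def score(predictions: list[str], references: list[int]) -> float:
    """Exact-match accuracy over the problem set (30 problems for AIME 2024)."""
    correct = sum(
        extract_aime_answer(p) == r for p, r in zip(predictions, references)
    )
    return correct / len(references)

if __name__ == "__main__":
    preds = ["The total is therefore \\boxed{073}.", "So the answer is 204."]
    refs = [73, 204]
    print(score(preds, refs))  # 1.0
```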
Grok-3 Mini from xAI currently leads the AIME 2024 leaderboard with a score of 95.8% across 53 evaluated AI models, followed by o4-mini at 93.4% and Grok-3 at 93.3%.
Progress Over Time
Interactive timeline showing model performance evolution on AIME 2024
AIME 2024 Leaderboard
| Rank | Organization | Parameters | Context | Cost (input / output, per 1M tokens) | License |
|---|---|---|---|---|---|
| 1 | xAI | — | — | — | — |
| 2 | OpenAI | — | — | — | — |
| 3 | xAI | — | 128K | $3.00 / $15.00 | — |
| 3 | Meituan | 560B | — | — | — |
| 5 | Google | — | 1.0M | $1.25 / $10.00 | — |
| 6 | OpenAI | — | — | — | — |
| 7 | DeepSeek | 671B | 131K | $0.55 / $2.19 | — |
| 8 | Zhipu AI | 355B | — | — | — |
| 9 | Mistral AI | 14B | — | — | — |
| 10 | Zhipu AI | 106B | — | — | — |
| 11 | Google | — | 1.0M | $0.30 / $2.50 | — |
| 12 | OpenAI | — | — | — | — |
| 13 | DeepSeek | 671B | — | — | — |
| 13 | DeepSeek | 71B | — | — | — |
| 15 | OpenAI | — | — | — | — |
| 15 | Mistral AI | 8B | — | — | — |
| 15 | MiniMax | 456B | — | — | — |
| 18 | Alibaba Cloud / Qwen Team | 235B | — | — | — |
| 19 | OpenBMB | 9B | — | — | — |
| 20 | DeepSeek | 33B | — | — | — |
| 20 | DeepSeek | 8B | — | — | — |
| 20 | MiniMax | 456B | — | — | — |
| 23 | Alibaba Cloud / Qwen Team | 33B | 128K | $0.10 / $0.30 | — |
| 24 | Microsoft | 14B | — | — | — |
| 25 | — | 8B | — | — | — |
| 25 | — | 8B | — | — | — |
| 27 | Alibaba Cloud / Qwen Team | 31B | 128K | $0.10 / $0.30 | — |
| 28 | DeepSeek | 15B | — | — | — |
| 28 | Anthropic | — | — | — | — |
| 28 | DeepSeek | 8B | — | — | — |
| 31 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 32 | Mistral AI | 3B | — | — | — |
| 32 | Moonshot AI | — | — | — | — |
| 34 | Microsoft | 14B | — | — | — |
| 35 | OpenAI | — | — | — | — |
| 36 | Mistral AI | 24B | — | — | — |
| 37 | — | — | — | — | — |
| 38 | Meituan | 69B | 256K | $0.10 / $0.40 | — |
| 39 | Moonshot AI | 1.0T | — | — | — |
| 40 | Mistral AI | 24B | — | — | — |
| 41 | Moonshot AI | 1.0T | — | — | — |
| 41 | Moonshot AI | 1.0T | — | — | — |
| 43 | DeepSeek | 671B | — | — | — |
| 44 | DeepSeek | 671B | 164K | $0.28 / $1.14 | — |
| 45 | DeepSeek | 2B | — | — | — |
| 46 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 47 | OpenAI | — | 1.0M | $0.40 / $1.60 | — |
| 48 | OpenAI | — | 1.0M | $2.00 / $8.00 | — |
| 49 | OpenAI | — | — | — | — |
| 50 | DeepSeek | 671B | — | — | — |
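Where a Cost is shown, it is the provider's US-dollar price per million input / output tokens. A rough per-run cost for the 30-problem set can be estimated from those two rates plus assumed token counts; in the sketch below the prompt and reasoning-trace lengths are placeholder assumptions, not measured values.

```python
def estimated_run_cost(
    input_price_per_m: float,   # USD per 1M input tokens, e.g. 3.00
    output_price_per_m: float,  # USD per 1M output tokens, e.g. 15.00
    n_problems: int = 30,
    input_tokens_per_problem: int = 300,     # assumed prompt length
    output_tokens_per_problem: int = 8_000,  # assumed reasoning-trace length
) -> float:
    """Back-of-the-envelope USD cost for one pass over the 30 problems."""
    input_cost = n_problems * input_tokens_per_problem * input_price_per_m / 1e6
    output_cost = n_problems * output_tokens_per_problem * output_price_per_m / 1e6
    return input_cost + output_cost

# Example: a model priced at $3.00 / $15.00 per 1M tokens, as in the table above
print(f"${estimated_run_cost(3.00, 15.00):.2f}")  # ≈ $3.63
```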
More evaluations to explore
Related benchmarks in the same category
GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.
MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
AIME 2025: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing Olympiad-level mathematical reasoning with integer answers from 000 to 999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deductions and structured symbolic reasoning.
MMLU: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.
SWE-bench Verified: A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.
Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions