AIME 2025
All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000 to 999. The set is used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deduction and structured symbolic reasoning.
Gemini 3 Pro from Google currently leads the AIME 2025 leaderboard of 108 evaluated AI models with a perfect score of 100.0%, followed by GPT-5.2 and GPT-5.2 Pro, both also at 100.0%.
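Because every AIME answer is an integer from 000 to 999, grading a model comes down to extracting its final claimed integer and checking an exact match against the answer key. The sketch below illustrates that idea only; it is not this leaderboard's actual harness, and the extraction regex and function names are illustrative assumptions.

```python
import re

def extract_aime_answer(model_output: str) -> str | None:
    """Pull the last standalone 1-3 digit integer out of a model's response.

    AIME answers are integers in 000-999, so exact-match grading only needs
    the final claimed integer, zero-padded to three digits.
    """
    matches = re.findall(r"\b\d{1,3}\b", model_output)
    if not matches:
        return None
    return matches[-1].zfill(3)  # e.g. "73" -> "073"

def score_run(predictions: list[str], answer_key: list[str]) -> float:
    """Fraction of the 30 problems answered exactly correctly."""
    correct = sum(
        extract_aime_answer(pred) == key.zfill(3)
        for pred, key in zip(predictions, answer_key)
    )
    return correct / len(answer_key)

# A perfect 30/30 run scores 1.0, i.e. 100.0% on the leaderboard.
```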
Progress Over Time
Interactive timeline showing model performance evolution on AIME 2025
AIME 2025 Leaderboard
| Rank | Organization | Parameters | Context | Cost (input / output) |
|---|---|---|---|---|
| 1 | Google | — | — | — |
| 1 | OpenAI | — | 400K | $1.75 / $14.00 |
| 1 | OpenAI | — | — | — |
| 1 | xAI | — | — | — |
| 1 | Moonshot AI | 1.0T | — | — |
| 6 | Anthropic | — | 1.0M | $5.00 / $25.00 |
| 7 | Google | — | 1.0M | $0.50 / $3.00 |
| 8 | Meituan | 560B | — | — |
| 8 | OpenAI | — | — | — |
| 10 | — | 32B | 262K | $0.06 / $0.24 |
| 11 | OpenAI | 21B | — | — |
| 12 | OpenAI | — | 400K | $1.25 / $10.00 |
| 13 | ByteDance | — | — | — |
| 14 | StepFun | 196B | 66K | $0.10 / $0.40 |
| 15 | Sarvam AI | 30B | — | — |
| 15 | Sarvam AI | 105B | — | — |
| 15 | OpenAI | — | 400K | $1.25 / $10.00 |
| 18 | Moonshot AI | 1.0T | 262K | $0.60 / $3.00 |
| 19 | DeepSeek | 685B | — | — |
| 20 | Zhipu AI | 358B | 205K | $0.60 / $2.20 |
| 21 | OpenAI | — | — | — |
| 21 | OpenAI | — | — | — |
| 23 | Xiaomi | 309B | — | — |
| 24 | OpenAI | — | 400K | $1.25 / $10.00 |
| 24 | OpenAI | — | 400K | $1.25 / $10.00 |
| 24 | OpenAI | — | 400K | $1.25 / $10.00 |
| 27 | Zhipu AI | 357B | 131K | $0.55 / $2.19 |
| 28 | xAI | — | 128K | $3.00 / $15.00 |
| 29 | DeepSeek | 685B | — | — |
| 29 | DeepSeek | 685B | 164K | $0.26 / $0.38 |
| 31 | ByteDance | — | — | — |
| 32 | LG AI Research | 236B | — | — |
| 33 | OpenAI | — | — | — |
| 34 | OpenAI | 117B | 131K | $0.10 / $0.50 |
| 35 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 |
| 36 | xAI | — | 2.0M | $0.20 / $0.50 |
| 37 | xAI | — | — | — |
| 38 | Zhipu AI | 30B | 128K | $0.07 / $0.40 |
| 39 | Inception | — | 128K | $0.25 / $0.75 |
| 39 | OpenAI | — | 400K | $0.25 / $2.00 |
| 41 | xAI | — | — | — |
| 42 | Meituan | 560B | — | — |
| 43 | — | 120B | — | — |
| 44 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.45 / $3.49 |
| 45 | DeepSeek | 685B | — | — |
| 46 | OpenAI | — | — | — |
| 47 | — | — | — | — |
| 48 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 49 | StepFun | 10B | — | — |
| 50 | DeepSeek | 671B | 131K | $0.55 / $2.19 |
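The Cost column lists API prices as input / output dollars; the figures look like the common per-1M-token convention, though that unit is an assumption here, as are the token counts below. Under those assumptions, a rough estimate of what a single 30-problem run might cost at one of the table's price points:

```python
# Rough cost estimate for one AIME 2025 run (30 problems) at an example price point.
# Prices taken from a table row; token counts are illustrative assumptions only.
input_price_per_mtok = 1.25    # $ per 1M input tokens (assumed unit)
output_price_per_mtok = 10.00  # $ per 1M output tokens (assumed unit)

problems = 30
avg_input_tokens = 500         # assumed prompt length per problem
avg_output_tokens = 20_000     # assumed reasoning + answer length per problem

cost = (
    problems * avg_input_tokens / 1e6 * input_price_per_mtok
    + problems * avg_output_tokens / 1e6 * output_price_per_mtok
)
print(f"Estimated cost per run: ${cost:.2f}")  # ~$6.02 under these assumptions
```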
More evaluations to explore
Related benchmarks in the same category
GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. The questions are Google-proof and extremely difficult; PhD-level experts reach about 65% accuracy.
MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding the multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
MMLU: The Massive Multitask Language Understanding benchmark, testing knowledge across 57 diverse subjects spanning STEM, the humanities, the social sciences, and professional domains.
SWE-bench Verified: A verified subset of 500 software engineering problems drawn from real GitHub issues and validated by human annotators, used to evaluate language models' ability to resolve real-world coding issues by generating patches for Python codebases.
Humanity's Last Exam (HLE): A multi-modal academic benchmark with 2,500 questions across mathematics, the humanities, and the natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.
LiveCodeBench: A holistic, contamination-free code evaluation benchmark for large language models. It continuously collects new problems from programming contests (LeetCode, AtCoder, Codeforces) and evaluates four scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates, enabling evaluation on problems released after a model's training cutoff.