
MathArena Apex

MathArena Apex is a challenging math contest benchmark featuring some of the most difficult mathematical problems, designed to test the advanced reasoning and problem-solving abilities of AI models. It focuses on olympiad-level mathematics and complex multi-step mathematical reasoning.

Progress Over Time

[Interactive timeline of model performance on MathArena Apex over time, distinguishing open and proprietary models and tracing the state-of-the-art frontier.]

MathArena Apex Leaderboard

1 model • 0 verified

1. Gemini 3 Pro (Google): 0.234 (self-reported)

FAQ

Common questions about MathArena Apex

What is MathArena Apex?
MathArena Apex is a challenging math contest benchmark featuring some of the most difficult mathematical problems, designed to test the advanced reasoning and problem-solving abilities of AI models. It focuses on olympiad-level mathematics and complex multi-step mathematical reasoning.

How are models ranked on the leaderboard?
The MathArena Apex leaderboard ranks 1 AI model based on its performance on this benchmark. Currently, Gemini 3 Pro by Google leads with a score of 0.234. The average score across all models is 0.234.

What is the highest MathArena Apex score?
The highest MathArena Apex score is 0.234, achieved by Gemini 3 Pro from Google.

How many models have been evaluated?
1 model has been evaluated on the MathArena Apex benchmark, with 0 verified results and 1 self-reported result.

What categories does MathArena Apex cover?
MathArena Apex is categorized under math and reasoning. The benchmark evaluates text models.