
HMMT 2025

Harvard-MIT Mathematics Tournament 2025 - a prestigious, student-organized mathematics competition for high school students, held as two tournaments (November 2025 at Harvard and February 2026 at MIT), each with individual tests, a team round, and a guts round.

Paper

http://web.mit.edu/HMMT/www/ (official tournament website)

Progress Over Time

[Interactive timeline showing model performance evolution on HMMT 2025. Legend: state-of-the-art frontier, open models, proprietary models.]

HMMT 2025 Leaderboard

27 models • 0 verified
| Rank | Organization | Parameters | Context | Input cost (per 1M tokens) | Output cost (per 1M tokens) |
|------|--------------|------------|---------|----------------------------|-----------------------------|
| 1 | OpenAI | — | 400K | $21.00 | $168.00 |
| 2 | OpenAI | — | 400K | $1.75 | $14.00 |
| 3 | — | 685B | — | — | — |
| 4 | — | 1.0T | — | — | — |
| 5 | Moonshot AI | 1.0T | 262K | $0.60 | $2.50 |
| 6 | Alibaba Cloud / Qwen Team | 397B | 262K | $0.60 | $3.60 |
| 7 | — | 120B | 262K | $0.10 | $0.50 |
| 8 | OpenAI | — | 400K | $1.25 | $10.00 |
| 8 | — | — | 2.0M | $0.20 | $0.50 |
| 10 | Alibaba Cloud / Qwen Team | 27B | — | — | — |
| 11 | Alibaba Cloud / Qwen Team | 122B | 262K | $0.40 | $3.20 |
| 12 | — | 685B | — | — | — |
| 13 | Alibaba Cloud / Qwen Team | 35B | 262K | $0.25 | $2.00 |
| 14 | — | — | 400K | $0.25 | $2.00 |
| 15 | Sarvam AI | 105B | — | — | — |
| 16 | — | 309B | 256K | $0.10 | $0.30 |
| 17 | — | 685B | — | — | — |
| 18 | Alibaba Cloud / Qwen Team | 9B | — | — | — |
| 19 | — | 671B | 131K | $0.50 | $2.15 |
| 20 | — | — | 400K | $0.05 | $0.40 |
| 21 | Alibaba Cloud / Qwen Team | 4B | — | — | — |
| 22 | Sarvam AI | 30B | — | — | — |
| 23 | Moonshot AI | 1.0T | 200K | $0.50 | $0.50 |
| 23 | — | 1.0T | — | — | — |
| 25 | — | — | 1.0M | $0.40 | $1.60 |
| 26 | — | 671B | 164K | $0.27 | $1.00 |
| 27 | OpenAI | — | 1.0M | $2.00 | $8.00 |
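As a rough guide to reading the cost columns, here is a minimal sketch of how a per-model evaluation cost could be estimated. It assumes the prices above are USD per 1 million input/output tokens, and the token counts (30 problems, roughly 2K prompt and 20K completion tokens each) are illustrative placeholders rather than figures from this page.

```python
def run_cost(input_price_per_m: float, output_price_per_m: float,
             input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one benchmark run, assuming prices are per 1M tokens."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Illustrative example using the rank-2 OpenAI entry ($1.75 in / $14.00 out)
# and made-up token counts: 30 problems, ~2K prompt and ~20K completion tokens each.
print(f"Estimated run cost: ${run_cost(1.75, 14.00, 30 * 2_000, 30 * 20_000):.2f}")
```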

FAQ

Common questions about HMMT 2025

The Harvard-MIT Mathematics Tournament (HMMT) 2025 is a prestigious, student-organized mathematics competition for high school students, held as two tournaments (November 2025 at Harvard and February 2026 at MIT), each with individual tests, a team round, and a guts round.
Rather than a research paper, the primary reference for HMMT 2025 is the official tournament website: http://web.mit.edu/HMMT/www/.
The HMMT 2025 leaderboard ranks 27 AI models based on their performance on this benchmark. Currently, GPT-5.2 Pro by OpenAI leads with a score of 1.000. The average score across all models is 0.790.
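As a minimal illustration of how these summary statistics are derived, the sketch below computes a top score and an average from per-model scores, assuming each score is the fraction of HMMT 2025 problems answered correctly; the model names and values are placeholders, not the actual leaderboard data.

```python
# Placeholder scores (fraction of problems solved); not the real leaderboard values.
scores = {"model_a": 1.000, "model_b": 0.933, "model_c": 0.867}

best_model, best_score = max(scores.items(), key=lambda kv: kv[1])
average = sum(scores.values()) / len(scores)

print(f"Top model: {best_model} ({best_score:.3f})")
print(f"Average score across {len(scores)} models: {average:.3f}")
```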
The highest HMMT 2025 score is 1.000, achieved by GPT-5.2 Pro from OpenAI.
27 models have been evaluated on the HMMT 2025 benchmark, with 0 verified results and 27 self-reported results.
HMMT 2025 is categorized under math. The benchmark evaluates text models.