HMMT 2025
Harvard-MIT Mathematics Tournament 2025 - A prestigious student-organized mathematics competition for high school students featuring two tournaments (November 2025 at MIT and February 2026 at Harvard) with individual tests, team rounds, and guts rounds
Progress Over Time
Interactive timeline showing model performance evolution on HMMT 2025 (chart legend: state-of-the-art frontier; open vs. proprietary models).
HMMT 2025 Leaderboard
27 models • 0 verified
| Rank | Organization | Parameters | Context | Cost (input / output) |
|---|---|---|---|---|
| 1 | OpenAI | — | 400K | $21.00 / $168.00 |
| 2 | OpenAI | — | 400K | $1.75 / $14.00 |
| 3 | DeepSeek | 685B | — | — |
| 4 | Moonshot AI | 1.0T | — | — |
| 5 | Moonshot AI | 1.0T | 262K | $0.60 / $2.50 |
| 6 | Alibaba Cloud / Qwen Team | 397B | 262K | $0.60 / $3.60 |
| 7 | — | 120B | 262K | $0.10 / $0.50 |
| 8 | OpenAI | — | 400K | $1.25 / $10.00 |
| 8 | xAI | — | 2.0M | $0.20 / $0.50 |
| 10 | Alibaba Cloud / Qwen Team | 27B | — | — |
| 11 | Alibaba Cloud / Qwen Team | 122B | 262K | $0.40 / $3.20 |
| 12 | DeepSeek | 685B | — | — |
| 13 | Alibaba Cloud / Qwen Team | 35B | 262K | $0.25 / $2.00 |
| 14 | OpenAI | — | 400K | $0.25 / $2.00 |
| 15 | Sarvam AI | 105B | — | — |
| 16 | Xiaomi | 309B | 256K | $0.10 / $0.30 |
| 17 | DeepSeek | 685B | — | — |
| 18 | Alibaba Cloud / Qwen Team | 9B | — | — |
| 19 | DeepSeek | 671B | 131K | $0.50 / $2.15 |
| 20 | OpenAI | — | 400K | $0.05 / $0.40 |
| 21 | Alibaba Cloud / Qwen Team | 4B | — | — |
| 22 | Sarvam AI | 30B | — | — |
| 23 | Moonshot AI | 1.0T | 200K | $0.50 / $0.50 |
| 23 | Moonshot AI | 1.0T | — | — |
| 25 | OpenAI | — | 1.0M | $0.40 / $1.60 |
| 26 | DeepSeek | 671B | 164K | $0.27 / $1.00 |
| 27 | OpenAI | — | 1.0M | $2.00 / $8.00 |
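Each populated cost cell lists two dollar figures. Assuming these follow the common convention of input and output rates per million tokens (an assumption about this table, not something the leaderboard states), a small helper can parse a cell into a numeric pair:

```python
import re

def parse_cost(cell: str):
    """Parse a cost cell like '$0.60 $2.50' into (input, output) floats.

    Returns (None, None) for missing data ('—'). The input/output
    interpretation is an assumed convention for this table.
    """
    prices = [float(p) for p in re.findall(r"\$(\d+(?:\.\d+)?)", cell)]
    if len(prices) != 2:
        return (None, None)
    return (prices[0], prices[1])

print(parse_cost("$0.60 $2.50"))  # -> (0.6, 2.5)
print(parse_cost("—"))            # -> (None, None)
```

This makes the cells directly comparable, e.g. for sorting models by output price.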
FAQ
Common questions about HMMT 2025
More information about HMMT 2025 is available at http://web.mit.edu/HMMT/www/, the tournament's official website. Note that HMMT is a mathematics competition rather than a research paper; the site describes the tournament's format, rounds, and problem archives from which the benchmark problems are drawn.
The HMMT 2025 leaderboard ranks 27 AI models based on their performance on this benchmark. Currently, GPT-5.2 Pro by OpenAI leads with a score of 1.000. The average score across all models is 0.790.
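The reported average is the arithmetic mean of the per-model scores. As a minimal sketch (using hypothetical scores, since the full list of 27 per-model scores is not reproduced here):

```python
# Hypothetical scores for illustration; the actual 27 per-model
# scores appear on the leaderboard above.
scores = [1.000, 0.933, 0.900, 0.833, 0.667]

average = sum(scores) / len(scores)
print(round(average, 3))  # -> 0.867
```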
The highest HMMT 2025 score is 1.000, achieved by GPT-5.2 Pro from OpenAI.
27 models have been evaluated on the HMMT 2025 benchmark, with 0 verified results and 27 self-reported results.
HMMT 2025 is categorized under math. The benchmark evaluates text models.