Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

Claude Mythos Preview from Anthropic currently leads the Humanity's Last Exam leaderboard with a score of 0.647, the highest of the 74 evaluated AI models.

Anthropic's Claude Mythos Preview leads with 64.7%, followed by Meta's Muse Spark at 58.4% and OpenAI's GPT-5.5 Pro at 57.2%.
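As a rough illustration of how a leaderboard score like this is produced, the sketch below loads the benchmark questions and computes exact-match accuracy for a candidate model. The Hugging Face dataset id (cais/hle), the field names, and the plain string-matching grader are assumptions made for illustration; the official evaluation grades free-form answers with a stricter, judge-based setup, so treat this only as a sketch.

```python
# Minimal scoring sketch, not the official HLE harness. The dataset id
# "cais/hle" and the "question"/"answer" field names are assumptions,
# and real HLE grading uses a judge model rather than string matching.
from datasets import load_dataset


def dummy_model(question: str) -> str:
    """Stand-in for a real model call (e.g. an API request)."""
    return ""


def normalize(text: str) -> str:
    return " ".join(text.strip().lower().split())


def hle_accuracy(answer_fn, split: str = "test") -> float:
    ds = load_dataset("cais/hle", split=split)  # assumed dataset id
    correct = sum(
        normalize(answer_fn(row["question"])) == normalize(row["answer"])
        for row in ds
    )
    return correct / len(ds)  # fraction of the 2,500 questions answered correctly


# Example: hle_accuracy(dummy_model) would score a (useless) empty-answer model.
```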

Progress Over Time

Interactive timeline showing model performance evolution on Humanity's Last Exam, tracing the state-of-the-art frontier and distinguishing open from proprietary models.

Humanity's Last Exam Leaderboard

The full leaderboard lists all 74 evaluated models ranked by HLE score, with columns for context window, cost (input / output), and license. Providers represented include OpenAI, Zhipu AI, Moonshot AI, and Alibaba Cloud / Qwen Team. Results are paginated, showing 50 models per page.

FAQ

Common questions about Humanity's Last Exam.

What is the Humanity's Last Exam benchmark?

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

What is the Humanity's Last Exam leaderboard?

The Humanity's Last Exam leaderboard ranks 74 AI models based on their performance on this benchmark. Currently, Claude Mythos Preview by Anthropic leads with a score of 0.647. The average score across all models is 0.278.

What is the highest Humanity's Last Exam score?

The highest Humanity's Last Exam score is 0.647, achieved by Claude Mythos Preview from Anthropic.

How many models are evaluated on Humanity's Last Exam?

74 models have been evaluated on the Humanity's Last Exam benchmark; all 74 results are self-reported, with none independently verified.

Where can I find the Humanity's Last Exam paper?

The Humanity's Last Exam paper is available at https://arxiv.org/abs/2501.14249. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does Humanity's Last Exam cover?

Humanity's Last Exam is categorized under math, reasoning, and vision. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

GPQA

A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.

reasoning
214 models
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

math
119 models
AIME 2025

All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deductions and structured symbolic reasoning; a minimal grading sketch follows below.

math
108 models
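
Because every AIME answer is an integer from 000 to 999, grading a model on this benchmark reduces to extracting one final integer from each response and comparing it with the answer key. The sketch below assumes a plain trailing integer and an illustrative extraction regex; real harnesses usually ask for a boxed answer and parse that instead.

```python
import re


def extract_aime_answer(text: str) -> int | None:
    """Pull the last standalone 1-3 digit integer out of a model response.

    Illustrative assumption: the model ends with a bare integer; official
    harnesses typically request a boxed answer and parse that instead.
    """
    matches = re.findall(r"\b\d{1,3}\b", text)
    return int(matches[-1]) if matches else None


def aime_accuracy(responses: list[str], answer_key: list[int]) -> float:
    """Exact-match accuracy over the 30 problems."""
    correct = sum(
        extract_aime_answer(r) == a for r, a in zip(responses, answer_key)
    )
    return correct / len(answer_key)
```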
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

math
99 models
SWE-Bench Verified

A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases; a rough apply-and-test sketch follows below.

reasoning
89 models
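
As a rough illustration of the apply-and-test loop referenced above, the sketch below checks out the issue's base commit, applies a model-generated unified diff, and reruns the tests. The local checkout and single test command are assumptions; the official SWE-Bench harness runs containerized per-instance environments and separates fail-to-pass from pass-to-pass tests.

```python
import subprocess
from pathlib import Path


def apply_and_test(repo: Path, base_commit: str, patch: str,
                   test_cmd: list[str]) -> bool:
    """Apply a model-generated unified diff at base_commit and rerun the tests.

    Illustrative sketch only; the official SWE-Bench harness evaluates in
    isolated per-instance containers rather than a shared local checkout.
    """
    # Reset the working tree to the commit the issue was filed against.
    subprocess.run(["git", "checkout", "--force", base_commit],
                   cwd=repo, check=True)
    # Feed the diff to `git apply` via stdin; non-zero means it did not apply.
    applied = subprocess.run(["git", "apply", "-"], cwd=repo,
                             input=patch, text=True)
    if applied.returncode != 0:
        return False
    # The issue counts as resolved only if the relevant tests now pass.
    return subprocess.run(test_cmd, cwd=repo).returncode == 0
```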
LiveCodeBench

LiveCodeBench is a holistic and contamination-free evaluation benchmark for large language models for code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff; a small date-filtering sketch follows below.

reasoning
71 models
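
Because each LiveCodeBench problem is annotated with a release date, the contamination-free evaluation described above comes down to filtering out anything released before a model's training cutoff. The record layout and field names below are illustrative assumptions, not the dataset's actual schema.

```python
from datetime import date

# Toy problem records; the "release_date" field name is an illustrative
# assumption standing in for the dataset's real release-date annotation.
problems = [
    {"title": "two-sum variant", "release_date": "2024-03-15"},
    {"title": "segment tree query", "release_date": "2024-11-02"},
]


def after_cutoff(candidates: list[dict], cutoff: date) -> list[dict]:
    """Keep only problems released strictly after the model's training cutoff."""
    return [
        p for p in candidates
        if date.fromisoformat(p["release_date"]) > cutoff
    ]


# A model trained through June 2024 is evaluated only on the later problem.
print(after_cutoff(problems, date(2024, 6, 1)))
```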