MMMLU

The Multilingual Massive Multitask Language Understanding (MMMLU) dataset, released by OpenAI, features professionally translated MMLU test questions in 14 languages: Arabic, Bengali, German, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Swahili, Yoruba, and Chinese. It contains approximately 15,908 multiple-choice questions per language, covering 57 subjects.
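Like MMLU, MMMLU is scored as multiple-choice accuracy: the model picks one of four options (A–D) per question, and each language gets the fraction answered correctly. A minimal sketch of that scoring (the answer data below is a hypothetical placeholder, not real MMMLU items or the official harness):

```python
# Minimal multiple-choice accuracy scorer in the MMLU/MMMLU style.
# Predictions and gold answers are single letters "A"-"D".

def accuracy(predictions, gold):
    """Fraction of questions where the predicted letter matches the gold letter."""
    assert len(predictions) == len(gold), "one prediction per question"
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Per-language scores roll up the same way: one accuracy per language config.
by_language = {
    "FR_FR": accuracy(["A", "C", "B", "D"], ["A", "C", "D", "D"]),  # 3/4 = 0.75
    "JA_JP": accuracy(["B", "B"], ["B", "A"]),                      # 1/2 = 0.50
}
```

A benchmark-level score is then typically the mean over the 14 per-language accuracies.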

Paper

Progress Over Time

Interactive timeline showing model performance evolution on MMMLU

MMMLU Leaderboard

45 models, ranked by score, with columns for context window, cost (USD per 1M input / output tokens), and license. Vendors represented include Anthropic, OpenAI, Alibaba Cloud / Qwen Team, Mistral AI, and LG AI Research.
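The leaderboard's Cost column is quoted as USD per one million input / output tokens, so a request's price can be estimated from its token counts. A quick sketch (the rates below are illustrative, not any specific model's pricing):

```python
def estimated_cost(input_tokens: int, output_tokens: int,
                   usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Estimate request cost in USD from per-million-token rates."""
    return (input_tokens / 1_000_000) * usd_per_m_input \
         + (output_tokens / 1_000_000) * usd_per_m_output

# e.g. 1,000 input and 500 output tokens at $2.50 / $15.00 per 1M tokens:
cost = estimated_cost(1_000, 500, 2.50, 15.00)  # 0.0025 + 0.0075 = $0.01
```

Output tokens usually dominate the bill at these asymmetric rates, which is why the two prices are listed separately.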

FAQ

Common questions about MMMLU

What is MMMLU?
MMMLU is OpenAI's professionally translated version of the MMLU test set, covering 14 languages and 57 subjects with approximately 15,908 multiple-choice questions per language.

Where can I read the paper?
The underlying MMLU paper is available at https://arxiv.org/abs/2009.03300. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.

Which model leads the MMMLU leaderboard?
The leaderboard ranks 45 AI models by their performance on this benchmark. Currently, Claude Mythos Preview by Anthropic leads with a score of 0.927; the average score across all models is 0.830.

What is the highest MMMLU score?
The highest MMMLU score is 0.927, achieved by Claude Mythos Preview from Anthropic.

How many models have been evaluated?
45 models have been evaluated on the MMMLU benchmark, with 0 verified results and 45 self-reported results.

What does MMMLU measure?
MMMLU is categorized under general, language, math, and reasoning. The benchmark evaluates text models with multilingual support.