French MMLU
French version of MMLU-Pro, a multilingual benchmark for evaluating language models' cross-lingual reasoning capabilities across 14 diverse domains including mathematics, physics, chemistry, law, engineering, psychology, and health.
Progress Over Time
[Interactive timeline showing model performance evolution on French MMLU, distinguishing open and proprietary models and tracing the state-of-the-art frontier.]
French MMLU Leaderboard
1 model
| Rank | Model | Organization | Params | Context | Cost | License | Score |
|---|---|---|---|---|---|---|---|
| 1 | Ministral 8B Instruct | Mistral AI | 8B | 128K | $0.10 / $0.10 | | 0.575 |
FAQ
Common questions about French MMLU
What is French MMLU?
French version of MMLU-Pro, a multilingual benchmark for evaluating language models' cross-lingual reasoning capabilities across 14 diverse domains including mathematics, physics, chemistry, law, engineering, psychology, and health.
Where can I read the French MMLU paper?
The French MMLU paper is available at https://arxiv.org/abs/2503.10497. It details the benchmark methodology, dataset creation, and evaluation criteria.
How are models ranked on the French MMLU leaderboard?
The French MMLU leaderboard ranks 1 AI model based on its performance on this benchmark. Currently, Ministral 8B Instruct by Mistral AI leads with a score of 0.575; with only one entry, the average score across all models is also 0.575.
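The scoring and ranking described above can be sketched in a few lines. This is an illustrative reconstruction, not the leaderboard's actual code: the model names and answer counts below are made up, and the benchmark score is assumed to be plain multiple-choice accuracy.

```python
from statistics import mean

def accuracy(num_correct: int, num_questions: int) -> float:
    """Fraction of multiple-choice questions answered correctly."""
    return num_correct / num_questions

# Each entry: (model name, correct answers, total questions).
# Counts are hypothetical placeholders, not official results.
entries = [
    ("model-a", 575, 1000),
    ("model-b", 412, 1000),
]

scores = {name: accuracy(c, n) for name, c, n in entries}

# Rank models by score, best first -- the order leaderboard rows appear in.
ranking = sorted(scores, key=scores.get, reverse=True)

# The "average score across all models" reported in the FAQ.
average = mean(scores.values())
```

With a single evaluated model, `ranking` has one entry and `average` equals that model's score, which is why both numbers above are 0.575.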
What is the highest French MMLU score?
The highest French MMLU score is 0.575, achieved by Ministral 8B Instruct from Mistral AI.
How many models have been evaluated on French MMLU?
1 model has been evaluated on the French MMLU benchmark, with 0 verified results and 1 self-reported result.
What categories does French MMLU cover?
French MMLU is categorized under general, language, legal, and reasoning. The benchmark evaluates text models with multilingual support.