MultiPL-E
MultiPL-E is a scalable, extensible system for translating unit-test-driven code generation benchmarks into multiple programming languages. It extends the HumanEval and MBPP Python benchmarks to 18 additional languages, enabling evaluation of neural code generation models across diverse programming paradigms and language features.
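In a MultiPL-E-style setup, the model receives a translated function prompt and must produce a completion that passes the problem's translated unit tests. A minimal, hypothetical sketch of that pass/fail check, shown in Python for brevity (the real harness compiles and sandboxes code per target language; `run_problem` and the toy problem below are illustrative, not part of the benchmark):

```python
def run_problem(prompt: str, completion: str, tests: str) -> bool:
    """Return True if the completed program passes its unit tests.

    Hypothetical sketch: the actual MultiPL-E harness executes each
    target language in an isolated sandbox, not via exec().
    """
    program = prompt + completion + "\n" + tests
    try:
        exec(program, {"__name__": "__multipl_e_check__"})
        return True
    except Exception:
        return False


# Toy HumanEval-style problem (illustrative only).
prompt = "def add(a, b):\n"
good_completion = "    return a + b\n"
bad_completion = "    return a - b\n"
tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0\n"
```

A completion counts as correct only if every assertion in the test suite runs cleanly; any failed assertion or runtime error marks it as a failure.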
Progress Over Time

*Interactive timeline showing model performance evolution on MultiPL-E, comparing open and proprietary models against the state-of-the-art frontier.*
MultiPL-E Leaderboard
13 models
| Rank | Organization | Params | Context | Cost (input / output) | License |
|---|---|---|---|---|---|
| 1 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.15 / $0.80 | — |
| 2 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 | — |
| 3 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.50 | — |
| 4 | Moonshot AI | 1.0T | 200K | $0.50 / $0.50 | — |
| 4 | Moonshot AI | 1.0T | — | — | — |
| 6 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 7 | Alibaba Cloud / Qwen Team | 73B | 131K | $0.35 / $0.40 | — |
| 8 | Alibaba Cloud / Qwen Team | 15B | — | — | — |
| 9 | Alibaba Cloud / Qwen Team | 8B | 131K | $0.30 / $0.30 | — |
| 10 | Alibaba Cloud / Qwen Team | 72B | — | — | — |
| 11 | Alibaba Cloud / Qwen Team | 235B | 128K | $0.10 / $0.10 | — |
| 12 | Alibaba Cloud / Qwen Team | 7B | — | — | — |
| 13 | Alibaba Cloud / Qwen Team | 8B | — | — | — |
FAQ
Common questions about MultiPL-E
**What is MultiPL-E?** MultiPL-E is a scalable, extensible system for translating unit-test-driven code generation benchmarks into multiple programming languages. It extends the HumanEval and MBPP Python benchmarks to 18 additional languages, enabling evaluation of neural code generation models across diverse programming paradigms and language features.
**Where can I find the MultiPL-E paper?** The MultiPL-E paper is available at https://arxiv.org/abs/2208.08227. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.
**How are models ranked on MultiPL-E?** The MultiPL-E leaderboard ranks 13 AI models by their performance on this benchmark. Currently, Qwen3-235B-A22B-Instruct-2507 by Alibaba Cloud / Qwen Team leads with a score of 0.879. The average score across all models is 0.759.
**What is the highest MultiPL-E score?** The highest MultiPL-E score is 0.879, achieved by Qwen3-235B-A22B-Instruct-2507 from Alibaba Cloud / Qwen Team.
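Scores like these are typically pass@k estimates: the fraction of problems for which at least one of k sampled completions passes all unit tests. A sketch of the standard unbiased estimator introduced with HumanEval (whether these particular leaderboard scores use k=1 or another k is an assumption not stated above):

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: given n sampled completions per problem, of
    which c pass the tests, estimate the probability that at least
    one of k randomly drawn samples passes."""
    if n - c < k:
        # Fewer than k failing samples exist, so any draw of k
        # samples must include at least one passing completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with n=2 samples of which c=1 passes, pass@1 is 0.5; averaging this per-problem estimate over the whole problem set yields the benchmark score.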
**How many models have been evaluated on MultiPL-E?** 13 models have been evaluated on the MultiPL-E benchmark, with 0 verified results and 13 self-reported results.
**What categories does MultiPL-E fall under?** MultiPL-E is categorized under general and language. The benchmark evaluates text models with multilingual support.