SuperGPQA
SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 26,529 multiple-choice questions spanning 13 broad disciplinary areas, including Engineering, Medicine, Science, and Law, as well as specialized fields in light industry, agriculture, and service-oriented domains. It was built with a Human-LLM collaborative filtering mechanism involving over 80 expert annotators, producing challenging questions that assess graduate-level knowledge and reasoning capabilities.
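Because every SuperGPQA item is a multiple-choice question with a single correct option, the leaderboard scores below are plain accuracy. The following is a minimal sketch of such an evaluation loop; the Hugging Face dataset id `m-a-p/SuperGPQA`, the field names `question`, `options`, and `answer_letter`, and the `ask_model` callable are all illustrative assumptions, not details confirmed by this page.

```python
# Minimal sketch of scoring a model on SuperGPQA-style multiple-choice data.
# Assumptions: the dataset id and field names below are hypothetical;
# `ask_model` is a caller-supplied function that returns a single letter ("A").
from datasets import load_dataset

def evaluate(ask_model, split="train", limit=100):
    ds = load_dataset("m-a-p/SuperGPQA", split=split)
    correct, n = 0, 0
    for row in ds.select(range(min(limit, len(ds)))):
        # Render the options as "A. ...", "B. ..." for the prompt.
        letters = [chr(ord("A") + i) for i in range(len(row["options"]))]
        prompt = row["question"] + "\n" + "\n".join(
            f"{letter}. {option}"
            for letter, option in zip(letters, row["options"])
        )
        prediction = ask_model(prompt)  # e.g. "C"
        correct += prediction.strip() == row["answer_letter"]
        n += 1
    return correct / n  # accuracy in [0, 1], e.g. 0.716
```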
Qwen3.6 Plus from Alibaba Cloud / Qwen Team currently leads the SuperGPQA leaderboard with a score of 71.6% across the 30 evaluated models, followed by Qwen3.5-397B-A17B at 70.4% and Qwen3.5-122B-A10B at 67.1%.
Progress Over Time
[Interactive timeline showing model performance evolution on SuperGPQA]
SuperGPQA Leaderboard
| Rank | Model | Organization | Params | Context | Cost (in / out, per 1M tokens) |
|---|---|---|---|---|---|
| 1 | Qwen3.6 Plus | Alibaba Cloud / Qwen Team | — | 1.0M | $0.50 / $3.00 |
| 2 | Qwen3.5-397B-A17B | Alibaba Cloud / Qwen Team | 397B | 262K | $0.60 / $3.60 |
| 3 | Qwen3.5-122B-A10B | Alibaba Cloud / Qwen Team | 122B | 262K | $0.40 / $3.20 |
| 4 | — | Alibaba Cloud / Qwen Team | 28B | 262K | $0.60 / $3.60 |
| 5 | — | Alibaba Cloud / Qwen Team | 27B | 262K | $0.30 / $2.40 |
| 6 | — | Alibaba Cloud / Qwen Team | 1.0T | 256K | $0.50 / $5.00 |
| 7 | — | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 |
| 8 | — | Alibaba Cloud / Qwen Team | 35B | — | — |
| 9 | — | Alibaba Cloud / Qwen Team | 236B | 262K | $0.45 / $3.49 |
| 10 | — | Alibaba Cloud / Qwen Team | 35B | 262K | $0.25 / $2.00 |
| 11 | — | Alibaba Cloud / Qwen Team | 235B | 262K | $0.15 / $0.80 |
| 12 | — | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 13 | — | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.49 |
| 14 | — | Alibaba Cloud / Qwen Team | 33B | — | — |
| 15 | — | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 16 | — | Alibaba Cloud / Qwen Team | 9B | — | — |
| 17 | — | Moonshot AI | 1.0T | 200K | $0.50 / $0.50 |
| 17 | — | Moonshot AI | 1.0T | — | — |
| 19 | — | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 |
| 20 | — | Alibaba Cloud / Qwen Team | 33B | — | — |
| 21 | — | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 |
| 22 | — | Alibaba Cloud / Qwen Team | 4B | — | — |
| 23 | — | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 |
| 24 | — | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 |
| 25 | — | Moonshot AI | 1.0T | — | — |
| 26 | — | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 |
| 27 | — | Alibaba Cloud / Qwen Team | 235B | 128K | $0.10 / $0.10 |
| 28 | — | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 |
| 29 | — | Alibaba Cloud / Qwen Team | 2B | — | — |
| 30 | — | Alibaba Cloud / Qwen Team | 800M | — | — |
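For rough budgeting, the cost column can be turned into a per-run estimate. Below is a minimal sketch, assuming the prices are USD per million input / output tokens (a common leaderboard convention, not stated on this page) and using hypothetical average token counts per question.

```python
# Back-of-the-envelope cost of one full SuperGPQA run.
# Assumptions: prices are USD per 1M input / output tokens, and the
# per-question token counts (500 in, 1,000 out) are hypothetical averages.
def run_cost(input_price, output_price, input_tokens, output_tokens):
    """Total USD for a run, given per-1M-token prices and total token counts."""
    return (input_tokens / 1e6) * input_price + (output_tokens / 1e6) * output_price

questions = 26_529                    # full SuperGPQA question count
cost = run_cost(0.50, 3.00,           # rank-1 pricing from the table
                questions * 500,      # assumed prompt tokens
                questions * 1_000)    # assumed completion tokens
print(f"${cost:,.2f}")                # ≈ $86.22
```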
More evaluations to explore
Related benchmarks in the same category
**GPQA:** A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.

**MMLU-Pro:** A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding the multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop relative to the original MMLU.

**AIME 2025:** All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deduction and structured symbolic reasoning.

**MMLU:** The Massive Multitask Language Understanding benchmark, testing knowledge across 57 diverse subjects including STEM, the humanities, the social sciences, and professional domains.

**SWE-bench Verified:** A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators, for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.

**Humanity's Last Exam:** Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, the humanities, and the natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.