Winogrande
WinoGrande: An Adversarial Winograd Schema Challenge at Scale. A large-scale dataset of 44,000 pronoun resolution problems designed to test machine commonsense reasoning. It uses adversarial filtering to reduce spurious biases, giving a more robust measure of whether AI systems truly understand commonsense or merely exploit statistical shortcuts. At the time of the benchmark's release, the best AI methods achieved 59.4-79.1% accuracy, well below human performance of 94.0%.
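Each WinoGrande item is a fill-in-the-blank sentence with two candidate answers. The sketch below is a minimal illustration, assuming the dataset is hosted on the Hugging Face Hub under the `winogrande` identifier with a `winogrande_xl` configuration and `sentence`/`option1`/`option2`/`answer` fields; adjust those names for your environment.

```python
# Minimal sketch: load WinoGrande and score a random-guess baseline.
# Assumes the Hugging Face Hub identifier "winogrande" and the
# "winogrande_xl" configuration; field names may differ on other mirrors.
import random

from datasets import load_dataset

dataset = load_dataset("winogrande", "winogrande_xl", split="validation")

correct = 0
for example in dataset:
    # Each item provides a sentence with a blank ("_"), two candidate
    # fillers, and the gold answer ("1" or "2").
    sentence = example["sentence"]
    options = [example["option1"], example["option2"]]
    gold = int(example["answer"])

    # A real evaluation would score each filled-in sentence with a model
    # (e.g., by comparing log-likelihoods); here we simply guess.
    prediction = random.choice([1, 2])
    correct += int(prediction == gold)

accuracy = correct / len(dataset)
print(f"Accuracy: {accuracy:.1%}")  # ~50% for random guessing on a binary task
```

A real submission would replace the random guess with model scoring, typically by substituting each option into the blank and comparing the resulting sentence likelihoods.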
GPT-4 from OpenAI currently leads the Winogrande leaderboard with 87.5% accuracy across 21 evaluated models, followed by Command R+ at 85.4% and Qwen2 72B Instruct at 85.1%.
Progress Over Time
Interactive timeline showing model performance evolution on Winogrande
Winogrande Leaderboard
| Rank | Organization | Parameters | Context | Cost | License |
|---|---|---|---|---|---|
| 1 | OpenAI | — | 33K | $30.00 / $60.00 | — |
| 2 | Cohere | 104B | — | — | — |
| 3 | Alibaba Cloud / Qwen Team | 72B | — | — | — |
| 4 | — | 70B | — | — | — |
| 5 | Google | 27B | — | — | — |
| 6 | Nous Research | 70B | — | — | — |
| 7 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 8 | Microsoft | 60B | — | — | — |
| 9 | Alibaba Cloud / Qwen Team | 32B | — | — | — |
| 10 | Google | 9B | — | — | — |
| 11 | Mistral AI | 12B | — | — | — |
| 12 | Mistral AI | 8B | — | — | — |
| 13 | — | 8B | — | — | — |
| 14 | Alibaba Cloud / Qwen Team | 7B | — | — | — |
| 15 | — | 2B | — | — | — |
| 15 | Google | 8B | — | — | — |
| 17 | Microsoft | 4B | — | — | — |
| 18 | Microsoft | 4B | — | — | — |
| 19 | Google | 8B | — | — | — |
| 19 | — | 2B | — | — | — |
| 21 | Baidu | 21B | — | — | — |
More evaluations to explore
Related benchmarks in the same category
GPQA: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are "Google-proof" and extremely difficult, with PhD experts reaching about 65% accuracy.
MMLU-Pro: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to the original MMLU.
AIME 2025: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex mathematical problems requiring multi-step logical deductions and structured symbolic reasoning.
MMLU: Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.
SWE-bench Verified: A verified subset of 500 software engineering problems from real GitHub issues, validated by human annotators for evaluating language models' ability to resolve real-world coding issues by generating patches for Python codebases.
Humanity's Last Exam (HLE): A multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and the natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.