Terminal-Bench
Terminal-Bench is a benchmark for testing AI agents in real terminal environments. It evaluates how well agents handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, data science workflows, and security tasks such as working with vulnerabilities. The benchmark consists of a dataset of ~100 hand-crafted, human-verified tasks and an execution harness that connects language models to a terminal sandbox.
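To make the harness concept concrete, here is a minimal sketch of the agent-terminal loop such a harness implements. This is not Terminal-Bench's actual code: the `query_model` stub, the sample task, and the verification command are invented placeholders, and a real harness would run everything inside an isolated container rather than on the host.

```python
import subprocess

def query_model(transcript: str) -> str:
    """Placeholder for an LLM call. A real harness would send the
    transcript to a model API and parse out the next shell command."""
    # Canned two-step behavior so this sketch runs end to end (assumption).
    if "printf" in transcript:
        return "DONE"
    return "printf 'hello' > /tmp/out.txt"

def run_in_sandbox(command: str) -> str:
    """Execute one command and capture its output. A real harness would
    execute this inside a sandboxed container, not on the host shell."""
    result = subprocess.run(
        ["bash", "-lc", command], capture_output=True, text=True, timeout=60
    )
    return result.stdout + result.stderr

def run_task(instruction: str, test_command: str, max_steps: int = 10) -> bool:
    """Core agent loop: the model proposes a command, the harness executes
    it, the output is appended to the transcript, and the loop repeats
    until the model signals completion or the step budget runs out."""
    transcript = f"Task: {instruction}\n"
    for _ in range(max_steps):
        command = query_model(transcript)
        if command.strip() == "DONE":
            break
        output = run_in_sandbox(command)
        transcript += f"$ {command}\n{output}\n"
    # Tasks pass or fail based on a verification script, not model self-report.
    return subprocess.run(["bash", "-lc", test_command]).returncode == 0

if __name__ == "__main__":
    ok = run_task(
        instruction="Write 'hello' to /tmp/out.txt",
        test_command="grep -q hello /tmp/out.txt",
    )
    print("task passed" if ok else "task failed")
```

The key design point this sketch illustrates is that success is judged by an external test command run after the agent finishes, which is what makes the tasks human-verifiable.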
Of the 23 AI models evaluated, Claude Sonnet 4.5 from Anthropic currently leads the Terminal-Bench leaderboard at 50.0%, followed by MiniMax M2.1 at 47.9% and Moonshot AI's Kimi K2-Thinking-0905 at 47.1%.
Progress Over Time
*Interactive timeline of model performance evolution on Terminal-Bench (not reproduced here).*
Terminal-Bench Leaderboard
| # | Organization | Params | Context | Cost ($ / 1M tokens, input / output) |
|---|---|---|---|---|
| 1 | Anthropic | — | 200K | $3.00 / $15.00 |
| 2 | MiniMax | 230B | 1.0M | $0.30 / $1.20 |
| 3 | Moonshot AI | 1.0T | — | — |
| 4 | MiniMax | 230B | 1.0M | $0.30 / $1.20 |
| 5 | Anthropic | — | — | — |
| 6 | Anthropic | — | 200K | $1.00 / $5.00 |
| 7 | Zhipu AI | 357B | — | — |
| 8 | Meituan | 560B | 128K | $0.30 / $1.20 |
| 9 | Anthropic | — | — | — |
| 10 | DeepSeek | 685B | — | — |
| 11 | Zhipu AI | 355B | — | — |
| 12 | Anthropic | — | — | — |
| 13 | Anthropic | — | — | — |
| 14 | Meituan | 69B | 256K | $0.10 / $0.40 |
| 15 | Zhipu AI | 358B | 205K | $0.60 / $2.20 |
| 16 | DeepSeek | 671B | — | — |
| 17 | Xiaomi | 309B | — | — |
| 18 | Zhipu AI | 106B | — | — |
| 18 | Moonshot AI | 1.0T | — | — |
| 20 | — | 120B | — | — |
| 21 | Moonshot AI | 1.0T | — | — |
| 22 | — | 32B | 262K | $0.06 / $0.24 |
| 23 | DeepSeek | 671B | 131K | $0.55 / $2.19 |
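As a worked example of reading the cost column, the snippet below estimates the API price of a full benchmark run. The per-token prices come from the leaderboard's first row; the token counts are hypothetical placeholders, since actual usage varies widely per task and per agent.

```python
# Estimate the API cost of a benchmark run from per-million-token prices.
INPUT_PRICE_PER_M = 3.00    # $ per 1M input tokens (leaderboard row 1)
OUTPUT_PRICE_PER_M = 15.00  # $ per 1M output tokens (leaderboard row 1)

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Total dollar cost for one run given total token usage."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_M + \
           (output_tokens / 1e6) * OUTPUT_PRICE_PER_M

# Assumed usage: ~100 tasks at ~50K input and ~5K output tokens each.
print(f"${run_cost(100 * 50_000, 100 * 5_000):.2f}")  # $22.50
```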
More evaluations to explore
Related benchmarks in the same category
- **GPQA**: A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult; PhD experts reach roughly 65% accuracy.
- **MMLU-Pro**: A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. It features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop relative to the original MMLU.
- **AIME 2025**: All 30 problems from the 2025 American Invitational Mathematics Examination (AIME I and AIME II), testing olympiad-level mathematical reasoning with integer answers from 000-999. Used as an AI benchmark to evaluate large language models' ability to solve complex problems requiring multi-step logical deduction and structured symbolic reasoning.
- **MMLU**: The Massive Multitask Language Understanding benchmark, testing knowledge across 57 diverse subjects spanning STEM, the humanities, social sciences, and professional domains.
- **SWE-bench Verified**: A subset of 500 software engineering problems drawn from real GitHub issues and validated by human annotators, used to evaluate language models' ability to resolve real-world coding issues by generating patches for Python codebases.
- **Humanity's Last Exam (HLE)**: A multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.