Claw-Eval
Claw-Eval tests real-world agentic task completion across complex multi-step scenarios, evaluating a model's ability to use tools, navigate environments, and complete end-to-end tasks autonomously.
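To make the task format concrete, here is a minimal sketch of what an agentic evaluation loop of this kind might look like. It is illustrative only: Claw-Eval's actual harness, task schema, and model interface are not documented on this page, so every name below (run_agentic_task, next_action, the tool registry, the check function) is a stand-in assumption.

```python
# Illustrative sketch of an agentic task-completion loop. Claw-Eval's real
# harness, task schema, and model interface are not documented here, so the
# names below (next_action, the tool registry, check) are stand-in assumptions.
from typing import Any, Callable, Dict


def run_agentic_task(model: Any,
                     tools: Dict[str, Callable[..., Any]],
                     env: Any,
                     prompt: str,
                     check: Callable[[Any], bool],
                     max_steps: int = 20) -> bool:
    """Drive a model through a multi-step tool-use loop, then score the end state."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        # Hypothetical model call: returns {"name": tool_name, "args": {...}}.
        action = model.next_action(messages, list(tools))
        if action["name"] == "finish":          # the model declares the task done
            break
        result = tools[action["name"]](env, **action["args"])   # execute the chosen tool
        messages.append({"role": "tool",
                         "name": action["name"],
                         "content": str(result)})
    # Pass/fail is judged from the environment's final state, not the transcript.
    return check(env)
```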
Kimi K2.6 from Moonshot AI currently leads the Claw-Eval leaderboard with a score of 0.809 (80.9%) across 7 evaluated AI models, followed by GLM-5V-Turbo at 75.0% and MiMo-V2-Pro at 61.5%.
Progress Over Time
Interactive timeline showing model performance evolution on Claw-Eval
Claw-Eval Leaderboard
| Rank | Model | Organization | Parameters | Context | Cost (input / output) | License | Score |
|---|---|---|---|---|---|---|---|
| 1 | Kimi K2.6 | Moonshot AI | 1.0T | 262K | $0.95 / $4.00 | — | 80.9% |
| 2 | GLM-5V-Turbo | Zhipu AI | — | — | — | — | 75.0% |
| 3 | MiMo-V2-Pro | Xiaomi | 1.0T | 1.0M | $1.00 / $3.00 | — | 61.5% |
| 4 | — | Alibaba Cloud / Qwen Team | 28B | 262K | $0.60 / $3.60 | — | — |
| 5 | — | Alibaba Cloud / Qwen Team | — | 1.0M | $0.50 / $3.00 | — | — |
| 6 | — | Xiaomi | — | 262K | $0.40 / $2.00 | — | — |
| 7 | — | Alibaba Cloud / Qwen Team | 35B | — | — | — | — |
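The Cost column appears to list input / output API prices. Assuming the common per-1M-token convention (the page itself does not state the unit), the cost of a single run is straightforward to estimate:

```python
# Assumption: the Cost column is "input price / output price" per 1M tokens,
# the usual convention on provider pricing pages; the table does not say so.
def run_cost(input_tokens: int, output_tokens: int,
             price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of a single run at the listed per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m


# Example: a run consuming 200K input and 20K output tokens on Kimi K2.6 ($0.95 / $4.00).
print(round(run_cost(200_000, 20_000, 0.95, 4.00), 2))  # -> 0.27
```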
More evaluations to explore
Related benchmarks in the same category
BrowseComp is a benchmark comprising 1,266 questions that challenge AI agents to persistently navigate the internet in search of hard-to-find, entangled information. The benchmark measures agents' ability to exercise persistence in information gathering, demonstrate creativity in web navigation, and find concise, verifiable answers. Despite the difficulty of the questions, BrowseComp is simple and easy-to-use, as predicted answers are short and easily verifiable against reference answers.
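As an illustration of how "easily verifiable" can work in practice, the sketch below normalizes a predicted short answer and compares it against the reference; the normalization rules are illustrative, not BrowseComp's official grader.

```python
import re


def normalize(text: str) -> str:
    """Lowercase, drop punctuation, and collapse whitespace before comparison."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return " ".join(text.split())


def is_correct(predicted: str, reference: str) -> bool:
    """Short-answer check: exact match after normalization."""
    return normalize(predicted) == normalize(reference)


print(is_correct(" Paris, France. ", "paris france"))  # True
```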
Terminal-Bench 2.0 is an updated benchmark for testing AI agents' ability to use tools and operate a computer via the terminal. It evaluates how well models can handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, security tasks, data science workflows, and cybersecurity vulnerabilities.
Terminal-Bench is a benchmark for testing AI agents in real terminal environments. It evaluates how well agents can handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, security tasks, data science workflows, and cybersecurity vulnerabilities. The benchmark consists of a dataset of ~100 hand-crafted, human-verified tasks and an execution harness that connects language models to a terminal sandbox.
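A rough sketch of the harness idea, assuming Docker as the sandbox: model-proposed shell commands run inside an isolated container, and the task counts as solved when a verification command exits cleanly. The container name and verification command are placeholders, not Terminal-Bench's actual harness code.

```python
import subprocess


def run_in_sandbox(container: str, command: str, timeout: int = 120) -> str:
    """Execute one shell command inside a Docker container and return its output."""
    result = subprocess.run(
        ["docker", "exec", container, "bash", "-lc", command],
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout + result.stderr


def task_passed(container: str, verify_cmd: str) -> bool:
    """A task counts as solved if the verification command exits with status 0."""
    result = subprocess.run(["docker", "exec", container, "bash", "-lc", verify_cmd])
    return result.returncode == 0
```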
τ²-bench is a benchmark for evaluating agentic tool-use capabilities, measuring how well models can select, sequence, and utilize tools to solve complex tasks. It tests autonomous planning and execution in multi-step scenarios.
SWE-Bench Pro is an advanced version of SWE-Bench that evaluates language models on complex, real-world software engineering tasks requiring extended reasoning and multi-step problem solving.
Berkeley Function Calling Leaderboard v3 (BFCL-v3) is an advanced benchmark that evaluates large language models' function calling capabilities through multi-turn and multi-step interactions. It introduces extended conversational exchanges where models must retain contextual information across turns and execute multiple internal function calls for complex user requests. The benchmark includes 1000 test cases across domains like vehicle control, trading bots, travel booking, and file system management, using state-based evaluation to verify both system state changes and execution path correctness.
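To illustrate state-based evaluation: the model's predicted function calls are replayed against a small stateful API, and correctness is judged by comparing the resulting state with the expected one rather than by string-matching the calls themselves. The FileSystem class and call format below are stand-ins, not BFCL-v3's actual evaluation code.

```python
class FileSystem:
    """Toy stateful API standing in for a BFCL-style environment."""
    def __init__(self):
        self.files = {}

    def write(self, path: str, content: str):
        self.files[path] = content

    def delete(self, path: str):
        self.files.pop(path, None)


def state_based_eval(calls, expected_files) -> bool:
    """Replay predicted calls, then verify the final system state."""
    fs = FileSystem()
    for name, kwargs in calls:          # e.g. ("write", {"path": "a.txt", "content": "hi"})
        getattr(fs, name)(**kwargs)     # execute each predicted function call
    return fs.files == expected_files   # correctness is judged on the end state


calls = [("write", {"path": "a.txt", "content": "hi"}),
         ("delete", {"path": "b.txt"})]
print(state_based_eval(calls, {"a.txt": "hi"}))  # True
```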