Claw-Eval

Claw-Eval tests real-world agentic task completion across complex multi-step scenarios, evaluating a model's ability to use tools, navigate environments, and complete end-to-end tasks autonomously.
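
To make the description above concrete, the kind of multi-step, tool-using behavior an agentic benchmark exercises can be pictured as an observe-act loop: the model repeatedly chooses a tool, the harness executes it in a sandboxed environment, and the episode is graded on whether the task was completed. The sketch below is purely illustrative and is not Claw-Eval's actual harness; every name in it (TOOLS, call_model, run_episode, the stubbed tools) is hypothetical.

```python
# Minimal sketch of a multi-step tool-use loop of the kind an agentic benchmark
# evaluates. All names here are hypothetical, not Claw-Eval's real harness.
from typing import Callable

# Hypothetical tool registry the agent may call during an episode (stubbed environment).
TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: f"<contents of {path}>",
    "run_shell": lambda cmd: f"<output of `{cmd}`>",
}

def call_model(history: list[dict]) -> dict:
    """Stand-in for a model API call; a real harness would query the model here."""
    # Pretend the model inspects a file once, then declares the task done.
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "tool": "read_file", "arg": "config.yaml"}
    return {"type": "final", "answer": "Task complete: config updated."}

def run_episode(task: str, max_steps: int = 10) -> str:
    """Drive the model through observe -> act turns until it answers or runs out of steps."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":
            return action["answer"]  # graded against the task's success criteria
        result = TOOLS[action["tool"]](action["arg"])
        history.append({"role": "tool", "content": result})
    return "FAILED: step budget exhausted"

if __name__ == "__main__":
    print(run_episode("Update the server config and confirm it parses."))
```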

Kimi K2.6 from Moonshot AI currently leads the Claw-Eval leaderboard of 7 evaluated AI models with a score of 0.809.

Moonshot AI's Kimi K2.6 leads with 80.9%, followed by Zhipu AI's GLM-5V-Turbo at 75.0% and Xiaomi's MiMo-V2-Pro at 61.5%.

Progress Over Time

Interactive timeline showing model performance evolution on Claw-Eval

Claw-Eval Leaderboard

7 models
| Rank | Organization | Model | Score | Params | Context | Cost |
|------|--------------|-------|-------|--------|---------|------|
| 1 | Moonshot AI | Kimi K2.6 | 80.9% | 1.0T | 262K | $0.95 / $4.00 |
| 2 | Zhipu AI | GLM-5V-Turbo | 75.0% | 1.0T | 1.0M | $1.00 / $3.00 |
| 3 | Xiaomi | MiMo-V2-Pro | 61.5% | – | – | – |
| 4 | Alibaba Cloud / Qwen Team | – | – | 28B | 262K | $0.60 / $3.60 |
| 5 | Alibaba Cloud / Qwen Team | – | – | – | 1.0M | $0.50 / $3.00 |
| 6 | – | – | – | – | 262K | $0.40 / $2.00 |
| 7 | Alibaba Cloud / Qwen Team | – | – | 35B | – | – |

FAQ

Common questions about Claw-Eval.

What is the Claw-Eval benchmark?

Claw-Eval tests real-world agentic task completion across complex multi-step scenarios, evaluating a model's ability to use tools, navigate environments, and complete end-to-end tasks autonomously.

What is the Claw-Eval leaderboard?

The Claw-Eval leaderboard ranks 7 AI models based on their performance on this benchmark. Currently, Kimi K2.6 by Moonshot AI leads with a score of 0.809. The average score across all models is 0.631.

What is the highest Claw-Eval score?

The highest Claw-Eval score is 0.809, achieved by Kimi K2.6 from Moonshot AI.

How many models are evaluated on Claw-Eval?

7 models have been evaluated on the Claw-Eval benchmark, with 0 verified results and 7 self-reported results.

What categories does Claw-Eval cover?

Claw-Eval is categorized under agents and coding. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

BrowseComp

BrowseComp is a benchmark comprising 1,266 questions that challenge AI agents to persistently navigate the internet in search of hard-to-find, entangled information. The benchmark measures agents' ability to exercise persistence in information gathering, demonstrate creativity in web navigation, and find concise, verifiable answers. Despite the difficulty of the questions, BrowseComp is simple and easy to use, as predicted answers are short and easily verifiable against reference answers.

agents
45 models
Terminal-Bench 2.0

Terminal-Bench 2.0 is an updated benchmark for testing AI agents' ability to use tools to operate a computer via the terminal. It evaluates how well models can handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, security tasks, data science workflows, and cybersecurity vulnerabilities.

agents
39 models
Terminal-Bench

Terminal-Bench is a benchmark for testing AI agents in real terminal environments. It evaluates how well agents can handle real-world, end-to-end tasks autonomously, including compiling code, training models, setting up servers, system administration, security tasks, data science workflows, and cybersecurity vulnerabilities. The benchmark consists of a dataset of ~100 hand-crafted, human-verified tasks and an execution harness that connects language models to a terminal sandbox.

agents
23 models
t2-bench

t2-bench is a benchmark for evaluating agentic tool use capabilities, measuring how well models can select, sequence, and utilize tools to solve complex tasks. It tests autonomous planning and execution in multi-step scenarios.

agents
22 models
SWE-Bench Pro

SWE-Bench Pro is an advanced version of SWE-Bench that evaluates language models on complex, real-world software engineering tasks requiring extended reasoning and multi-step problem solving.

agents
20 models
BFCL-v3

Berkeley Function Calling Leaderboard v3 (BFCL-v3) is an advanced benchmark that evaluates large language models' function calling capabilities through multi-turn and multi-step interactions. It introduces extended conversational exchanges where models must retain contextual information across turns and execute multiple internal function calls for complex user requests. The benchmark includes 1000 test cases across domains like vehicle control, trading bots, travel booking, and file system management, using state-based evaluation to verify both system state changes and execution path correctness (see the sketch after this list).

agents
18 models
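
The state-based evaluation mentioned in the BFCL-v3 description above can be pictured as checking both the final environment state and the sequence of calls that produced it against a reference. The sketch below is an illustration under assumed data structures; none of the names (evaluate_episode, the dict and list shapes) come from the actual BFCL-v3 harness.

```python
# Illustrative sketch of state-based evaluation in the spirit of the BFCL-v3
# description above: verify that the agent's function calls left the simulated
# system in the expected state, and that required calls actually happened.
# All structures here are hypothetical, not BFCL-v3's real schema.

def evaluate_episode(final_state: dict, call_log: list[str],
                     expected_state: dict, required_calls: list[str]) -> dict:
    """Score one multi-turn episode on state match and execution-path coverage."""
    state_ok = all(final_state.get(k) == v for k, v in expected_state.items())
    path_ok = all(call in call_log for call in required_calls)
    return {"state_match": state_ok, "path_match": path_ok,
            "pass": state_ok and path_ok}

if __name__ == "__main__":
    # Example: a made-up vehicle-control episode.
    final_state = {"engine": "on", "doors": "locked", "speed": 0}
    call_log = ["unlock_doors", "start_engine", "lock_doors"]
    print(evaluate_episode(final_state, call_log,
                           expected_state={"engine": "on", "doors": "locked"},
                           required_calls=["start_engine", "lock_doors"]))
```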