TAU-bench Retail
TAU-bench Retail is a benchmark for evaluating tool-agent-user interaction in retail environments. It tests a language agent's ability to handle dynamic conversations with users while using domain-specific API tools and following policy guidelines, evaluating agents on tasks such as order cancellations, address changes, and order status checks over multi-turn conversations.
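The benchmark's setup (an LLM agent that must satisfy a simulated user by calling retail API tools under policy constraints, and is graded on the resulting database state) can be sketched in a few lines of Python. The sketch below is an illustrative stand-in rather than the official TAU-bench harness: the `Order` record, the `cancel_order` and `get_order_status` tools, the scripted user turns, and the state-based pass check are hypothetical simplifications of what the paper describes.

```python
# Illustrative sketch of a TAU-bench-style retail episode; not the official harness.
# Order, cancel_order, get_order_status, and the scripted "user" are hypothetical
# stand-ins for the simulated user, domain API tools, and state-based grading.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    status: str = "pending"        # pending -> cancelled / shipped
    address: str = "1 Main St"


def cancel_order(db: dict, order_id: str) -> str:
    """Domain tool: cancel an order, enforcing a 'pending orders only' policy."""
    order = db[order_id]
    if order.status != "pending":
        return "error: order can no longer be cancelled"
    order.status = "cancelled"
    return "ok: order cancelled"


def get_order_status(db: dict, order_id: str) -> str:
    """Domain tool: look up the current status of an order."""
    return db[order_id].status


def run_episode(db: dict, user_turns: list) -> None:
    """Stand-in for the agent loop. In the real benchmark an LLM agent decides,
    turn by turn, which tool to call while conversing with a simulated user."""
    for turn in user_turns:
        if "cancel" in turn:
            print("agent:", cancel_order(db, "#W123"))
        elif "status" in turn:
            print("agent:", get_order_status(db, "#W123"))


if __name__ == "__main__":
    db = {"#W123": Order("#W123")}
    gold = {"#W123": "cancelled"}   # expected final database state for this task
    run_episode(db, ["Hi, please cancel order #W123", "What is its status now?"])
    # Grading is state-based: the task passes only if the final DB matches the gold state.
    passed = all(db[oid].status == want for oid, want in gold.items())
    print("task passed:", passed)
```

In the actual benchmark the scripted user and rule-based agent are both replaced by language models, and the leaderboard scores below reflect the fraction of such tasks an agent completes successfully.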
Progress Over Time
Interactive timeline showing model performance evolution on TAU-bench Retail, tracking the state-of-the-art frontier across open and proprietary models.
TAU-bench Retail Leaderboard
25 models
| Rank | Organization | Parameters | Context | Cost (input / output per 1M tokens) | License | Score |
|---|---|---|---|---|---|---|
| 1 | Anthropic | — | 200K | $3.00 / $15.00 | ||
| 2 | Anthropic | — | 200K | $15.00 / $75.00 | ||
| 3 | Anthropic | — | 200K | $15.00 / $75.00 | ||
| 4 | Anthropic | — | 200K | $3.00 / $15.00 | ||
| 5 | Anthropic | — | 200K | $3.00 / $15.00 | ||
| 6 | Zhipu AI | 355B | 131K | $0.40 / $1.60 | ||
| 7 | Zhipu AI | 106B | — | — | ||
| 8 | Alibaba Cloud / Qwen Team | 480B | — | — | ||
| 9 | OpenAI | — | 200K | $1.10 / $4.40 | ||
| 10 | OpenAI | — | 200K | $15.00 / $60.00 | ||
| 11 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 | ||
| 12 | Anthropic | — | 200K | $3.00 / $15.00 | ||
| 13 | OpenAI | — | 128K | $75.00 / $150.00 | ||
| 14 | OpenAI | — | 1.0M | $2.00 / $8.00 | ||
| 15 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 | ||
| 15 | OpenAI | 117B | 131K | $0.09 / $0.45 | ||
| 15 | MiniMax | 456B | — | — | ||
| 18 | MiniMax | 456B | 1.0M | $0.55 / $2.20 | ||
| 19 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 | ||
| 20 | OpenAI | — | 128K | $2.50 / $10.00 | ||
| 21 | OpenAI | — | 200K | $1.10 / $4.40 | ||
| 22 | OpenAI | — | 1.0M | $0.40 / $1.60 | ||
| 23 | OpenAI | 21B | 131K | $0.05 / $0.20 | ||
| 24 | Anthropic | — | 200K | $0.80 / $4.00 | ||
| 25 | OpenAI | — | 1.0M | $0.10 / $0.40 |
FAQ
Common questions about TAU-bench Retail
The TAU-bench Retail paper is available at https://arxiv.org/abs/2406.12045. This paper provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.
The TAU-bench Retail leaderboard ranks 25 AI models based on their performance on this benchmark. Currently, Claude Sonnet 4.5 by Anthropic leads with a score of 0.862. The average score across all models is 0.678.
The highest TAU-bench Retail score is 0.862, achieved by Claude Sonnet 4.5 from Anthropic.
25 models have been evaluated on the TAU-bench Retail benchmark; all 25 results are self-reported and none have been independently verified.
TAU-bench Retail is categorized under communication, reasoning, and tool calling. The benchmark evaluates text models.