Arena-Hard v2
Arena-Hard-Auto v2 is a challenging benchmark of 500 carefully curated prompts sourced from Chatbot Arena and WildChat-1M, designed to evaluate large language models on real-world user queries. The prompts span diverse domains, including open-ended software engineering, mathematics, creative writing, and technical problem-solving. Evaluation is automatic via LLM-as-a-Judge, which achieves 98.6% correlation with human preference rankings while providing 3x better separation of model performance than MT-Bench. Prompts are selected for specificity, complexity, and required domain knowledge, so the benchmark can better distinguish model capabilities.
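The LLM-as-a-Judge setup can be illustrated with a minimal sketch: for each prompt, a judge model compares the candidate's answer against a baseline answer and emits a verdict, and verdicts are aggregated into a win rate. The `judge` callable, the prompt wording, and the tie-counting convention below are illustrative assumptions, not the benchmark's actual judge prompt or scoring code.

```python
from collections import Counter

def judge_pair(judge, prompt, answer_a, answer_b):
    """Ask a judge which answer is better; returns 'A', 'B', or 'tie'.
    `judge` is any callable taking a prompt string -- a hypothetical
    stand-in for a real LLM API call."""
    verdict = judge(
        f"Question: {prompt}\n\n"
        f"Assistant A: {answer_a}\n\n"
        f"Assistant B: {answer_b}\n\n"
        "Which answer is better? Reply with exactly A, B, or tie."
    )
    # Fall back to a tie on any malformed judge output.
    return verdict if verdict in ("A", "B", "tie") else "tie"

def win_rate(judge, prompts, model_answers, baseline_answers):
    """Fraction of prompts the candidate (A) wins; ties count as half."""
    tally = Counter(
        judge_pair(judge, p, a, b)
        for p, a, b in zip(prompts, model_answers, baseline_answers)
    )
    return (tally["A"] + 0.5 * tally["tie"]) / len(prompts)

# Toy deterministic "judge" for illustration only -- a real judge
# would be a strong LLM scoring answer quality.
toy_judge = lambda text: "A" if "sorted" in text else "B"
prompts = ["Explain HTTP caching.", "Sort a list in Python."]
answers = ["Caches store responses...", "Use sorted(xs)."]
baseline = ["Short answer.", "Loop and swap elements."]
print(win_rate(toy_judge, prompts, answers, baseline))  # 0.5
```

In the real benchmark the judge's verdicts are typically converted into a score via a statistical model (e.g. Bradley-Terry style aggregation with confidence intervals) rather than a raw win fraction; the sketch above only shows the pairwise-comparison core.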
Progress Over Time
Interactive timeline showing model performance evolution on Arena-Hard v2
Arena-Hard v2 Leaderboard
| Rank | Organization | Parameters | Context | Cost (input / output) | License |
|---|---|---|---|---|---|
| 1 | Xiaomi | 309B | 256K | $0.10 / $0.30 | — |
| 2 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 | — |
| 3 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 | — |
| 4 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.15 / $0.80 | — |
| 5 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.49 | — |
| 6 | — | 120B | — | — | — |
| 7 | Sarvam AI | 105B | — | — | — |
| 8 | — | 32B | 262K | $0.06 / $0.24 | — |
| 9 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 10 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 | — |
| 11 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 12 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 | — |
| 13 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 | — |
| 14 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 | — |
| 15 | Sarvam AI | 30B | — | — | — |
| 16 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 | — |