LiveBench 20241125
LiveBench is a challenging, contamination-limited LLM benchmark that addresses test set contamination by releasing new questions monthly based on recently-released datasets, arXiv papers, news articles, and IMDb movie synopses. It comprises tasks across math, coding, reasoning, language, instruction following, and data analysis with verifiable, objective ground-truth answers.
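Because every LiveBench question has a verifiable ground-truth answer, grading can be done programmatically rather than by a judge model. The sketch below is purely illustrative (LiveBench's actual per-task scorers differ and are more involved); it just shows the idea of objective, automated grading and of averaging task scores into a benchmark score.

```python
# Illustrative sketch only: not LiveBench's actual scoring code.
def normalize(answer: str) -> str:
    """Lowercase and strip surrounding whitespace and trailing periods
    so trivially different renderings of the same answer compare equal."""
    return answer.strip().strip(".").lower()

def score_exact_match(prediction: str, ground_truth: str) -> float:
    """Return 1.0 if the normalized answers match, else 0.0."""
    return 1.0 if normalize(prediction) == normalize(ground_truth) else 0.0

def benchmark_average(task_scores: list[float]) -> float:
    """A model's overall score is the mean over its per-task scores."""
    return sum(task_scores) / len(task_scores)
```

With deterministic grading like this, two evaluation runs over the same outputs always produce the same score, which is what makes leaderboard results objectively comparable.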
Progress Over Time: interactive timeline showing model performance evolution on LiveBench 20241125 (legend: state-of-the-art frontier; open vs. proprietary models).
LiveBench 20241125 Leaderboard
14 models
| Rank | Organization | Parameters | Context | Cost (input / output) |
|---|---|---|---|---|
| 1 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.45 / $3.49 |
| 2 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.30 / $3.00 |
| 3 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 4 | Alibaba Cloud / Qwen Team | 80B | 66K | $0.15 / $1.50 |
| 5 | Alibaba Cloud / Qwen Team | 235B | 262K | $0.15 / $0.80 |
| 6 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.50 |
| 7 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 8 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 9 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 |
| 10 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 |
| 11 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 |
| 12 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 |
| 13 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 |
| 14 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 |
FAQ
Common questions about LiveBench 20241125
The LiveBench paper is available at https://arxiv.org/abs/2406.19314. It details the benchmark's methodology, dataset creation, and evaluation criteria.
The LiveBench 20241125 leaderboard ranks 14 AI models based on their performance on this benchmark. Currently, Qwen3 VL 235B A22B Thinking by Alibaba Cloud / Qwen Team leads with a score of 0.796. The average score across all models is 0.719.
The highest LiveBench 20241125 score is 0.796, achieved by Qwen3 VL 235B A22B Thinking from Alibaba Cloud / Qwen Team.
14 models have been evaluated on the LiveBench 20241125 benchmark; all 14 results are self-reported, and none have been independently verified.
LiveBench 20241125 is categorized under general, math, and reasoning. The benchmark evaluates text models.