LVBench
LVBench is an extreme long-video understanding benchmark designed to evaluate multimodal models on videos up to two hours in duration. It spans 6 major categories and 21 subcategories, with videos that are on average five times longer than those in existing datasets. The benchmark targets applications that require comprehension of extremely long videos.
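As a rough illustration of how a multiple-choice benchmark like this is typically scored, here is a minimal sketch of an evaluation loop. The `Question` record layout and the `answer_question` stub are assumptions for illustration, not LVBench's actual harness or data format.

```python
# Minimal sketch of a multiple-choice video-QA evaluation loop.
# The record layout and answer_question stub are hypothetical;
# LVBench's real harness and data format may differ.
from dataclasses import dataclass


@dataclass
class Question:
    video_id: str       # identifier of the (up to two-hour) source video
    prompt: str         # question text
    options: list[str]  # candidate answers
    answer: int         # index of the correct option


def answer_question(q: Question) -> int:
    """Stand-in for a real multimodal model call (hypothetical stub)."""
    # A real model would sample frames from q.video_id and pick an option.
    return 0


def accuracy(questions: list[Question]) -> float:
    """Fraction of questions answered correctly."""
    correct = sum(answer_question(q) == q.answer for q in questions)
    return correct / len(questions)


if __name__ == "__main__":
    demo = [Question("vid-001", "What happens after the chase scene?",
                     ["They escape", "They are caught"], answer=1)]
    print(f"accuracy = {accuracy(demo):.3f}")
```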
Kimi K2.5 from Moonshot AI currently leads the 20-model LVBench leaderboard with a score of 75.9%, followed by Qwen3.5-122B-A10B at 74.4% and Qwen3.5-27B at 73.6%.
Progress Over Time
[Interactive timeline: model performance evolution on LVBench]
LVBench Leaderboard
| Rank | Organization | Parameters | Context | Cost (input / output) |
|---|---|---|---|---|
| 1 | Moonshot AI | 1.0T | 262K | $0.60 / $3.00 |
| 2 | Alibaba Cloud / Qwen Team | 122B | 262K | $0.40 / $3.20 |
| 3 | Alibaba Cloud / Qwen Team | 27B | 262K | $0.30 / $2.40 |
| 4 | Alibaba Cloud / Qwen Team | 35B | — | — |
| 4 | Alibaba Cloud / Qwen Team | 35B | 262K | $0.25 / $2.00 |
| 6 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.30 / $1.49 |
| 7 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 8 | Alibaba Cloud / Qwen Team | 236B | 262K | $0.45 / $3.49 |
| 9 | Alibaba Cloud / Qwen Team | 33B | — | — |
| 10 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 |
| 11 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 |
| 12 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 |
| 13 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 |
| 14 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 |
| 15 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 |
| 16 | Alibaba Cloud / Qwen Team | 34B | — | — |
| 17 | Alibaba Cloud / Qwen Team | 72B | — | — |
| 18 | Alibaba Cloud / Qwen Team | 8B | — | — |
| 19 | Amazon | — | 300K | $0.80 / $3.20 |
| 20 | Amazon | — | 300K | $0.06 / $0.24 |
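If the Cost column follows the common convention of USD per million input/output tokens (an assumption; the page does not state the units), estimating the price of a single long-video query is simple arithmetic:

```python
# Back-of-the-envelope query cost, assuming the table's prices are
# USD per 1M tokens for input / output (hypothetical; units unstated).
def query_cost(input_tokens: int, output_tokens: int,
               in_price: float, out_price: float) -> float:
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price


# Example: a 200K-token video context and a 1K-token answer at the
# rank-1 rates from the table ($0.60 in / $3.00 out).
print(f"${query_cost(200_000, 1_000, 0.60, 3.00):.4f}")  # -> $0.1230
```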
More evaluations to explore
Related benchmarks in the same category
Humanity's Last Exam (HLE) is a multimodal academic benchmark with 2,500 questions across mathematics, the humanities, and the natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.
MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering, across 30 subjects and 183 subfields.
MMMU-Pro is a more robust multi-discipline multimodal understanding benchmark that strengthens MMMU through a three-step process: filtering out questions answerable from text alone, augmenting the candidate options, and introducing a vision-only input setting. Model performance drops significantly (by 16.8-26.9%) compared to the original MMMU, providing a more rigorous evaluation that more closely mimics real-world scenarios.
MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.
CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.