LVBench

LVBench is an extreme long video understanding benchmark designed to evaluate multimodal models on videos up to two hours in duration. It contains 6 major categories and 21 subcategories, with videos averaging five times longer than existing datasets. The benchmark addresses applications requiring comprehension of extremely long videos.
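For concreteness, a minimal sketch of how a model might be scored on an LVBench-style task is shown below. The benchmark poses multiple-choice questions over long videos, so evaluation reduces to comparing predicted option letters against ground truth. The data fields and the `predict_answer` interface here are illustrative assumptions, not the official evaluation harness.

```python
from dataclasses import dataclass

@dataclass
class VideoQuestion:
    video_path: str      # path to an extremely long video (up to ~2 hours)
    question: str        # natural-language question about the video
    options: list[str]   # candidate answers, e.g. ["(A) ...", "(B) ...", ...]
    answer: str          # ground-truth option letter, e.g. "B"

def evaluate(model, questions: list[VideoQuestion]) -> float:
    """Return accuracy on LVBench-style multiple-choice questions.

    `model.predict_answer` is a stand-in for whatever interface a given
    multimodal model exposes; it is assumed to return an option letter.
    """
    correct = 0
    for q in questions:
        predicted = model.predict_answer(q.video_path, q.question, q.options)
        correct += int(predicted.strip().upper() == q.answer.upper())
    return correct / len(questions) if questions else 0.0
```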

Kimi K2.5 from Moonshot AI currently leads the LVBench leaderboard with a score of 0.759 across 20 evaluated AI models.

Paper: https://arxiv.org/abs/2406.08035

Kimi K2.5 (Moonshot AI) leads with 75.9%, followed by Qwen3.5-122B-A10B (Alibaba Cloud / Qwen Team) at 74.4% and Qwen3.5-27B (Alibaba Cloud / Qwen Team) at 73.6%.

Progress Over Time

[Interactive timeline showing model performance evolution on LVBench; the chart marks the state-of-the-art frontier and distinguishes open from proprietary models]

LVBench Leaderboard

20 models
Each entry lists rank, organization, parameter count, context window, and cost (input / output).

1. Moonshot AI: 1.0T parameters, 262K context, $0.60 / $3.00
2. Alibaba Cloud / Qwen Team: 122B parameters, 262K context, $0.40 / $3.20
3. Alibaba Cloud / Qwen Team: 27B parameters, 262K context, $0.30 / $2.40
4. Alibaba Cloud / Qwen Team: 35B parameters
4. Alibaba Cloud / Qwen Team: 35B parameters, 262K context, $0.25 / $2.00
6. Alibaba Cloud / Qwen Team: 236B parameters, 262K context, $0.30 / $1.49
7. Alibaba Cloud / Qwen Team: 33B parameters
8. Alibaba Cloud / Qwen Team: 236B parameters, 262K context, $0.45 / $3.49
9. Alibaba Cloud / Qwen Team: 33B parameters
10. Alibaba Cloud / Qwen Team: 31B parameters, 262K context, $0.20 / $0.70
11. Alibaba Cloud / Qwen Team: 31B parameters, 262K context, $0.20 / $1.00
12. Alibaba Cloud / Qwen Team: 9B parameters, 262K context, $0.08 / $0.50
13. Alibaba Cloud / Qwen Team: 4B parameters, 262K context, $0.10 / $0.60
14. Alibaba Cloud / Qwen Team: 9B parameters, 262K context, $0.18 / $2.09
15. Alibaba Cloud / Qwen Team: 4B parameters, 262K context, $0.10 / $1.00
16. Alibaba Cloud / Qwen Team: 34B parameters
17. Alibaba Cloud / Qwen Team: 72B parameters
18. Alibaba Cloud / Qwen Team: 8B parameters
19. Amazon: 300K context, $0.80 / $3.20
20. Amazon: 300K context, $0.06 / $0.24
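The cost column pairs an input price with an output price. As an illustration, the sketch below estimates the cost of a single long-video query under the assumption that the listed prices are USD per million tokens (an assumption; the page does not state the unit), with hypothetical token counts.

```python
def estimate_query_cost(input_tokens: int, output_tokens: int,
                        input_price: float, output_price: float) -> float:
    """Estimate the dollar cost of one query.

    Assumes `input_price` and `output_price` are USD per one million tokens,
    a common convention but not stated explicitly on this page.
    """
    return (input_tokens / 1_000_000) * input_price + \
           (output_tokens / 1_000_000) * output_price

# Hypothetical example: a long-video question consuming 200K input tokens and
# producing 500 output tokens, priced like the top-ranked entry ($0.60 / $3.00).
print(f"${estimate_query_cost(200_000, 500, 0.60, 3.00):.4f}")  # $0.1215
```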

FAQ

Common questions about LVBench.

What is the LVBench benchmark?

LVBench is an extreme long video understanding benchmark designed to evaluate multimodal models on videos up to two hours in duration. It contains 6 major categories and 21 subcategories, with videos averaging five times longer than existing datasets. The benchmark addresses applications requiring comprehension of extremely long videos.

What is the LVBench leaderboard?

The LVBench leaderboard ranks 20 AI models based on their performance on this benchmark. Currently, Kimi K2.5 by Moonshot AI leads with a score of 0.759. The average score across all models is 0.597.
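A minimal sketch of how these summary statistics are derived is shown below. Only the top three scores are taken from this page; the remaining entries are omitted, so the computed mean here will not match the full-leaderboard average of 0.597.

```python
# (model, score) pairs; the three listed values come from this page,
# the rest of the 20-model leaderboard is omitted for brevity.
scores = {
    "Kimi K2.5": 0.759,
    "Qwen3.5-122B-A10B": 0.744,
    "Qwen3.5-27B": 0.736,
    # ... remaining leaderboard entries would go here
}

best_model = max(scores, key=scores.get)
average = sum(scores.values()) / len(scores)
print(f"Leader: {best_model} ({scores[best_model]:.3f}), "
      f"mean over {len(scores)} models: {average:.3f}")
```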

What is the highest LVBench score?

The highest LVBench score is 0.759, achieved by Kimi K2.5 from Moonshot AI.

How many models are evaluated on LVBench?

20 models have been evaluated on the LVBench benchmark, with 0 verified results and 20 self-reported results.

Where can I find the LVBench paper?

The LVBench paper is available at https://arxiv.org/abs/2406.08035. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does LVBench cover?

LVBench is categorized under long context, multimodal, and vision; it evaluates multimodal models on long-video understanding.

More evaluations to explore

Related benchmarks in the same category

Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions.

vision, multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Achieves significantly lower model performance (16.8-26.9%) compared to original MMMU, providing more rigorous evaluation that closely mimics real-world scenarios.

multimodal
47 models
NoLiMa
long context
44 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models
CharXiv-R

CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.

multimodal
34 models