MusicCaps

MusicCaps is a dataset of 5,521 music examples, each labeled with an English aspect list and a free-text caption written by musicians. The dataset pairs 10-second music clips from AudioSet with rich textual descriptions that capture sonic qualities and musical elements such as genre, mood, tempo, instrumentation, and rhythm. It was created to support research in music-text understanding and generation tasks.
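Since each record pairs clip metadata with an aspect list and a caption, a quick way to browse the annotations is through the Hugging Face Hub. Below is a minimal sketch, assuming the community dataset id google/MusicCaps and the column names shown in the comments; the audio itself is not bundled with the dataset and must be resolved separately from the referenced AudioSet (YouTube) clips.

```python
# Minimal sketch for browsing MusicCaps text annotations.
# Assumptions: the Hugging Face dataset id "google/MusicCaps" and the
# column names below; audio is not distributed with the dataset and must
# be fetched separately from the referenced AudioSet (YouTube) clips.
from datasets import load_dataset

ds = load_dataset("google/MusicCaps", split="train")
print(len(ds))  # expected: 5521 examples

ex = ds[0]
print(ex["ytid"], ex["start_s"], ex["end_s"])  # clip id and 10-second window
print(ex["aspect_list"])  # comma-separated musical aspects (genre, mood, tempo, ...)
print(ex["caption"])      # free-text description written by a musician
```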

Qwen2.5-Omni-7B from Alibaba Cloud / Qwen Team currently leads the MusicCaps leaderboard with a score of 0.328, and is so far the only AI model evaluated.

Paper

The MusicCaps dataset was introduced in the MusicLM paper, available at https://arxiv.org/abs/2301.11325. On the current leaderboard, Alibaba Cloud / Qwen Team's Qwen2.5-Omni-7B leads with 32.8%.

Progress Over Time

Interactive timeline showing model performance evolution on MusicCaps, with a state-of-the-art frontier and open vs. proprietary markers.

MusicCaps Leaderboard

1 model
Rank  Model            Organization               Params  Score
1     Qwen2.5-Omni-7B  Alibaba Cloud / Qwen Team  7B      0.328 (self-reported)

FAQ

Common questions about MusicCaps.

What is the MusicCaps benchmark?

MusicCaps is a dataset of 5,521 music examples, each labeled with an English aspect list and a free-text caption written by musicians. The dataset pairs 10-second music clips from AudioSet with rich textual descriptions that capture sonic qualities and musical elements such as genre, mood, tempo, instrumentation, and rhythm. It was created to support research in music-text understanding and generation tasks.

What is the MusicCaps leaderboard?

The MusicCaps leaderboard ranks AI models based on their performance on this benchmark; only one model has been evaluated so far. Currently, Qwen2.5-Omni-7B by Alibaba Cloud / Qwen Team leads with a score of 0.328, which, with a single entry, is also the average score across all models.

What is the highest MusicCaps score?

The highest MusicCaps score is 0.328, achieved by Qwen2.5-Omni-7B from Alibaba Cloud / Qwen Team.
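The page does not state which metric produces the 0.328 figure. Caption benchmarks like MusicCaps are often scored by n-gram overlap between generated and reference captions; the self-contained sketch below shows a clipped unigram-precision (BLEU-1-style) scorer purely as an illustration of how such a number could arise, not as the official MusicCaps metric.

```python
# Hypothetical illustration only: the leaderboard does not specify its metric.
# This is a minimal clipped unigram-precision (BLEU-1-style) scorer for a
# generated caption against a single reference caption.
from collections import Counter

def unigram_precision(candidate: str, reference: str) -> float:
    """Fraction of candidate words matched in the reference, with counts clipped."""
    cand = candidate.lower().split()
    ref_counts = Counter(reference.lower().split())
    if not cand:
        return 0.0
    matches = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return matches / len(cand)

generated = "a slow sad piano melody with soft strings"
reference = "a mellow piano piece with sustained strings and a melancholic mood"
print(f"{unigram_precision(generated, reference):.3f}")  # 0.500
```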

How many models are evaluated on MusicCaps?

One model has been evaluated on the MusicCaps benchmark, with 0 verified results and 1 self-reported result.

Where can I find the MusicCaps paper?

The MusicCaps paper is available at https://arxiv.org/abs/2301.11325. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does MusicCaps cover?

MusicCaps is categorized under audio and multimodal; it is used to evaluate multimodal models.

More evaluations to explore

Related benchmarks in the same category

View all audio benchmarks
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. It contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering) across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

MMMU-Pro is a more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering out text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. It yields significantly lower model performance (16.8-26.9%) than the original MMMU, providing a more rigorous evaluation that closely mimics real-world scenarios.

multimodal
47 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models
CharXiv-R

CharXiv-R is the reasoning component of the CharXiv benchmark, focusing on complex reasoning questions that require synthesizing information across visual chart elements. It evaluates multimodal large language models on their ability to understand and reason about scientific charts from arXiv papers through various reasoning tasks.

multimodal
34 models
AI2D

AI2D is a dataset of 4,903 illustrative diagrams from grade school natural sciences (such as food webs, human physiology, and life cycles) with over 15,000 multiple choice questions and answers. The benchmark evaluates diagram understanding and visual reasoning capabilities, requiring models to interpret diagrammatic elements, relationships, and structure to answer questions about scientific concepts represented in visual form.

multimodal
32 models
DocVQA

DocVQA is a dataset for Visual Question Answering on document images, containing 50,000 questions defined over 12,000+ document images. The benchmark tests AI's ability to understand document structure and content, requiring models to comprehend document layout and perform information retrieval to answer questions about document images.

multimodal
26 models