
MMAU Music

MMAU Music is the music subset of the MMAU (Massive Multi-Task Audio Understanding and Reasoning) benchmark, focused specifically on music understanding and reasoning tasks. It is part of a comprehensive multimodal audio benchmark that evaluates models on expert-level knowledge and complex reasoning over music audio clips.
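The scores on this page (e.g. 0.692) appear to be reported as accuracy on multiple-choice questions, i.e. the fraction answered correctly. A minimal sketch of that computation, using made-up toy data rather than real MMAU items, might look like:

```python
def accuracy(predictions, answers):
    """Fraction of predicted choices that match the gold answers."""
    assert len(predictions) == len(answers), "one prediction per question"
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy example: 13 hypothetical multiple-choice items, 9 answered correctly.
preds = ["A", "C", "B", "D", "A", "B", "C", "C", "D", "A", "B", "C", "A"]
gold  = ["A", "C", "B", "A", "A", "B", "C", "B", "D", "C", "B", "C", "B"]

print(round(accuracy(preds, gold), 3))  # -> 0.692
```

The exact grading rules (answer extraction, tie-breaking) are defined in the MMAU paper and evaluation code; this sketch only illustrates the headline metric.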

Paper: https://arxiv.org/abs/2410.19168


MMAU Music Leaderboard

1 model

Rank  Model            Organization               Params  Score
1     Qwen2.5-Omni-7B  Alibaba Cloud / Qwen Team  7B      0.692

FAQ

Common questions about MMAU Music

What is MMAU Music?
A subset of the MMAU benchmark focused specifically on music understanding and reasoning tasks. It is part of a comprehensive multimodal audio benchmark that evaluates models on expert-level knowledge and complex reasoning over music audio clips.

Where can I find the MMAU Music paper?
The MMAU paper is available at https://arxiv.org/abs/2410.19168. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.

How are models ranked on the MMAU Music leaderboard?
The MMAU Music leaderboard currently ranks 1 AI model based on its performance on this benchmark. Qwen2.5-Omni-7B by Alibaba Cloud / Qwen Team leads with a score of 0.692, which is also the average score across all listed models.

What is the highest MMAU Music score?
The highest MMAU Music score is 0.692, achieved by Qwen2.5-Omni-7B from Alibaba Cloud / Qwen Team.

How many models have been evaluated?
1 model has been evaluated on the MMAU Music benchmark, with 0 verified results and 1 self-reported result.

How is MMAU Music categorized?
MMAU Music is categorized under audio, multimodal, and reasoning, and evaluates multimodal models.