CharXiv-D
CharXiv-D is the descriptive questions subset of the CharXiv benchmark, designed to assess multimodal large language models' ability to extract basic information from scientific charts. It contains descriptive questions covering information extraction, enumeration, pattern recognition, and counting across 2,323 diverse charts from arXiv papers, all curated and verified by human experts.
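Descriptive-question performance is reported as a single accuracy-style score. As an illustration only, the sketch below shows one way such a score could be computed from model answers; the field names and the lenient exact-match rule are assumptions for this example, not the official CharXiv grading protocol.

```python
# Illustrative scoring sketch for descriptive chart QA.
# The normalization and exact-match rule here are assumptions,
# not the official CharXiv evaluation procedure.
def normalize(answer: str) -> str:
    """Lowercase, strip whitespace, and drop a trailing period for lenient matching."""
    return answer.strip().lower().rstrip(".")

def score(predictions: list[str], references: list[str]) -> float:
    """Fraction of questions whose answer matches the reference after normalization."""
    assert len(predictions) == len(references)
    correct = sum(normalize(p) == normalize(r) for p, r in zip(predictions, references))
    return correct / len(references)

preds = ["4", "Accuracy", "increasing"]
golds = ["4", "accuracy", "Increasing"]
print(score(preds, golds))  # 1.0
```

A real harness would also need per-question answer extraction from free-form model output, which this sketch deliberately omits.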
Progress Over Time
Interactive timeline of model performance on CharXiv-D, with a state-of-the-art frontier and separate series for open and proprietary models (interactive chart omitted).
CharXiv-D Leaderboard
13 models
| Rank | Organization | Params | Context | Cost (input / output) | License |
|---|---|---|---|---|---|
| 1 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 2 | Alibaba Cloud / Qwen Team | 33B | — | — | — |
| 3 | OpenAI | — | 128K | $75.00 / $150.00 | — |
| 4 | OpenAI | — | 1.0M | $0.40 / $1.60 | — |
| 5 | OpenAI | — | 1.0M | $2.00 / $8.00 | — |
| 6 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $1.00 | — |
| 7 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.18 / $2.09 | — |
| 8 | Alibaba Cloud / Qwen Team | 31B | 262K | $0.20 / $0.70 | — |
| 9 | OpenAI | — | 128K | $2.50 / $10.00 | — |
| 10 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $1.00 | — |
| 11 | Alibaba Cloud / Qwen Team | 9B | 262K | $0.08 / $0.50 | — |
| 12 | Alibaba Cloud / Qwen Team | 4B | 262K | $0.10 / $0.60 | — |
| 13 | OpenAI | — | 1.0M | $0.10 / $0.40 | — |
FAQ
Common questions about CharXiv-D
What is CharXiv-D?
CharXiv-D is the descriptive questions subset of the CharXiv benchmark, designed to assess multimodal large language models' ability to extract basic information from scientific charts. It contains descriptive questions covering information extraction, enumeration, pattern recognition, and counting across 2,323 diverse charts from arXiv papers, all curated and verified by human experts.
Where can I find the CharXiv-D paper?
The CharXiv paper is available at https://arxiv.org/abs/2406.18521. It provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.
How are models ranked on CharXiv-D?
The CharXiv-D leaderboard ranks 13 AI models by their score on this benchmark. Currently, Qwen3 VL 32B Instruct by Alibaba Cloud / Qwen Team leads with a score of 0.905. The average score across all models is 0.852.
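The ranking and the reported average follow directly from per-model scores. A minimal sketch, using placeholder names for every entry except the listed leader:

```python
# Hypothetical score table: only the leading entry and its score come from
# the leaderboard above; the other names and values are placeholders.
scores = {
    "Qwen3 VL 32B Instruct": 0.905,
    "model-b": 0.86,
    "model-c": 0.80,
}

# Rank descending by score, then compute the mean.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
average = sum(scores.values()) / len(scores)

print(ranked[0])  # ('Qwen3 VL 32B Instruct', 0.905)
print(round(average, 3))
```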
What is the highest CharXiv-D score?
The highest CharXiv-D score is 0.905, achieved by Qwen3 VL 32B Instruct from Alibaba Cloud / Qwen Team.
How many models have been evaluated on CharXiv-D?
13 models have been evaluated on the CharXiv-D benchmark. All 13 results are self-reported; none have been independently verified.
What categories does CharXiv-D belong to?
CharXiv-D is categorized under structured output, vision, multimodal, and reasoning. The benchmark evaluates multimodal models.