CheXpert CXR

CheXpert is a large dataset of 224,316 chest radiographs from 65,240 patients for automated chest X-ray interpretation. The dataset includes uncertainty labels for 14 medical observations extracted from radiology reports. It serves as a benchmark for developing and evaluating automated chest radiograph interpretation models.
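For orientation, here is a minimal sketch of how the label file might be loaded and a simple uncertainty policy applied. It assumes the standard train.csv layout distributed with the dataset; the local path and the "U-Zeros" policy are illustrative choices, not part of the benchmark definition.

```python
# Minimal sketch (not part of the benchmark itself): load CheXpert
# labels and apply one common uncertainty policy. Assumes the standard
# train.csv layout shipped with the dataset; the local path is hypothetical.
import pandas as pd

CHEXPERT_CSV = "CheXpert-v1.0-small/train.csv"  # hypothetical local path

# The 14 observations labeled from the radiology reports.
OBSERVATIONS = [
    "No Finding", "Enlarged Cardiomediastinum", "Cardiomegaly",
    "Lung Opacity", "Lung Lesion", "Edema", "Consolidation",
    "Pneumonia", "Atelectasis", "Pneumothorax", "Pleural Effusion",
    "Pleural Other", "Fracture", "Support Devices",
]

df = pd.read_csv(CHEXPERT_CSV)

# Per-observation labels: 1.0 positive, 0.0 negative,
# -1.0 uncertain, blank (NaN) not mentioned.
labels = df[OBSERVATIONS]

# "U-Zeros" baseline policy: treat uncertain and unmentioned findings
# as negative. ("U-Ones" would map -1.0 to 1.0 instead.)
labels = labels.replace(-1.0, 0.0).fillna(0.0)

print(labels.mean())  # prevalence of each finding under this policy
```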

MedGemma 4B IT from Google currently leads the CheXpert CXR leaderboard with a score of 0.481; it is the only AI model evaluated to date.

Paper

CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison (https://arxiv.org/abs/1901.07031)

MedGemma 4B IT from Google leads with 48.1%.

Progress Over Time

Interactive timeline showing model performance evolution on CheXpert CXR


CheXpert CXR Leaderboard

1 model is listed: MedGemma 4B IT from Google, with a score of 0.481.

FAQ

Common questions about CheXpert CXR.

What is the CheXpert CXR benchmark?

CheXpert is a large dataset of 224,316 chest radiographs from 65,240 patients for automated chest X-ray interpretation. The dataset includes uncertainty labels for 14 medical observations extracted from radiology reports. It serves as a benchmark for developing and evaluating automated chest radiograph interpretation models.

What is the CheXpert CXR leaderboard?

The CheXpert CXR leaderboard ranks AI models by their performance on this benchmark. Currently only one model has been evaluated: MedGemma 4B IT by Google, which leads with a score of 0.481.

What is the highest CheXpert CXR score?

The highest CheXpert CXR score is 0.481, achieved by MedGemma 4B IT from Google.

How many models are evaluated on CheXpert CXR?

1 model has been evaluated on the CheXpert CXR benchmark, with 0 verified results and 1 self-reported result.

Where can I find the CheXpert CXR paper?

The CheXpert CXR paper is available at https://arxiv.org/abs/1901.07031. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does CheXpert CXR cover?

CheXpert CXR is categorized under healthcare and vision. The benchmark evaluates image models.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

healthcare
99 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions

vision, multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Achieves significantly lower model performance (16.8-26.9%) compared to original MMMU, providing more rigorous evaluation that closely mimics real-world scenarios.

vision, multimodal
47 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

vision, multimodal
36 models