VQA-Rad

VQA-RAD (Visual Question Answering in Radiology) is the first manually constructed dataset for medical visual question answering, containing 3,515 clinically generated visual questions and answers about radiology images. The questions were created by clinical trainees on 315 radiology images from MedPix covering head, chest, and abdominal scans, and the dataset is designed to support AI development for medical image analysis and to improve patient care.
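
For readers who want to work with the dataset directly, here is a minimal loading sketch using the Hugging Face datasets library. The dataset ID flaviagiammarino/vqa-rad, the train/test split names, and the image/question/answer field names are assumptions rather than something this page specifies; check the dataset card of the copy you use.

```python
# Minimal sketch: loading VQA-RAD with the Hugging Face `datasets` library.
# The dataset ID and field names below are assumptions; verify them against
# the dataset card of the copy you actually use.
from datasets import load_dataset

ds = load_dataset("flaviagiammarino/vqa-rad")  # assumed to expose train/test splits

sample = ds["test"][0]
print(sample["question"])  # free-text or yes/no question about the scan
print(sample["answer"])    # reference answer written by a clinician
sample["image"].show()     # the underlying radiology image (PIL Image)
```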

MedGemma 4B IT from Google currently leads the VQA-Rad leaderboard with a score of 0.499; it is the only AI model evaluated so far.


Google's MedGemma 4B IT leads with 49.9%.

Progress Over Time

[Interactive timeline of model performance on VQA-Rad, showing the state-of-the-art frontier for open and proprietary models]

VQA-Rad Leaderboard

[Leaderboard table: 1 model evaluated — MedGemma 4B IT (Google), score 0.499]

FAQ

Common questions about VQA-Rad.

What is the VQA-Rad benchmark?

VQA-RAD (Visual Question Answering in Radiology) is the first manually constructed dataset for medical visual question answering, containing 3,515 clinically generated visual questions and answers about radiology images. The questions were created by clinical trainees on 315 radiology images from MedPix covering head, chest, and abdominal scans, and the dataset is designed to support AI development for medical image analysis and to improve patient care.

What is the VQA-Rad leaderboard?

The VQA-Rad leaderboard ranks AI models by their performance on this benchmark; so far only 1 model has been evaluated. Currently, MedGemma 4B IT by Google leads with a score of 0.499, which is also the average score across all evaluated models.
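
The page does not document the metric behind the 0.499 figure; VQA-Rad results are typically reported as answer accuracy, so the sketch below shows one common way to compute it, a normalized exact-match over predicted and reference answer strings. Treat it as illustrative only, not as the leaderboard's scoring code.

```python
# Illustrative scoring sketch (not the leaderboard's actual code): normalized
# exact-match accuracy over predicted vs. reference answer strings.
def exact_match_accuracy(predictions, references):
    def normalize(text):
        return " ".join(text.lower().strip().split())

    correct = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Two answers, one correct -> 0.5 (roughly the 0.499 reported on this page).
print(exact_match_accuracy(["Yes", "left lower lobe"],
                           ["yes", "right lower lobe"]))
```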

What is the highest VQA-Rad score?

The highest VQA-Rad score is 0.499, achieved by MedGemma 4B IT from Google.

How many models are evaluated on VQA-Rad?

1 model has been evaluated on the VQA-Rad benchmark, with 0 verified results and 1 self-reported result.

Where can I find the VQA-Rad paper?

The VQA-Rad paper is available at https://doi.org/10.1038/sdata.2018.251. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does VQA-Rad cover?

VQA-Rad is categorized under healthcare, image to text, multimodal, and vision. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

healthcare
99 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions

vision, multimodal
74 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Model accuracy drops significantly (by 16.8-26.9%) compared to the original MMMU, providing a more rigorous evaluation that closely mimics real-world scenarios.

multimodal
47 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models