HealthBench Consensus

HealthBench Consensus is a HealthBench subset focused on questions where physician-created rubric criteria have especially high agreement, measuring healthcare performance and safety on consensus-evaluable conversations.

GPT-5.5 Instant from OpenAI currently leads the HealthBench Consensus leaderboard with a score of 0.947; it is the only model evaluated so far.

Paper

GPT-5.5 Instant from OpenAI leads with 94.7%.

Progress Over Time

Interactive timeline showing model performance evolution on HealthBench Consensus


HealthBench Consensus Leaderboard

1 model

Rank  Model                      Score  Context  Cost (input / output)  License
1     GPT-5.5 Instant (OpenAI)   0.947  400K     $5.00 / $30.00

FAQ

Common questions about HealthBench Consensus.

What is the HealthBench Consensus benchmark?

HealthBench Consensus is a HealthBench subset focused on questions where physician-created rubric criteria have especially high agreement, measuring healthcare performance and safety on consensus-evaluable conversations.
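Rubric-based grading of this kind can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the official implementation: it assumes each physician-written criterion carries a point value (with negative points penalizing harmful content), that a grader marks each criterion met or unmet, and that the score is the earned points normalized by the maximum attainable points and clipped to [0, 1]. All names here (`rubric_score`, the `(points, met)` pair format) are hypothetical.

```python
def rubric_score(criteria):
    """Score one conversation from a list of (points, met) pairs.

    points: criterion weight; negative values penalize unsafe content.
    met: whether the grader judged the criterion satisfied.
    """
    # Maximum attainable score counts only positively weighted criteria.
    max_points = sum(p for p, _ in criteria if p > 0)
    if max_points == 0:
        return 0.0
    # Earned points include penalties from met negative criteria.
    earned = sum(p for p, met in criteria if met)
    # Normalize and clip to the [0, 1] range reported on the leaderboard.
    return min(max(earned / max_points, 0.0), 1.0)

# Example: two positive criteria (one met) and one untriggered safety penalty.
example = [(5, True), (3, False), (-4, False)]
print(rubric_score(example))  # 5 / 8 = 0.625
```

A benchmark-level score would then be the mean of `rubric_score` over all conversations, which is the kind of aggregate the 0.947 figure above represents.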

What is the HealthBench Consensus leaderboard?

The HealthBench Consensus leaderboard ranks AI models by their performance on this benchmark. It currently lists a single model, GPT-5.5 Instant by OpenAI, which leads with a score of 0.947; with only one entry, the average score is likewise 0.947.

What is the highest HealthBench Consensus score?

The highest HealthBench Consensus score is 0.947, achieved by GPT-5.5 Instant from OpenAI.

How many models are evaluated on HealthBench Consensus?

1 model has been evaluated on the HealthBench Consensus benchmark, with 0 verified results and 1 self-reported result.

Where can I find the HealthBench Consensus paper?

The HealthBench Consensus paper is available at https://arxiv.org/abs/2505.08775. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does HealthBench Consensus cover?

HealthBench Consensus is categorized under healthcare. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

healthcare
99 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
SuperGPQA

SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 25,957 questions covering 13 broad disciplinary areas including Engineering, Medicine, Science, and Law, with specialized fields in light industry, agriculture, and service-oriented domains. It employs a Human-LLM collaborative filtering mechanism with over 80 expert annotators to create challenging questions that assess graduate-level knowledge and reasoning capabilities.

healthcare
30 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

healthcare
29 models
VideoMMMU

Video-MMMU evaluates Large Multimodal Models' ability to acquire knowledge from expert-level professional videos across six disciplines through three cognitive stages: perception, comprehension, and adaptation. Contains 300 videos and 900 human-annotated questions spanning Art, Business, Science, Medicine, Humanities, and Engineering.

healthcare, multimodal
24 models