HealthBench Hard

A challenging variant of HealthBench that evaluates large language models' performance and safety in healthcare across 5,000 multi-turn conversations, with particularly rigorous evaluation criteria validated by 262 physicians from 60 countries.

Muse Spark from Meta currently leads the HealthBench Hard leaderboard with a score of 0.428, out of 5 evaluated AI models.

Meta's Muse Spark leads with 42.8%, followed by OpenAI's GPT OSS 120B at 30.0% and OpenAI's GPT-5.3 Chat at 25.9%.

Progress Over Time

Interactive timeline showing model performance evolution on HealthBench Hard


HealthBench Hard Leaderboard

5 models
Rank  Model                    Params  Context  Cost            License
1     Muse Spark (Meta)        -       -        -               -
2     GPT OSS 120B (OpenAI)    117B    131K     $0.09 / $0.45   -
3     GPT-5.3 Chat (OpenAI)    -       128K     $1.75 / $14.00  -
4     (OpenAI)                 21B     131K     $0.10 / $0.50   -
5     (OpenAI)                 -       -        -               -

FAQ

Common questions about HealthBench Hard.

What is the HealthBench Hard benchmark?

A challenging variant of HealthBench that evaluates large language models' performance and safety in healthcare across 5,000 multi-turn conversations, with particularly rigorous evaluation criteria validated by 262 physicians from 60 countries.

What is the HealthBench Hard leaderboard?

The HealthBench Hard leaderboard ranks 5 AI models based on their performance on this benchmark. Currently, Muse Spark by Meta leads with a score of 0.428. The average score across all models is 0.222.
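Given the three scores quoted above, the reported 0.222 average pins down what the two unnamed models must have scored in combination. A minimal sketch of that arithmetic (the named scores come from this page; the inferred totals are derived, not reported figures):

```python
# Scores published on this page for 3 of the 5 evaluated models.
known_scores = {
    "Muse Spark": 0.428,    # Meta
    "GPT OSS 120B": 0.300,  # OpenAI
    "GPT-5.3 Chat": 0.259,  # OpenAI
}

n_models = 5
reported_average = 0.222

# Total score implied by the reported average across all five models.
implied_total = n_models * reported_average

# Sum contributed by the three models with published scores.
known_total = sum(known_scores.values())

# Combined score the two unnamed models must have for the average to hold.
remaining_total = implied_total - known_total

print(f"Implied combined score of the other 2 models: {remaining_total:.3f}")
print(f"Implied average of those 2 models: {remaining_total / 2:.4f}")
```

Running this shows the two remaining models together account for roughly 0.123, i.e. an average near 0.06 each, consistent with the steep difficulty drop-off on this benchmark.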

What is the highest HealthBench Hard score?

The highest HealthBench Hard score is 0.428, achieved by Muse Spark from Meta.

How many models are evaluated on HealthBench Hard?

5 models have been evaluated on the HealthBench Hard benchmark, with 0 verified results and 5 self-reported results.

Where can I find the HealthBench Hard paper?

The HealthBench Hard paper is available at https://arxiv.org/abs/2505.08775. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does HealthBench Hard cover?

HealthBench Hard is categorized under healthcare. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

healthcare
99 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
SuperGPQA

SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 25,957 questions covering 13 broad disciplinary areas including Engineering, Medicine, Science, and Law, with specialized fields in light industry, agriculture, and service-oriented domains. It employs a Human-LLM collaborative filtering mechanism with over 80 expert annotators to create challenging questions that assess graduate-level knowledge and reasoning capabilities.

healthcare
30 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

healthcare
29 models
VideoMMMU

Video-MMMU evaluates Large Multimodal Models' ability to acquire knowledge from expert-level professional videos across six disciplines through three cognitive stages: perception, comprehension, and adaptation. Contains 300 videos and 900 human-annotated questions spanning Art, Business, Science, Medicine, Humanities, and Engineering.

healthcare, multimodal
24 models