Virology Capabilities Test

Virology Capabilities Test (VCT) is an expert-level multiple-choice benchmark measuring the capability to troubleshoot complex virology laboratory protocols. It evaluates dual-use biological knowledge relevant to bioweapons development.

Grok-4.1 Thinking from xAI currently leads the Virology Capabilities Test leaderboard with a score of 0.610, though only 1 AI model has been evaluated so far.

Grok-4.1 Thinking by xAI leads with 61.0%.

Progress Over Time

Interactive timeline showing model performance evolution on Virology Capabilities Test


Virology Capabilities Test Leaderboard

1 model

#  Model                     Context  Cost             License
1  Grok-4.1 Thinking (xAI)   256K     $3.00 / $15.00   —

FAQ

Common questions about Virology Capabilities Test.

What is the Virology Capabilities Test benchmark?

Virology Capabilities Test (VCT) is an expert-level multiple-choice benchmark measuring the capability to troubleshoot complex virology laboratory protocols. It evaluates dual-use biological knowledge relevant to bioweapons development.

What is the Virology Capabilities Test leaderboard?

The Virology Capabilities Test leaderboard ranks AI models by their performance on this benchmark; so far, 1 model has been evaluated. Currently, Grok-4.1 Thinking by xAI leads with a score of 0.610, which is also the average score, since it is the only entry.

What is the highest Virology Capabilities Test score?

The highest Virology Capabilities Test score is 0.610, achieved by Grok-4.1 Thinking from xAI.
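VCT scores are reported as the fraction of multiple-choice questions answered correctly, so 0.610 corresponds to 61.0% accuracy. A minimal sketch of how such a score is computed (the questions and answer letters below are illustrative placeholders, not actual VCT content):

```python
# Minimal sketch: computing a multiple-choice accuracy score.
# The prediction/answer data here is illustrative, NOT actual VCT content.
def accuracy(predictions, answers):
    """Fraction of questions answered correctly (e.g. 0.610 -> 61.0%)."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

preds = ["B", "C", "A", "D", "B", "A", "C", "C", "D", "A"]
gold  = ["B", "C", "A", "A", "B", "A", "C", "B", "D", "C"]
score = accuracy(preds, gold)
print(f"{score:.3f} ({score:.1%})")  # 0.700 (70.0%)
```

A leaderboard value like 0.610 is simply this ratio over the full benchmark question set.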

How many models are evaluated on Virology Capabilities Test?

1 model has been evaluated on the Virology Capabilities Test benchmark, with 0 verified results and 1 self-reported result.

Where can I find the Virology Capabilities Test paper?

The Virology Capabilities Test paper is available at https://arxiv.org/abs/2504.16137. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does Virology Capabilities Test cover?

Virology Capabilities Test is categorized under healthcare and safety. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

View all healthcare benchmarks
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.

healthcare
99 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
SuperGPQA

SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 25,957 questions covering 13 broad disciplinary areas including Engineering, Medicine, Science, and Law, with specialized fields in light industry, agriculture, and service-oriented domains. It employs a Human-LLM collaborative filtering mechanism with over 80 expert annotators to create challenging questions that assess graduate-level knowledge and reasoning capabilities.

healthcare
30 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

healthcare
29 models
VideoMMMU

Video-MMMU evaluates Large Multimodal Models' ability to acquire knowledge from expert-level professional videos across six disciplines through three cognitive stages: perception, comprehension, and adaptation. Contains 300 videos and 900 human-annotated questions spanning Art, Business, Science, Medicine, Humanities, and Engineering.

healthcare, multimodal
24 models