BioLP-Bench

BioLP-Bench is a model-graded evaluation measuring ability to find and correct mistakes in common biological laboratory protocols. It evaluates dual-use biological knowledge relevant to bioweapons development.
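As a rough illustration of how a model-graded evaluation of this shape can be scored, the sketch below walks each protocol item through a model under test and a grader, then reports the fraction of accepted corrections. The ProtocolItem fields and the candidate_fix / grade_fix functions are hypothetical stand-ins, not the actual BioLP-Bench harness described in the paper.

# A minimal sketch (not the actual BioLP-Bench harness) of a model-graded
# evaluation loop: each item is a lab protocol with a planted mistake, the
# model under test proposes a correction, and a grader judges it against an
# expert-written reference. candidate_fix() and grade_fix() are hypothetical
# stand-ins for the evaluated model and the grading model.

from dataclasses import dataclass

@dataclass
class ProtocolItem:
    protocol_text: str   # protocol with an introduced mistake
    reference_fix: str   # expert-written correction used by the grader

def candidate_fix(item: ProtocolItem) -> str:
    # Stand-in for the model under test: a real harness would query the
    # evaluated model for its proposed correction.
    return item.protocol_text  # placeholder response

def grade_fix(proposed: str, reference: str) -> bool:
    # Stand-in for the grader: a real harness would ask a judge model whether
    # the proposed correction actually fixes the planted mistake.
    return proposed.strip().lower() == reference.strip().lower()

def score(items: list[ProtocolItem]) -> float:
    # Fraction of items whose proposed correction is accepted by the grader.
    if not items:
        return 0.0
    accepted = sum(grade_fix(candidate_fix(it), it.reference_fix) for it in items)
    return accepted / len(items)

Under this reading, a leaderboard score of 0.370 would correspond to roughly 37% of planted protocol mistakes being corrected to the grader's satisfaction.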

Grok-4.1 Thinking from xAI currently leads the BioLP-Bench leaderboard with a score of 0.370; it is the only AI model evaluated so far.

Progress Over Time

[Interactive timeline showing model performance evolution on BioLP-Bench, with the state-of-the-art frontier and open vs. proprietary models marked]

BioLP-Bench Leaderboard

1 model
Rank  Model                     Score  Context  Cost            License
1     Grok-4.1 Thinking (xAI)   0.370  256K     $3.00 / $15.00  —

FAQ

Common questions about BioLP-Bench.

What is the BioLP-Bench benchmark?

BioLP-Bench is a model-graded evaluation measuring ability to find and correct mistakes in common biological laboratory protocols. It evaluates dual-use biological knowledge relevant to bioweapons development.

What is the BioLP-Bench leaderboard?

The BioLP-Bench leaderboard ranks 1 AI model based on its performance on this benchmark. Currently, Grok-4.1 Thinking by xAI leads with a score of 0.370. The average score across all models is 0.370.

What is the highest BioLP-Bench score?

The highest BioLP-Bench score is 0.370, achieved by Grok-4.1 Thinking from xAI.

How many models are evaluated on BioLP-Bench?

1 model has been evaluated on the BioLP-Bench benchmark, with 0 verified results and 1 self-reported result.

Where can I find the BioLP-Bench paper?

The BioLP-Bench paper is available at https://www.biorxiv.org/content/10.1101/2024.08.15.608123v1. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does BioLP-Bench cover?

BioLP-Bench is categorized under biology, healthcare, and safety. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

View all biology benchmarks
GPQA

A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.

biology
213 models
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

healthcare
99 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
SuperGPQA

SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 25,957 questions covering 13 broad disciplinary areas including Engineering, Medicine, Science, and Law, with specialized fields in light industry, agriculture, and service-oriented domains. It employs a Human-LLM collaborative filtering mechanism with over 80 expert annotators to create challenging questions that assess graduate-level knowledge and reasoning capabilities.

healthcare
30 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

healthcare
29 models