XSTest

XSTest is a test suite designed to identify exaggerated safety behaviours in large language models. It comprises 450 prompts: 250 safe prompts across ten prompt types that well-calibrated models should not refuse to comply with, and 200 unsafe prompts as contrasts that models should refuse. The benchmark systematically evaluates whether models refuse to respond to clearly safe prompts due to overly cautious safety mechanisms.
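The scoring the benchmark implies can be sketched as a pair of rates: compliance on the 250 safe prompts and refusal on the 200 unsafe contrast prompts. Below is a minimal, illustrative sketch in Python; it assumes each response has already been labeled as a refusal or not (the labeling step, whether string matching or an LLM judge, is outside the sketch, and the function name is hypothetical):

```python
# Sketch of XSTest-style scoring: a well-calibrated model should comply
# with clearly safe prompts and refuse clearly unsafe ones.
# Inputs are lists of booleans, one per prompt: True means the model refused.

def xstest_scores(safe_refused, unsafe_refused):
    # Exaggerated safety shows up as refusals on safe prompts,
    # so compliance on safe prompts is 1 minus the safe refusal rate.
    safe_compliance = 1 - sum(safe_refused) / len(safe_refused)
    # Genuine safety shows up as refusals on unsafe prompts.
    unsafe_refusal = sum(unsafe_refused) / len(unsafe_refused)
    return safe_compliance, unsafe_refusal

# Toy example: 4 safe prompts (1 wrongly refused), 4 unsafe (all refused).
print(xstest_scores([False, True, False, False], [True, True, True, True]))
```

In the real benchmark the two lists would have 250 and 200 entries respectively, and both rates should be close to 1.0 for a well-calibrated model.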

Gemini 1.5 Pro from Google currently leads the XSTest leaderboard with a score of 0.988, the highest of the 3 models evaluated.

Google Gemini 1.5 Pro leads with 98.8%, followed by Google Gemini 1.5 Flash at 97.0% and Google Gemini 1.5 Flash 8B at 92.6%.

Progress Over Time

[Interactive timeline showing model performance evolution on XSTest]

XSTest Leaderboard

3 models
Rank  Model                 Context  Cost (input / output per 1M tokens)
1     Gemini 1.5 Pro        2.1M     $2.50 / $10.00
2     Gemini 1.5 Flash      1.0M     $0.15 / $0.60
3     Gemini 1.5 Flash 8B   1.0M     $0.07 / $0.30

FAQ

Common questions about XSTest.

What is the XSTest benchmark?

XSTest is a test suite designed to identify exaggerated safety behaviours in large language models. It comprises 450 prompts: 250 safe prompts across ten prompt types that well-calibrated models should not refuse to comply with, and 200 unsafe prompts as contrasts that models should refuse. The benchmark systematically evaluates whether models refuse to respond to clearly safe prompts due to overly cautious safety mechanisms.

What is the XSTest leaderboard?

The XSTest leaderboard ranks 3 AI models based on their performance on this benchmark. Currently, Gemini 1.5 Pro by Google leads with a score of 0.988. The average score across all models is 0.961.
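The average quoted above follows directly from the three leaderboard scores; a quick arithmetic check in Python:

```python
# Scores from the XSTest leaderboard (self-reported results).
scores = {
    "Gemini 1.5 Pro": 0.988,
    "Gemini 1.5 Flash": 0.970,
    "Gemini 1.5 Flash 8B": 0.926,
}

# Mean score across all evaluated models.
average = sum(scores.values()) / len(scores)
print(round(average, 3))  # 0.961
```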

What is the highest XSTest score?

The highest XSTest score is 0.988, achieved by Gemini 1.5 Pro from Google.

How many models are evaluated on XSTest?

Three models have been evaluated on the XSTest benchmark, with 0 verified results and 3 self-reported results.

Where can I find the XSTest paper?

The XSTest paper is available at https://arxiv.org/abs/2308.01263. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does XSTest cover?

XSTest is categorized under safety and evaluates text models.

More evaluations to explore

Related benchmarks in the same category

CyberGym

CyberGym is a benchmark for evaluating AI agents on cybersecurity tasks, testing their ability to identify vulnerabilities, perform security analysis, and complete security-related challenges in a controlled environment.

safety
6 models
AttaQ

AttaQ is a unique dataset containing adversarial examples in the form of questions designed to provoke harmful or inappropriate responses from large language models. The benchmark evaluates safety vulnerabilities by using specialized clustering techniques that analyze both the semantic similarity of input attacks and the harmfulness of model responses, facilitating targeted improvements to model safety mechanisms.

safety
3 models
Cybersecurity CTFs

Cybersecurity Capture the Flag (CTF) benchmark for evaluating LLMs in offensive security challenges. Contains diverse cybersecurity tasks including cryptography, web exploitation, binary analysis, and forensics to assess AI capabilities in cybersecurity problem-solving.

safety
3 models
FigQA

FigQA is a multiple-choice benchmark on interpreting scientific figures from biology papers. It evaluates dual-use biological knowledge and multimodal reasoning relevant to bioweapons development.

safety, multimodal
3 models
CyBench

CyBench is a suite of Capture-the-Flag (CTF) challenges measuring agentic cyber attack capabilities. It evaluates dual-use cybersecurity knowledge and measures the 'unguided success rate', where agents complete tasks end-to-end without guidance on appropriate subtasks.

safety
2 models
POPE

Polling-based Object Probing Evaluation (POPE) is a benchmark for evaluating object hallucination in Large Vision-Language Models (LVLMs). POPE addresses the problem where LVLMs generate objects inconsistent with target images by using a polling-based query method that asks yes/no questions about object presence in images, providing more stable and flexible evaluation of object hallucination.

safety, multimodal
2 models