SQuALITY

SQuALITY (Summarization-format QUestion Answering with Long Input Texts, Yes!) is a long-document summarization dataset built by hiring highly qualified contractors to read public-domain short stories (3000-6000 words) and write original summaries from scratch. Each document has five summaries: one overview and four question-focused summaries. The dataset is designed to address limitations in existing summarization datasets by providing high-quality, faithful reference summaries.
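
For a concrete sense of that layout, here is a minimal Python sketch for reading one record from the dataset's JSON Lines release. The file path and the field names ("document", "questions", "question_text", "responses", "response_text") are assumptions based on the dataset's public distribution; verify them against the version you download.

```python
import json

# Minimal sketch: read one SQuALITY story from a local JSON Lines file.
# The path and field names are assumptions based on the public release;
# check them against the files you actually download.
with open("train.jsonl", encoding="utf-8") as f:  # hypothetical local path
    story = json.loads(f.readline())

# Each record holds one public-domain short story (roughly 3000-6000 words).
print(f"story length: {len(story['document'].split())} words")

# The first question asks for a general overview of the story;
# the remaining ones are question-focused.
for q in story["questions"]:
    print(q["question_text"])
    for resp in q["responses"]:  # summaries written from scratch by contractors
        print("  ", resp["response_text"][:80], "...")
```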

Microsoft's Phi-3.5-mini-instruct currently leads the SQuALITY leaderboard with a score of 0.243 among the 5 evaluated AI models.

Paper

Microsoft's Phi-3.5-mini-instruct leads with 24.3%, followed by Microsoft's Phi-3.5-MoE-instruct at 24.1% and Amazon's Nova Pro at 19.8%.
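
Scores here are on a 0-1 scale (0.243 = 24.3%). The SQuALITY paper evaluates systems with reference-based metrics such as ROUGE, so a score like this is best read as a metric value rather than an accuracy. Below is a minimal sketch of scoring one generated summary against multiple human references with the `rouge_score` package; the choice of ROUGE-L and the max-over-references convention are illustrative assumptions, not necessarily the leaderboard's exact recipe.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Hypothetical model output and human-written reference summaries.
candidate = "A survey crew lands on Mars and slowly uncovers the ruins of a city."
references = [
    "The crew of a survey ship reaches Mars and discovers an abandoned city.",
    "An expedition to Mars finds the remains of a long-dead civilization.",
]

# ROUGE-L measures the longest common subsequence between candidate and
# reference; use_stemmer folds inflected word forms together.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

# With several references per question, one common convention (an
# assumption here) is to keep the best score over the reference set.
best = max(scorer.score(ref, candidate)["rougeL"].fmeasure for ref in references)
print(f"ROUGE-L F1: {best:.3f}")  # a value on the same 0-1 scale as the leaderboard
```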

Progress Over Time

Interactive timeline showing model performance evolution on SQuALITY

SQuALITY Leaderboard

5 models evaluated.

| Rank | Model | Params | Context | Cost ($/1M tokens, in / out) | Score |
|------|-------|--------|---------|------------------------------|-------|
| 1 | Phi-3.5-mini-instruct (Microsoft) | 4B | 128K | $0.10 / $0.10 | 24.3% |
| 2 | Phi-3.5-MoE-instruct (Microsoft) | 60B | — | — | 24.1% |
| 3 | Nova Pro (Amazon) | — | 300K | $0.80 / $3.20 | 19.8% |
| 4 | — (Amazon) | — | 300K | $0.06 / $0.24 | — |
| 5 | — | — | 128K | $0.03 / $0.14 | — |

FAQ

Common questions about SQuALITY.

What is the SQuALITY benchmark?

SQuALITY (Summarization-format QUestion Answering with Long Input Texts, Yes!) is a long-document summarization dataset built by hiring highly qualified contractors to read public-domain short stories (3000-6000 words) and write original summaries from scratch. Each document has five summaries: one overview and four question-focused summaries. The dataset is designed to address limitations in existing summarization datasets by providing high-quality, faithful reference summaries.

What is the SQuALITY leaderboard?

The SQuALITY leaderboard ranks 5 AI models based on their performance on this benchmark. Currently, Phi-3.5-mini-instruct by Microsoft leads with a score of 0.243. The average score across all models is 0.212.

What is the highest SQuALITY score?

The highest SQuALITY score is 0.243, achieved by Phi-3.5-mini-instruct from Microsoft.

How many models are evaluated on SQuALITY?

Five models have been evaluated on the SQuALITY benchmark; all five results are self-reported, and none have been independently verified.

Where can I find the SQuALITY paper?

The SQuALITY paper is available at https://arxiv.org/abs/2205.11465. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does SQuALITY cover?

SQuALITY is categorized under language, long context, and summarization. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

language
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

language
99 models
MMLU-Redux

An improved version of the MMLU benchmark featuring manually re-annotated questions to identify and correct errors in the original dataset. Provides more reliable evaluation metrics for language models by addressing dataset quality issues found in the original MMLU.

language
45 models
MMMLU

Multilingual Massive Multitask Language Understanding dataset released by OpenAI, featuring professionally translated MMLU test questions across 14 languages including Arabic, Bengali, German, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Swahili, Yoruba, and Chinese. Contains approximately 15,908 multiple-choice questions per language covering 57 subjects.

language
45 models
NoLiMa

A long-context benchmark that minimizes literal keyword overlap between each question and the relevant evidence in the input, testing whether models can retrieve and reason over long contexts without relying on lexical matching.
long context
44 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

language
29 models