Open-rewrite

OpenRewriteEval is a benchmark for evaluating open-ended rewriting of long-form texts. It covers a wide variety of rewriting types expressed through natural language instructions, including formality, expansion, conciseness, paraphrasing, and tone and style transfer.
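
For illustration, here is a minimal sketch of what a single OpenRewriteEval-style task might look like when run against a model. The prompt format, the formality example, and the `generate` stand-in are illustrative assumptions, not the benchmark's actual data or evaluation harness.

```python
# A hypothetical sketch of an OpenRewriteEval-style task: a natural-language
# rewriting instruction applied to a source passage. The prompt format and
# the example task are assumptions for illustration only.

from typing import Callable

def run_rewrite_task(generate: Callable[[str], str],
                     instruction: str,
                     source_text: str) -> str:
    """Combine an instruction and a passage into one prompt; return the rewrite."""
    prompt = f"{instruction}\n\nText:\n{source_text}\n\nRewritten text:"
    return generate(prompt)

# One of the rewrite types the benchmark covers (formality transfer):
instruction = "Rewrite the following text in a more formal register."
source_text = "Hey, just a heads up -- the meeting got moved to Friday."

def demo_generate(prompt: str) -> str:
    # Stand-in for a real model call; an actual harness would query an LLM here
    # and score the returned rewrite against the benchmark's criteria.
    return "Please note that the meeting has been rescheduled to Friday."

print(run_rewrite_task(demo_generate, instruction, source_text))
```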

Llama 3.2 3B Instruct from Meta currently leads the Open-rewrite leaderboard with a score of 0.401; it is the only model evaluated so far.

Paper: https://arxiv.org/abs/2305.15685

Meta's Llama 3.2 3B Instruct leads with 40.1%.

Progress Over Time

Interactive timeline showing model performance evolution on Open-rewrite, tracing the state-of-the-art frontier across open and proprietary models.

Open-rewrite Leaderboard

1 model

Rank  Model                  Params  Context  Cost (input / output)  License
1     Llama 3.2 3B Instruct  3B      128K     $0.01 / $0.02          —

FAQ

Common questions about Open-rewrite.

What is the Open-rewrite benchmark?

OpenRewriteEval is a benchmark for evaluating open-ended rewriting of long-form texts. It covers a wide variety of rewriting types expressed through natural language instructions, including formality, expansion, conciseness, paraphrasing, and tone and style transfer.

What is the Open-rewrite leaderboard?

The Open-rewrite leaderboard ranks AI models by their performance on this benchmark; it currently lists 1 model. Llama 3.2 3B Instruct by Meta leads with a score of 0.401, which, as the only entry, is also the average score.

What is the highest Open-rewrite score?

The highest Open-rewrite score is 0.401, achieved by Llama 3.2 3B Instruct from Meta.

How many models are evaluated on Open-rewrite?

1 model has been evaluated on the Open-rewrite benchmark, with 0 verified results and 1 self-reported result.

Where can I find the Open-rewrite paper?

The Open-rewrite paper is available at https://arxiv.org/abs/2305.15685. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does Open-rewrite cover?

Open-rewrite is categorized under language and writing. The benchmark evaluates text models.

More evaluations to explore

Related benchmarks in the same category

View all language benchmarks
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

language
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains.

language
99 models
MMLU-Redux

An improved version of the MMLU benchmark featuring manually re-annotated questions to identify and correct errors in the original dataset. Provides more reliable evaluation metrics for language models by addressing dataset quality issues found in the original MMLU.

language
45 models
MMMLU

Multilingual Massive Multitask Language Understanding dataset released by OpenAI, featuring professionally translated MMLU test questions across 14 languages including Arabic, Bengali, German, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Swahili, Yoruba, and Chinese. Contains approximately 15,908 multiple-choice questions per language covering 57 subjects.

language
45 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

language
29 models
Arena Hard

Arena-Hard-Auto is an automatic evaluation benchmark for instruction-tuned LLMs consisting of 500 challenging real-world prompts curated by BenchBuilder. It includes open-ended software engineering problems, mathematical questions, and creative writing tasks. The benchmark uses LLM-as-a-Judge methodology with GPT-4.1 and Gemini-2.5 as automatic judges to approximate human preference. Arena-Hard achieves 98.6% correlation with human preference rankings and provides 3x higher separation of model performances compared to MT-Bench, making it highly effective for distinguishing between models of similar quality.

writing
26 models