WMT23

The Eighth Conference on Machine Translation (WMT23) benchmark evaluates machine translation systems across 8 language pairs (14 translation directions), spanning general, biomedical, literary, and low-resource translation tasks. It also features specialized shared tasks for quality estimation, metrics evaluation, sign language translation, and discourse-level literary translation, all with professional human assessment.
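WMT shared tasks score systems with automatic metrics such as BLEU alongside professional human assessment. As a rough illustration only (this is a minimal smoothed sketch, not the official sacrebleu implementation, and WMT23's primary rankings rely on human judgment), sentence-level BLEU can be computed like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def sentence_bleu(hypothesis, reference, max_n=4):
    """Simplified smoothed BLEU for one hypothesis/reference pair."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = ngrams(hyp, n)
        ref_counts = ngrams(ref, n)
        overlap = sum((hyp_counts & ref_counts).values())  # clipped matches
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so one empty n-gram order doesn't zero the score
        precisions.append((overlap + 1) / (total + 1))
    # brevity penalty discourages overly short hypotheses
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(sentence_bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

Real evaluations use corpus-level BLEU with standardized tokenization (as in sacrebleu), plus neural metrics and human ratings; this sketch only shows the basic n-gram precision and brevity-penalty mechanics.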

Gemini 1.5 Pro from Google currently leads the WMT23 leaderboard with a score of 0.751 across 4 evaluated AI models.

Paper

Google's Gemini 1.5 Pro leads with 75.1%, followed by Gemini 1.5 Flash at 74.1% and Gemini 1.5 Flash-8B at 72.6%.

Progress Over Time

Interactive timeline showing model performance evolution on WMT23


WMT23 Leaderboard

4 models
Rank  Model                Context  Cost (input / output)
1     Gemini 1.5 Pro       2.1M     $2.50 / $10.00
2     Gemini 1.5 Flash     1.0M     $0.15 / $0.60
3     Gemini 1.5 Flash-8B  1.0M     $0.07 / $0.30
4     (not listed)         33K      $0.50 / $1.50
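The cost column lists paired input/output prices. Assuming these are USD per 1 million tokens (an assumption; the page does not state the units), the cost of a single request can be estimated as:

```python
def request_cost(input_tokens, output_tokens, in_price, out_price):
    """Estimate USD cost of one request, with prices given per 1M tokens."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Using the Gemini 1.5 Pro rates from the table: $2.50 in / $10.00 out.
# A typical translation request: 2,000 input tokens, 500 output tokens.
cost = request_cost(2000, 500, 2.50, 10.00)
print(round(cost, 6))  # 0.01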

FAQ

Common questions about WMT23.

What is the WMT23 benchmark?

The Eighth Conference on Machine Translation (WMT23) benchmark evaluates machine translation systems across 8 language pairs (14 translation directions), spanning general, biomedical, literary, and low-resource translation tasks. It also features specialized shared tasks for quality estimation, metrics evaluation, sign language translation, and discourse-level literary translation, all with professional human assessment.

What is the WMT23 leaderboard?

The WMT23 leaderboard ranks 4 AI models based on their performance on this benchmark. Currently, Gemini 1.5 Pro by Google leads with a score of 0.751. The average score across all models is 0.734.
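The page lists only three individual scores but reports an average over four models. Assuming the stated 0.734 is the arithmetic mean, the unlisted fourth score can be recovered:

```python
# Scores listed on the page (Gemini 1.5 Pro, Flash, Flash-8B); the fourth
# model's score is not shown anywhere on the page.
listed = [0.751, 0.741, 0.726]
average = 0.734  # stated average over all 4 models (assumed arithmetic mean)

# mean = (sum(listed) + fourth) / 4  =>  fourth = 4 * mean - sum(listed)
fourth = 4 * average - sum(listed)
print(round(fourth, 3))  # 0.718
```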

What is the highest WMT23 score?

The highest WMT23 score is 0.751, achieved by Gemini 1.5 Pro from Google.

How many models are evaluated on WMT23?

Four models have been evaluated on the WMT23 benchmark, with 0 verified results and 3 self-reported results.

Where can I find the WMT23 paper?

The WMT23 paper is available at https://aclanthology.org/2023.wmt-1.1/. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does WMT23 cover?

WMT23 is categorized under healthcare and language; the healthcare tag reflects its biomedical translation task. The benchmark evaluates text models with multilingual support.

More evaluations to explore

Related benchmarks in the same category

MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

healthcare
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects, including STEM, humanities, social sciences, and professional domains.

healthcare
99 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

healthcare, multimodal
62 models
MMLU-Redux

An improved version of the MMLU benchmark featuring manually re-annotated questions to identify and correct errors in the original dataset. Provides more reliable evaluation metrics for language models by addressing dataset quality issues found in the original MMLU.

language
45 models
MMMLU

Multilingual Massive Multitask Language Understanding dataset released by OpenAI, featuring professionally translated MMLU test questions across 14 languages including Arabic, Bengali, German, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Swahili, Yoruba, and Chinese. Contains approximately 15,908 multiple-choice questions per language covering 57 subjects.

language
45 models
SuperGPQA

SuperGPQA is a comprehensive benchmark that evaluates large language models across 285 graduate-level academic disciplines. The benchmark contains 25,957 questions covering 13 broad disciplinary areas including Engineering, Medicine, Science, and Law, with specialized fields in light industry, agriculture, and service-oriented domains. It employs a Human-LLM collaborative filtering mechanism with over 80 expert annotators to create challenging questions that assess graduate-level knowledge and reasoning capabilities.

healthcare
30 models