Translation Set1→en COMET22

COMET-22 is a neural machine translation evaluation metric that uses an ensemble of two models: a COMET estimator trained with Direct Assessments and a multitask model that predicts sentence-level scores and word-level OK/BAD tags. It provides improved correlations with human judgments and increased robustness to critical errors compared to previous metrics.
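The description above can be sketched in code. This is a minimal, hypothetical illustration of how COMET-style evaluation is structured: each (source, hypothesis, reference) triplet receives a segment-level score, and the system-level score is the mean of segment scores. The segment scores below are invented for illustration; real scores come from the trained COMET-22 ensemble (e.g. via Unbabel's `comet` package and a downloaded checkpoint), not from this snippet.

```python
# Hypothetical sketch of COMET-style system-level scoring.
# Each sample is a (source, machine translation, reference) triplet,
# which is the input format COMET-22 scores at the segment level.
samples = [
    {"src": "Bonjour le monde.", "mt": "Hello world.", "ref": "Hello, world."},
    {"src": "Merci beaucoup.", "mt": "Thanks a lot.", "ref": "Thank you very much."},
]

# Made-up segment scores standing in for the COMET-22 model's output;
# in practice these would come from a call like model.predict(samples).
segment_scores = [0.91, 0.87]

# The system score reported on leaderboards is the mean segment score.
system_score = sum(segment_scores) / len(segment_scores)
print(round(system_score, 3))  # 0.89
```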

Nova Pro from Amazon currently leads the Translation Set1→en COMET22 leaderboard with a score of 0.890, out of 3 evaluated AI models.

Paper: https://aclanthology.org/2022.wmt-1.52/

Amazon Nova Pro leads with 89.0%, followed by Amazon Nova Lite at 88.8% and Amazon Nova Micro at 88.7%.

Progress Over Time

[Interactive timeline showing model performance evolution on Translation Set1→en COMET22, with the state-of-the-art frontier and open vs. proprietary models marked]

Translation Set1→en COMET22 Leaderboard

3 models
| Rank | Model | Organization | Score | Context | Cost (input / output) |
|------|------------|--------------|-------|---------|-----------------------|
| 1 | Nova Pro | Amazon | 0.890 | 300K | $0.80 / $3.20 |
| 2 | Nova Lite | Amazon | 0.888 | 300K | $0.06 / $0.24 |
| 3 | Nova Micro | Amazon | 0.887 | 128K | $0.03 / $0.14 |

FAQ

Common questions about Translation Set1→en COMET22.

What is the Translation Set1→en COMET22 benchmark?

COMET-22 is a neural machine translation evaluation metric that uses an ensemble of two models: a COMET estimator trained with Direct Assessments and a multitask model that predicts sentence-level scores and word-level OK/BAD tags. It provides improved correlations with human judgments and increased robustness to critical errors compared to previous metrics.

What is the Translation Set1→en COMET22 leaderboard?

The Translation Set1→en COMET22 leaderboard ranks 3 AI models based on their performance on this benchmark. Currently, Nova Pro by Amazon leads with a score of 0.890. The average score across all models is 0.888.
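The reported average can be checked directly from the three scores above; this short snippet recomputes it (model names and scores taken from the leaderboard on this page):

```python
# Recompute the leaderboard average from the three reported COMET-22 scores.
scores = {"Nova Pro": 0.890, "Nova Lite": 0.888, "Nova Micro": 0.887}
average = sum(scores.values()) / len(scores)
print(round(average, 3))  # 0.888
```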

What is the highest Translation Set1→en COMET22 score?

The highest Translation Set1→en COMET22 score is 0.890, achieved by Nova Pro from Amazon.

How many models are evaluated on Translation Set1→en COMET22?

3 models have been evaluated on the Translation Set1→en COMET22 benchmark; all 3 results are self-reported, and none are independently verified.

Where can I find the Translation Set1→en COMET22 paper?

The Translation Set1→en COMET22 paper is available at https://aclanthology.org/2022.wmt-1.52/. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does Translation Set1→en COMET22 cover?

Translation Set1→en COMET22 is categorized under language. The benchmark evaluates text models with multilingual support.

More evaluations to explore

Related benchmarks in the same category

View all language benchmarks
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

language
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

language
99 models
MMLU-Redux

An improved version of the MMLU benchmark featuring manually re-annotated questions to identify and correct errors in the original dataset. Provides more reliable evaluation metrics for language models by addressing dataset quality issues found in the original MMLU.

language
45 models
MMMLU

Multilingual Massive Multitask Language Understanding dataset released by OpenAI, featuring professionally translated MMLU test questions across 14 languages including Arabic, Bengali, German, Spanish, French, Hindi, Indonesian, Italian, Japanese, Korean, Portuguese, Swahili, Yoruba, and Chinese. Contains approximately 15,908 multiple-choice questions per language covering 57 subjects.

language
45 models
MMLU-ProX

Extended version of MMLU-Pro providing additional challenging multiple-choice questions for evaluating language models across diverse academic and professional domains. Built on the foundation of the Massive Multitask Language Understanding benchmark framework.

language
29 models
BIG-Bench Hard

BIG-Bench Hard (BBH) is a subset of 23 challenging BIG-Bench tasks selected because prior language model evaluations did not outperform average human-rater performance. The benchmark contains 6,511 evaluation examples testing various forms of multi-step reasoning including arithmetic, logical reasoning (Boolean expressions, logical deduction), geometric reasoning, temporal reasoning, and language understanding. Tasks require capabilities such as causal judgment, object counting, navigation, pattern recognition, and complex problem solving.

language
21 models