MRCR v2
MRCR v2 (Multi-Round Coreference Resolution version 2) is an enhanced version of the synthetic long-context reasoning task. It extends the original MRCR framework with improved evaluation criteria and additional complexity for testing models' ability to maintain attention and reasoning across extended contexts.
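To make the task concrete, here is a minimal sketch of an MRCR-style evaluation item and scorer. Everything here is illustrative: the conversation, the "second poem about penguins" retrieval target, and the use of `difflib.SequenceMatcher` as a string-similarity grader are assumptions about how such a benchmark could be scored, not the official MRCR v2 harness.

```python
from difflib import SequenceMatcher

def mrcr_score(model_output: str, reference: str) -> float:
    """Similarity in [0, 1]; 1.0 only when the model reproduces the
    reference turn exactly (whitespace-normalized). Hypothetical grader."""
    a = " ".join(model_output.split())
    b = " ".join(reference.split())
    return SequenceMatcher(None, a, b).ratio()

# Toy MRCR-style item: two "penguin poem" turns buried among distractor
# turns; the final question asks for the SECOND one. Real items pad the
# conversation with enough distractors to fill the context window.
conversation = [
    ("user", "write a poem about penguins"),
    ("assistant", "Penguins waddle on the ice..."),
    ("user", "write a poem about volcanoes"),
    ("assistant", "Molten rivers climb the night..."),
    ("user", "write a poem about penguins"),
    ("assistant", "Beneath the austral moon they march..."),
    ("user", "Reproduce the second poem about penguins, verbatim."),
]
reference = conversation[5][1]
print(round(mrcr_score("Beneath the austral moon they march...", reference), 3))  # exact match -> 1.0
```

The point of the coreference twist is that simple keyword search fails: both needle turns match "poem about penguins", so the model must track which occurrence is which across the whole context.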
Progress Over Time
[Interactive timeline: model performance evolution on MRCR v2, with a state-of-the-art frontier line and Open/Proprietary filters]
MRCR v2 Leaderboard
1 model • 0 verified
| Rank | Model | Organization | Score | Verified | Context | Cost | License |
|---|---|---|---|---|---|---|---|
| 1 | Gemini 2.5 Flash-Lite | Google | 0.166 | — | 1.0M | $0.10 / $0.40 | — |
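Long-context benchmarks are expensive to run, which is why the cost column matters. As a rough sketch, assuming the table's "$0.10 / $0.40" figures are USD per million input and output tokens (a common pricing convention, not confirmed by this page), a single full-context run can be estimated like this:

```python
def run_cost_usd(input_tokens: int, output_tokens: int,
                 in_price: float = 0.10, out_price: float = 0.40) -> float:
    """Estimated cost of one model call, with prices assumed to be
    USD per 1M tokens (hypothetical reading of the cost column)."""
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# One near-full-context MRCR v2 item: ~1.0M input tokens, ~2K output tokens.
print(f"${run_cost_usd(1_000_000, 2_000):.4f}")
```

Under these assumptions a single 1M-token item costs about ten cents, so a multi-hundred-item evaluation run reaches tens of dollars even for an inexpensive model.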
FAQ
Common questions about MRCR v2
The MRCR v2 paper is available at https://arxiv.org/abs/2409.12640. This paper provides detailed information about the benchmark methodology, dataset creation, and evaluation criteria.
The MRCR v2 leaderboard ranks 1 AI model based on its performance on this benchmark. Currently, Gemini 2.5 Flash-Lite by Google leads with a score of 0.166; with only one model evaluated, the average score is also 0.166.
The highest MRCR v2 score is 0.166, achieved by Gemini 2.5 Flash-Lite from Google.
1 model has been evaluated on the MRCR v2 benchmark, with 0 verified results and 1 self-reported result.
MRCR v2 is categorized under general, long context, and reasoning. The benchmark evaluates text models.