
SWE-bench Verified (Multiple Attempts)

SWE-bench Verified is a human-validated subset of 500 test samples from the original SWE-bench dataset that evaluates AI systems' ability to automatically resolve real GitHub issues in Python repositories. Given a codebase and issue description, models must edit the code to successfully resolve the problem, requiring understanding and coordination of changes across multiple functions, classes, and files. The Verified version provides more reliable evaluation through manual validation of test samples.
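Because this variant scores models over multiple attempts per issue, results are often summarized with the unbiased pass@k estimator commonly used for sampled-generation benchmarks. The sketch below is an illustration of that estimator, not necessarily this leaderboard's exact scoring rule; the per-instance attempt counts (`n` total attempts, `c` of them resolving the issue) are hypothetical inputs.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k attempts,
    drawn without replacement from n total attempts of which c are
    correct, resolves the instance."""
    if n - c < k:
        # Fewer than k incorrect attempts exist, so any draw of k
        # attempts must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def resolve_rate(results: list[tuple[int, int]], k: int) -> float:
    """Average pass@k over a list of (n, c) per-instance counts."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)
```

For example, an instance with 10 attempts of which 2 succeeded contributes `pass_at_k(10, 2, 1) == 0.2` at k=1, and the benchmark score is the mean of these values over all 500 instances.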

Paper: https://arxiv.org/abs/2310.06770

Progress Over Time

[Interactive timeline showing model performance evolution on SWE-bench Verified (Multiple Attempts), with series for the state-of-the-art frontier and for open vs. proprietary models.]

SWE-bench Verified (Multiple Attempts) Leaderboard

1 model • 0 verified

Rank  Model             Organization  Params  Context  Cost           Score
1     Kimi K2 Instruct  Moonshot AI   1.0T    200K     $0.50 / $0.50  0.716

FAQ

Common questions about SWE-bench Verified (Multiple Attempts)

What is SWE-bench Verified (Multiple Attempts)?
SWE-bench Verified is a human-validated subset of 500 test samples from the original SWE-bench dataset that evaluates AI systems' ability to automatically resolve real GitHub issues in Python repositories. Given a codebase and issue description, models must edit the code to resolve the problem, which requires understanding and coordinating changes across multiple functions, classes, and files. The Verified version provides more reliable evaluation through manual validation of test samples.

Where can I read the paper?
The SWE-bench paper is available at https://arxiv.org/abs/2310.06770. It details the benchmark methodology, dataset creation, and evaluation criteria.

Which model leads the leaderboard?
The leaderboard currently ranks a single model: Kimi K2 Instruct by Moonshot AI leads with a score of 0.716, which is also the average score since it is the only entry.

What is the highest score?
The highest SWE-bench Verified (Multiple Attempts) score is 0.716, achieved by Kimi K2 Instruct from Moonshot AI.

How many models have been evaluated?
One model has been evaluated on this benchmark, with 0 verified results and 1 self-reported result.

How is the benchmark categorized?
SWE-bench Verified (Multiple Attempts) is categorized under reasoning and evaluates text models.