OSWorld-Verified

OSWorld-Verified is a verified subset of OSWorld, a scalable, real computer environment for multimodal agents that supports task setup, execution-based evaluation, and interactive learning across Ubuntu, Windows, and macOS.
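OSWorld-style evaluation is execution-based: after the agent finishes acting, a task-specific checker inspects the final machine state rather than grading the agent's action transcript. The sketch below illustrates that loop with a toy stand-in environment; `FakeDesktopEnv`, its method names, and the "create report.txt" task are all hypothetical placeholders, not OSWorld's actual API.

```python
# Toy illustration of an execution-based evaluation loop.
# FakeDesktopEnv is a stand-in, NOT OSWorld's real DesktopEnv interface.
from dataclasses import dataclass, field

@dataclass
class FakeDesktopEnv:
    """Toy task: the agent succeeds if a file named 'report.txt' exists at the end."""
    files: set = field(default_factory=set)

    def reset(self):
        # Restore the machine to the task's initial state.
        self.files.clear()
        return {"files": sorted(self.files)}

    def step(self, action: str):
        # Execute whatever command the agent chose to run.
        if action.startswith("touch "):
            self.files.add(action.split(" ", 1)[1])
        return {"files": sorted(self.files)}

    def evaluate(self) -> float:
        # Execution-based check: inspect the final state, not the action text.
        return 1.0 if "report.txt" in self.files else 0.0

def run_episode(env, policy, steps=3) -> float:
    """Run a short agent-environment loop, then score the final state."""
    env.reset()
    for _ in range(steps):
        env.step(policy())
    return env.evaluate()

score = run_episode(FakeDesktopEnv(), lambda: "touch report.txt")
```

The key design point this mirrors is that reward comes from `evaluate()` on the end state, so any sequence of actions that produces the required state earns full credit.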

Paper: https://arxiv.org/abs/2404.07972

Progress Over Time

[Interactive timeline showing model performance evolution on OSWorld-Verified, with open and proprietary models plotted separately against a state-of-the-art frontier.]

OSWorld-Verified Leaderboard (10 models)

Rank  Model                  Organization               Params  Context  Cost (input / output, per 1M tokens)
1     Claude Mythos Preview  Anthropic                                   $25.00 / $125.00
2                            Anthropic                          1.0M     $5.00 / $25.00
3                            OpenAI                             1.0M     $2.50 / $15.00
4                                                               400K     $0.75 / $4.50
5                                                               400K     $1.75 / $14.00
6                            Alibaba Cloud / Qwen Team
7                            Alibaba Cloud / Qwen Team  122B    262K     $0.40 / $3.20
8                            Alibaba Cloud / Qwen Team  27B     262K     $0.30 / $2.40
9                            Alibaba Cloud / Qwen Team  35B     262K     $0.25 / $2.00
10                                                              400K     $0.20 / $1.25

FAQ

Common questions about OSWorld-Verified

What is OSWorld-Verified?
OSWorld-Verified is a verified subset of OSWorld, a scalable, real computer environment for multimodal agents that supports task setup, execution-based evaluation, and interactive learning across Ubuntu, Windows, and macOS.

Where can I find the OSWorld-Verified paper?
The OSWorld-Verified paper is available at https://arxiv.org/abs/2404.07972. It details the benchmark methodology, dataset creation, and evaluation criteria.

How are models ranked on the OSWorld-Verified leaderboard?
The leaderboard ranks 10 AI models by their performance on this benchmark. Currently, Claude Mythos Preview by Anthropic leads with a score of 0.796; the average score across all models is 0.640.

What is the highest OSWorld-Verified score?
The highest score is 0.796, achieved by Claude Mythos Preview from Anthropic.

How many models have been evaluated on OSWorld-Verified?
10 models have been evaluated on the benchmark, with 0 verified results and 10 self-reported results.

What categories does OSWorld-Verified fall under?
OSWorld-Verified is categorized under agents, general, multimodal, and vision; the benchmark evaluates multimodal models.