OSWorld Screenshot-only

OSWorld Screenshot-only is a variant of the OSWorld benchmark that evaluates multimodal AI agents using only screenshot observations to complete open-ended computer tasks across real operating systems (Ubuntu, Windows, macOS). It tests an agent's ability to carry out complex workflows involving web apps, desktop applications, file I/O, and multi-application tasks through visual interface understanding and GUI grounding.
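
To make the observation constraint concrete, here is a minimal sketch of a screenshot-only agent loop: the agent's only input is a screen capture, it asks a multimodal model for the next GUI action, and it executes that action with mouse and keyboard primitives. The model call (`query_model`) and the JSON-style action format are hypothetical placeholders, not OSWorld's actual agent interface; the real harness, action space, and evaluators are defined in the OSWorld paper and its code release.

```python
# Minimal sketch of a screenshot-only agent loop (hypothetical interfaces,
# not OSWorld's actual harness). The only observation is a screen capture;
# the model must ground its actions in pixel coordinates inferred from the image.
import io
import pyautogui  # mouse/keyboard control; OSWorld exposes a similar pyautogui-style action space


def query_model(instruction: str, screenshot_png: bytes) -> dict:
    """Placeholder for a multimodal model call (e.g. a vision-language API).

    Assumed to return actions like:
      {"type": "click", "x": 512, "y": 384}
      {"type": "type", "text": "report.pdf"}
      {"type": "done"}
    """
    raise NotImplementedError("wire up a real model client here")


def run_task(instruction: str, max_steps: int = 15) -> None:
    for _ in range(max_steps):
        # Screenshot-only: no accessibility tree, no DOM, no window metadata.
        buf = io.BytesIO()
        pyautogui.screenshot().save(buf, format="PNG")

        action = query_model(instruction, buf.getvalue())
        if action["type"] == "done":
            break
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.typewrite(action["text"], interval=0.05)


# Example invocation (once query_model is wired to a real model):
# run_task("Rename the file on the desktop to report.pdf")
```

The key restriction is the observation itself: the agent never sees an accessibility tree or DOM, so any coordinates it clicks must be grounded purely in the pixels of the screenshot.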

Claude 3.5 Sonnet from Anthropic currently leads the OSWorld Screenshot-only leaderboard with a score of 0.149 (14.9%); it is the only model evaluated so far.

Paper

Anthropic's Claude 3.5 Sonnet leads with 14.9%.

Progress Over Time

Interactive timeline showing model performance evolution on OSWorld Screenshot-only, with a state-of-the-art frontier line and open vs. proprietary model markers.

OSWorld Screenshot-only Leaderboard

1 model

Rank  Model                          Score  Context  Cost (input / output per 1M tokens)  License
1     Claude 3.5 Sonnet (Anthropic)  14.9%  200K     $3.00 / $15.00                       Proprietary

FAQ

Common questions about OSWorld Screenshot-only.

What is the OSWorld Screenshot-only benchmark?

OSWorld Screenshot-only is a variant of the OSWorld benchmark in which multimodal AI agents receive only screenshot observations while completing open-ended computer tasks across real operating systems (Ubuntu, Windows, macOS). It tests complex workflows involving web apps, desktop applications, file I/O, and multi-application tasks, relying on visual interface understanding and GUI grounding.

What is the OSWorld Screenshot-only leaderboard?

The OSWorld Screenshot-only leaderboard ranks AI models by their performance on this benchmark. It currently lists a single model: Claude 3.5 Sonnet by Anthropic, which leads with a score of 0.149 (14.9%); with only one entry, this is also the average score.

What is the highest OSWorld Screenshot-only score?

The highest OSWorld Screenshot-only score is 0.149, achieved by Claude 3.5 Sonnet from Anthropic.

How many models are evaluated on OSWorld Screenshot-only?

1 model has been evaluated on the OSWorld Screenshot-only benchmark, with 0 verified results and 1 self-reported result.

Where can I find the OSWorld Screenshot-only paper?

The OSWorld paper, which introduces the benchmark and its screenshot-only observation setting, is available at https://arxiv.org/abs/2404.07972. It details the methodology, dataset construction, and evaluation criteria.

What categories does OSWorld Screenshot-only cover?

OSWorld Screenshot-only is categorized under agents, general, grounding, multimodal, and vision. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

GPQA

A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, with PhD experts reaching 65% accuracy.

general
213 models
MMLU-Pro

A more robust and challenging multi-task language understanding benchmark that extends MMLU by expanding multiple-choice options from 4 to 10, eliminating trivial questions, and focusing on reasoning-intensive tasks. Features over 12,000 curated questions across 14 domains and causes a 16-33% accuracy drop compared to original MMLU.

general
119 models
MMLU

Massive Multitask Language Understanding benchmark testing knowledge across 57 diverse subjects including STEM, humanities, social sciences, and professional domains

general
99 models
Humanity's Last Exam

Humanity's Last Exam (HLE) is a multi-modal academic benchmark with 2,500 questions across mathematics, humanities, and natural sciences, designed to test LLM capabilities at the frontier of human knowledge with unambiguous, verifiable solutions

vision, multimodal
74 models
LiveCodeBench

LiveCodeBench is a holistic and contamination-free evaluation benchmark for large language models for code. It continuously collects new problems from programming contests (LeetCode, AtCoder, CodeForces) and evaluates four different scenarios: code generation, self-repair, code execution, and test output prediction. Problems are annotated with release dates to enable evaluation on unseen problems released after a model's training cutoff.

general
71 models
IFEval

Instruction-Following Evaluation (IFEval) benchmark for large language models, focusing on verifiable instructions with 25 types of instructions and around 500 prompts containing one or more verifiable constraints

general
63 models