MobileMiniWob++_SR

MobileMiniWob++ SR (Success Rate) is an adaptation of the MiniWob++ web interaction benchmark for mobile Android environments within AndroidWorld. It comprises 92 web interaction tasks adapted for touch-based mobile interfaces, evaluating agents' ability to navigate and interact with web applications on mobile devices.
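As a rough sketch of how a success-rate metric like this is typically computed, the Python snippet below averages per-task pass/fail outcomes over the 92 tasks. The task identifiers and the run_episode callable are hypothetical placeholders, not AndroidWorld's actual API.

# Minimal sketch of success-rate (SR) scoring: one episode per task,
# each returning a boolean outcome. `run_episode` and the task names
# below are hypothetical placeholders, not AndroidWorld's real API.
from typing import Callable, Iterable

def success_rate(tasks: Iterable[str], run_episode: Callable[[str], bool]) -> float:
    """Return the fraction of tasks the agent completes successfully."""
    results = [run_episode(task) for task in tasks]
    return sum(results) / len(results) if results else 0.0

# Dummy example: an agent that solves 84 of 92 tasks scores SR ~= 0.913.
if __name__ == "__main__":
    dummy_tasks = [f"miniwob_task_{i}" for i in range(92)]
    dummy_runner = lambda task: int(task.rsplit("_", 1)[-1]) < 84
    print(f"SR = {success_rate(dummy_tasks, dummy_runner):.3f}")

Under this reading, a reported score of 0.914 corresponds to roughly 84 of the 92 tasks being completed successfully.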

Qwen2.5 VL 7B Instruct from Alibaba Cloud / Qwen Team currently leads the MobileMiniWob++_SR leaderboard, scoring 0.914 among the 2 AI models evaluated so far.

Alibaba Cloud / Qwen Team's Qwen2.5 VL 7B Instruct leads with 91.4%, followed by Qwen2.5 VL 72B Instruct at 68.0%.

Progress Over Time

Interactive timeline showing model performance evolution on MobileMiniWob++_SR

MobileMiniWob++_SR Leaderboard

2 models
1. Qwen2.5 VL 7B Instruct (Alibaba Cloud / Qwen Team), 8B parameters: 91.4%
2. Qwen2.5 VL 72B Instruct (Alibaba Cloud / Qwen Team), 72B parameters: 68.0%

FAQ

Common questions about MobileMiniWob++_SR.

What is the MobileMiniWob++_SR benchmark?

MobileMiniWob++ SR (Success Rate) is an adaptation of the MiniWob++ web interaction benchmark for mobile Android environments within AndroidWorld. It comprises 92 web interaction tasks adapted for touch-based mobile interfaces, evaluating agents' ability to navigate and interact with web applications on mobile devices.

What is the MobileMiniWob++_SR leaderboard?

The MobileMiniWob++_SR leaderboard ranks 2 AI models based on their performance on this benchmark. Currently, Qwen2.5 VL 7B Instruct by Alibaba Cloud / Qwen Team leads with a score of 0.914. The average score across all models is 0.797.
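For reference, the average follows directly from the two reported scores: (0.914 + 0.680) / 2 = 0.797.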

What is the highest MobileMiniWob++_SR score?

The highest MobileMiniWob++_SR score is 0.914, achieved by Qwen2.5 VL 7B Instruct from Alibaba Cloud / Qwen Team.

How many models are evaluated on MobileMiniWob++_SR?

Two models have been evaluated on the MobileMiniWob++_SR benchmark, with 0 verified results and 2 self-reported results.

Where can I find the MobileMiniWob++_SR paper?

The MobileMiniWob++_SR paper is available at https://arxiv.org/abs/2405.14573. The paper details the methodology, dataset construction, and evaluation criteria.

What categories does MobileMiniWob++_SR cover?

MobileMiniWob++_SR is categorized under frontend development, multimodal, and agents. The benchmark evaluates multimodal models.

More evaluations to explore

Related benchmarks in the same category

SWE-Bench Verified

A human-validated subset of 500 software engineering problems drawn from real GitHub issues, used to evaluate language models' ability to resolve real-world coding issues by generating patches for Python codebases.

frontend development
89 models
MMMU

MMMU (Massive Multi-discipline Multimodal Understanding) is a benchmark designed to evaluate multimodal models on college-level subject knowledge and deliberate reasoning. Contains 11.5K meticulously collected multimodal questions from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering across 30 subjects and 183 subfields.

multimodal
62 models
MMMU-Pro

A more robust multi-discipline multimodal understanding benchmark that enhances MMMU through a three-step process: filtering text-only answerable questions, augmenting candidate options, and introducing vision-only input settings. Achieves significantly lower model performance (16.8-26.9%) compared to original MMMU, providing more rigorous evaluation that closely mimics real-world scenarios.

multimodal
48 models
BrowseComp

BrowseComp is a benchmark comprising 1,266 questions that challenge AI agents to persistently navigate the internet in search of hard-to-find, entangled information. The benchmark measures agents' ability to exercise persistence in information gathering, demonstrate creativity in web navigation, and find concise, verifiable answers. Despite the difficulty of the questions, BrowseComp is simple and easy-to-use, as predicted answers are short and easily verifiable against reference answers.

agents
45 models
Terminal-Bench 2.0

Terminal-Bench 2.0 is an updated benchmark for testing AI agents' ability to use tools and operate a computer via the terminal. It evaluates how well models can autonomously handle real-world, end-to-end tasks, including compiling code, training models, setting up servers, system administration, security and cybersecurity tasks, and data science workflows.

agents
39 models
MathVista

MathVista evaluates mathematical reasoning of foundation models in visual contexts. It consists of 6,141 examples derived from 28 existing multimodal datasets and 3 newly created datasets (IQTest, FunctionQA, and PaperQA), combining challenges from diverse mathematical and visual tasks to assess models' ability to understand complex figures and perform rigorous reasoning.

multimodal
36 models