
Qwen3 VL 8B Thinking

Overview

Qwen3-VL is a family of large multimodal models that unifies vision, language, and reasoning across text, images, and video. The flagship variant is built on a 235B-parameter architecture; this page covers the 8B Thinking variant. Early joint training of visual and textual modalities gives the models strong language grounding. The series supports a context window of up to 1 million tokens and excels at visual understanding, spatial reasoning, long-video comprehension, and tool-based interaction. It can generate code from images, perform precise 2D/3D object grounding, and operate digital interfaces as a visual agent. The "Instruct" version rivals Gemini 2.5 Pro on perception benchmarks, while the "Thinking" version leads in multimodal reasoning and STEM tasks. With multilingual OCR, creative writing, and fine-grained scene interpretation, Qwen3-VL sets a new open-source frontier for integrated vision-language intelligence.

Qwen3 VL 8B Thinking was released on September 22, 2025. API access is available through DeepInfra.
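Since DeepInfra exposes an OpenAI-compatible chat-completions endpoint, a request to the model can be sketched as below. The model identifier `Qwen/Qwen3-VL-8B-Thinking` is an assumption; check the provider's catalog for the exact name.

```python
import json

# Hypothetical model identifier on DeepInfra -- verify against the catalog.
MODEL_ID = "Qwen/Qwen3-VL-8B-Thinking"

def build_chat_payload(prompt: str, image_url: str) -> dict:
    """Build an OpenAI-style chat-completion payload with one image input."""
    return {
        "model": MODEL_ID,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_chat_payload("Describe this image.", "https://example.com/cat.png")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's `/chat/completions` route with an API key; only the request construction is shown here.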

Performance

Timeline

Released: September 22, 2025
Knowledge Cutoff: Unknown

Specifications

Parameters: 9.0B
License: Apache 2.0
Training Data: Unknown
Tags: thinking:true

Benchmarks

Qwen3 VL 8B Thinking Performance Across Datasets

Scores sourced from the model's scorecard, paper, or official blog posts

llm-stats.com - Fri Jan 02 2026

Pricing

Pricing, performance, and capabilities for Qwen3 VL 8B Thinking across different providers:

| Provider | Input ($/M) | Output ($/M) | Max Input | Max Output | Latency (s) | Throughput | Quantization | Input Modalities | Output Modalities |
|-----------|-------------|--------------|-----------|------------|-------------|------------|--------------|------------------|-------------------|
| DeepInfra | $0.18 | $2.09 | 262.1K | 262.1K | Unknown | Unknown | fp8 | Text, Image, Audio, Video | Text, Image, Audio, Video |

API Access

API Access Coming Soon

API access for Qwen3 VL 8B Thinking will be available soon through our gateway.


FAQ

Common questions about Qwen3 VL 8B Thinking

Q: When was Qwen3 VL 8B Thinking released?
A: Qwen3 VL 8B Thinking was released on September 22, 2025.

Q: How many parameters does Qwen3 VL 8B Thinking have?
A: Qwen3 VL 8B Thinking has 9.0 billion parameters.