
Step-3.5-Flash

Overview

Step-3.5-Flash is StepFun's fast, cost-effective multimodal model optimized for quick inference. Built on StepFun's Step3 architecture, it delivers strong performance across text and vision tasks with low latency and efficient token usage, making it well suited to production workloads that demand speed and cost efficiency.

Step-3.5-Flash was released on February 2, 2026. API access is available through StepFun.

Performance

Timeline

Released: February 2, 2026
Knowledge Cutoff: Unknown

Specifications

Parameters: 196.0B
License: Apache 2.0
Training Data: Unknown

Benchmarks

[Chart: Step-3.5-Flash performance across datasets. Scores sourced from the model's scorecard, paper, or official blog posts.]


Pricing

Pricing, performance, and capabilities for Step-3.5-Flash across different providers:

Provider: StepFun
Input price: $0.10 per million tokens
Output price: $0.40 per million tokens
Max input: 65.5K tokens
Max output: 8.2K tokens
Latency: 0.3 s
Throughput: 150.0 c/s
Quantization: Unknown
Input modalities: Text, Image, Audio, Video
Output modalities: Text, Image, Audio, Video
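
As a quick illustration of how the per-million-token rates above translate into per-request cost, here is a minimal sketch that estimates the price of a single call from input and output token counts. The $0.10 and $0.40 rates come from the table above; the token counts in the example are arbitrary placeholders.

```python
# Rough cost estimate for a single Step-3.5-Flash request, using the
# per-million-token prices listed above ($0.10 input / $0.40 output).

INPUT_PRICE_PER_M = 0.10   # USD per 1M input tokens (from the table above)
OUTPUT_PRICE_PER_M = 0.40  # USD per 1M output tokens (from the table above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a request near the listed context limits (65.5K in, 8.2K out).
print(f"${estimate_cost(65_500, 8_200):.6f}")  # ~$0.009830
```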

API Access

API Access Coming Soon

API access for Step-3.5-Flash will be available soon through our gateway.
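
In the meantime, the sketch below shows how a text-only call might look against StepFun's API, assuming an OpenAI-compatible chat-completions endpoint. The base URL, environment variable name, and model identifier are assumptions rather than values confirmed by this page; check StepFun's documentation for the actual details.

```python
# Minimal sketch of a text-only request, assuming StepFun exposes an
# OpenAI-compatible chat-completions API. The base_url and model name
# below are assumptions; consult StepFun's docs for the real values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["STEPFUN_API_KEY"],  # assumed env variable name
    base_url="https://api.stepfun.com/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="step-3.5-flash",                 # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize Step-3.5-Flash in one sentence."},
    ],
    max_tokens=256,                         # well under the 8.2K output cap
)
print(response.choices[0].message.content)
```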


FAQ

Common questions about Step-3.5-Flash

Q: When was Step-3.5-Flash released?
A: Step-3.5-Flash was released by StepFun on February 2, 2026.

Q: Who created Step-3.5-Flash?
A: Step-3.5-Flash was created by StepFun.

Q: How many parameters does Step-3.5-Flash have?
A: Step-3.5-Flash has 196.0 billion parameters.

Q: What license is Step-3.5-Flash released under?
A: Step-3.5-Flash is released under the Apache 2.0 license, an open-source/open-weight license.

Q: Is Step-3.5-Flash multimodal?
A: Yes, Step-3.5-Flash is a multimodal model that can process both text and images as input; the sketch below shows an illustrative request.
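
To illustrate the image-input capability mentioned above, the sketch below sends an image URL alongside a text prompt, again assuming an OpenAI-compatible vision message format. The endpoint, model identifier, and payload shape are assumptions, not documented specifics of StepFun's API.

```python
# Illustrative image + text request, assuming StepFun accepts the
# OpenAI-style vision content format. Endpoint, model name, and payload
# shape are assumptions; verify against StepFun's documentation.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["STEPFUN_API_KEY"],  # assumed env variable name
    base_url="https://api.stepfun.com/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="step-3.5-flash",                 # assumed model identifier
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sample.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```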