DeepSeek VL2 Tiny
Overview
DeepSeek-VL2 is an advanced series of large Mixture-of-Experts (MoE) Vision-Language Models that significantly improves upon its predecessor, DeepSeek-VL. The series demonstrates superior capabilities across a variety of tasks, including visual question answering, optical character recognition, document/table/chart understanding, and visual grounding. DeepSeek VL2 Tiny is the smallest variant in the series.
DeepSeek VL2 Tiny was released on December 13, 2024.
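The weights are distributed through Hugging Face (deepseek-ai/deepseek-vl2-tiny) alongside the official DeepSeek-VL2 repository. Below is a minimal single-image inference sketch in Python, assuming the repository's deepseek_vl2 package is installed; the class names (DeepseekVLV2Processor), the conversation format, and the load_pil_images helper follow that repository's published example and may differ between versions, and chart.png is a placeholder image path.

```python
# Minimal single-image inference sketch for DeepSeek VL2 Tiny.
# Assumes the `deepseek_vl2` package from the official DeepSeek-VL2 repo is
# installed and the weights are at "deepseek-ai/deepseek-vl2-tiny"; class and
# helper names follow the repository's example and may vary across versions.
import torch
from transformers import AutoModelForCausalLM
from deepseek_vl2.models import DeepseekVLV2Processor
from deepseek_vl2.utils.io import load_pil_images

model_path = "deepseek-ai/deepseek-vl2-tiny"
processor = DeepseekVLV2Processor.from_pretrained(model_path)
tokenizer = processor.tokenizer

model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
model = model.to(torch.bfloat16).cuda().eval()

# Conversation format used by the repo: an <image> placeholder in the text
# plus a list of image paths. "chart.png" is a placeholder file name.
conversation = [
    {
        "role": "<|User|>",
        "content": "<image>\nSummarize the key figures in this chart.",
        "images": ["chart.png"],
    },
    {"role": "<|Assistant|>", "content": ""},
]

pil_images = load_pil_images(conversation)
inputs = processor(
    conversations=conversation,
    images=pil_images,
    force_batchify=True,
    system_prompt="",
).to(model.device)

# The model exposes a helper that fuses image features into the text embeddings
# before generation is delegated to the underlying language model.
inputs_embeds = model.prepare_inputs_embeds(**inputs)

outputs = model.language.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=256,
    do_sample=False,
    use_cache=True,
)

print(tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True))
```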
Timeline
Released: December 13, 2024
Knowledge Cutoff: Unknown
Specifications
Parameters: 3.0B
License: DeepSeek Model License
Training Data: Unknown
Benchmarks
[Figure: DeepSeek VL2 Tiny performance across benchmark datasets. Scores sourced from the model's scorecard, paper, or official blog posts.]
Pricing
Pricing, performance, and capabilities for DeepSeek VL2 Tiny across different providers:
No pricing information available for this model.
API Access
API access for DeepSeek VL2 Tiny will be available soon through our gateway.
FAQ
Common questions about DeepSeek VL2 Tiny
When was DeepSeek VL2 Tiny released?
DeepSeek VL2 Tiny was released by DeepSeek on December 13, 2024.

Who created DeepSeek VL2 Tiny?
DeepSeek VL2 Tiny was created by DeepSeek.

How many parameters does DeepSeek VL2 Tiny have?
DeepSeek VL2 Tiny has 3.0 billion parameters.

What license is DeepSeek VL2 Tiny released under?
DeepSeek VL2 Tiny is released under the DeepSeek Model License.

Is DeepSeek VL2 Tiny multimodal?
Yes, DeepSeek VL2 Tiny is a multimodal model that can process both text and images as input.