
MedGemma 4B IT

Overview

MedGemma is a collection of Gemma 3 variants that are trained for performance on medical text and image comprehension. MedGemma 4B utilizes a SigLIP image encoder that has been specifically pre-trained on a variety of de-identified medical data, including chest X-rays, dermatology images, ophthalmology images, and histopathology slides. Its LLM component is trained on a diverse set of medical data, including radiology images, histopathology patches, ophthalmology images, and dermatology images. MedGemma is a multimodal model primarily evaluated on single-image tasks. It has not been evaluated for multi-turn applications and may be more sensitive to specific prompts than its predecessor, Gemma 3. Developers should consider bias in validation data and data contamination concerns when using MedGemma.
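Since MedGemma is primarily evaluated on single-image tasks, prompts typically pair one image with one text instruction. The sketch below builds such a message in the chat format commonly used for multimodal Gemma-family models on Hugging Face; the model id `google/medgemma-4b-it`, the `image-text-to-text` pipeline name, and the example URL are assumptions based on common Hugging Face conventions, not details stated on this page.

```python
# Sketch of a single-image prompt in the chat format commonly used for
# multimodal Gemma-family models. Model id, pipeline name, and image URL
# below are illustrative assumptions, not details from this page.

def build_single_image_message(prompt: str, image_url: str) -> list:
    """Build a one-turn chat message carrying exactly one image.

    MedGemma is primarily evaluated on single-image tasks, so the
    message pairs a single image with a single text instruction.
    """
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": prompt},
            ],
        }
    ]


if __name__ == "__main__":
    messages = build_single_image_message(
        "Describe any abnormalities in this chest X-ray.",
        "https://example.com/cxr.png",  # hypothetical image URL
    )
    # Running the model itself (requires weights download and a GPU):
    # from transformers import pipeline
    # pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")
    # out = pipe(text=messages, max_new_tokens=128)
    print(messages[0]["role"])  # prints "user"
```

Keeping the message construction separate from inference makes it easy to validate prompt structure without loading the 4B-parameter weights.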

MedGemma 4B IT was released on May 20, 2025.

Timeline

Released: May 20, 2025
Knowledge Cutoff: Unknown

Specifications

Parameters: 4.3B
License: Health AI Developer Foundations terms of use
Training Data: Unknown
Tags: tuning:instruct

Benchmarks

MedGemma 4B IT Performance Across Datasets

Scores sourced from the model's scorecard, paper, or official blog posts

llm-stats.com - Wed Jan 21 2026

Pricing

Pricing, performance, and capabilities for MedGemma 4B IT across different providers:

No pricing information available for this model.

API Access

API Access Coming Soon

API access for MedGemma 4B IT will be available soon through our gateway.

FAQ

Common questions about MedGemma 4B IT

Q: When was MedGemma 4B IT released?
A: MedGemma 4B IT was released on May 20, 2025 by Google.

Q: Who created MedGemma 4B IT?
A: MedGemma 4B IT was created by Google.

Q: How many parameters does MedGemma 4B IT have?
A: MedGemma 4B IT has 4.3 billion parameters.

Q: What license is MedGemma 4B IT released under?
A: MedGemma 4B IT is released under the Health AI Developer Foundations terms of use.

Q: Is MedGemma 4B IT multimodal?
A: Yes, MedGemma 4B IT is a multimodal model that can process both text and images as input.