MedGemma-4B-IT (MLX 4-bit)

A 4-bit quantized conversion of google/medgemma-4b-it for Apple's MLX framework.

Usage

python -m mlx_vlm.generate \
  --model arkaprovob/medgemma-4b-it-mlx-4bit \
  --prompt "Describe this image." \
  --image path/to.jpg \
  --max-tokens 128
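
The same model can also be used programmatically through mlx_vlm's Python API. Below is a minimal sketch based on mlx_vlm's documented interface (`load`, `generate`, `apply_chat_template`); exact signatures may vary between mlx_vlm versions, and running it requires Apple silicon plus a one-time weight download:

```python
# Sketch: load the 4-bit MLX model and caption an image with mlx_vlm.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "arkaprovob/medgemma-4b-it-mlx-4bit"

# Downloads the weights from the Hub on first use.
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to.jpg"]  # replace with a real image path
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template.
formatted_prompt = apply_chat_template(
    processor, config, prompt, num_images=len(images)
)

output = generate(model, processor, formatted_prompt, images, max_tokens=128)
print(output)
```

This mirrors the CLI invocation above; the CLI is a thin wrapper over these same calls.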
Model details

- Format: Safetensors (MLX)
- Model size: 0.9B params
- Tensor types: BF16, U32



Model tree

- Base model: google/medgemma-4b-it
- This model: arkaprovob/medgemma-4b-it-mlx-4bit (4-bit quantization of the base)