# MedGemma 1.5 4B Llamafile

Llamafile for google/medgemma-1.5-4b-it.
**Note:** Gemma 3 multimodal support is not working in Llamafile 0.9.3, so this build is text-only for now.
## Use

On Windows, rename the llamafile to add a `.exe` extension and run it as a program.

On macOS or Linux, open a terminal and run:

```sh
# Start the server at 127.0.0.1:8080
./medgemma.llamafile

# Or override the defaults
./medgemma.llamafile --port 9090 --n-gpu-layers 99
```
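Once the server is up, you can query it over HTTP; llamafile's built-in server exposes an OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the default address from above (the prompt and `temperature` value are illustrative):

```sh
# Send a chat request to the running llamafile server
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "What are common symptoms of iron deficiency?"}
        ],
        "temperature": 0.2
      }'
```

The response is a JSON object with the model's reply under `choices[0].message.content`, so any OpenAI-compatible client library can also be pointed at this endpoint.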
For more info:
- https://mozilla-ai.github.io/llamafile/quickstart/
- https://mozilla-ai.github.io/llamafile/troubleshooting/
## Provenance
- Base executable: llamafile 0.9.3
- Model weights: unsloth/medgemma-1.5-4b-it:UD-Q4_K_XL
The use of this model is governed by the Health AI Developer Foundations terms of use.