# Gemma 3B IT GGUF
This is a GGUF (GPT-Generated Unified Format) version of the Gemma 3B Instruct model.
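Every GGUF file begins with the ASCII magic bytes `GGUF` followed by a little-endian `uint32` format version, per the GGUF specification in the ggml repository. A minimal sketch of verifying a downloaded file (the function name is an assumption, not part of this repo):

```python
import struct

def read_gguf_header(data: bytes):
    """Return the GGUF format version if `data` starts a valid
    GGUF file, otherwise None."""
    # The first 4 bytes must be the ASCII magic b"GGUF".
    if len(data) < 8 or data[:4] != b"GGUF":
        return None
    # The format version is a little-endian uint32 right after the magic.
    (version,) = struct.unpack_from("<I", data, 4)
    return version
```

For example, `read_gguf_header(open("gemma-3b-it.gguf", "rb").read(8))` should return a small integer version number (3 for current llama.cpp releases) rather than `None`.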
## Usage
You can use this model with llama.cpp, Ollama, or other GGUF-compatible inference engines.
**With llama.cpp** (the `main` binary is named `llama-cli` in newer llama.cpp builds):

```sh
./main -m gemma-3b-it.gguf -p "Your prompt here"
```
**With Ollama:**

```sh
ollama create gemma-3b-it -f Modelfile
ollama run gemma-3b-it
```
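The `ollama create` command above expects a `Modelfile` in the working directory. A minimal sketch, assuming the GGUF file sits alongside it (the template follows Gemma's chat format; verify it against the tokenizer config shipped with the model):

```
FROM ./gemma-3b-it.gguf

# Gemma-style chat template (assumed; confirm against the model's tokenizer config)
TEMPLATE """<start_of_turn>user
{{ .Prompt }}<end_of_turn>
<start_of_turn>model
"""
PARAMETER stop <end_of_turn>
```

`FROM` is the only required instruction; without a `TEMPLATE`, Ollama falls back to passing the prompt through unformatted, which instruct-tuned models usually handle poorly.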
## Model Details
- Base Model: Google Gemma 3B Instruct
- Format: GGUF