How to use with llama.cpp

Install from WinGet (Windows)

winget install llama.cpp

Install from Homebrew (macOS/Linux)

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf S4MPL3BI4S/gemma4-coding-agent:BF16

# Run inference directly in the terminal:
llama-cli -hf S4MPL3BI4S/gemma4-coding-agent:BF16
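Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch using curl, assuming llama-server's default address of 127.0.0.1:8080 (change it with --host and --port); the prompt text is only an example:

# Query the OpenAI-compatible chat endpoint:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a Python function that reverses a string."}
    ]
  }'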
Use pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf S4MPL3BI4S/gemma4-coding-agent:BF16

# Run inference directly in the terminal:
./llama-cli -hf S4MPL3BI4S/gemma4-coding-agent:BF16

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
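The commands above build CPU-only binaries. As a hedged variant, llama.cpp's CMake build also exposes backend switches such as GGML_CUDA for NVIDIA GPUs; check the llama.cpp build documentation for the flags that match your hardware:

# Optional: configure with CUDA offload instead (requires the CUDA toolkit):
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli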
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf S4MPL3BI4S/gemma4-coding-agent:BF16

# Run inference directly in the terminal:
./build/bin/llama-cli -hf S4MPL3BI4S/gemma4-coding-agent:BF16

Use Docker
docker model run hf.co/S4MPL3BI4S/gemma4-coding-agent:BF16
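The docker model subcommand comes from Docker Model Runner. As a sketch, it should also accept a one-shot prompt as a trailing argument (verify with docker model run --help on your Docker version; the prompt is only illustrative):

docker model run hf.co/S4MPL3BI4S/gemma4-coding-agent:BF16 "Write a Bash one-liner that lists the five largest files in a directory."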
gemma4-coding-agent: GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:
- For text-only LLMs:
  llama-cli -hf S4MPL3BI4S/gemma4-coding-agent --jinja
- For multimodal models:
  llama-mtmd-cli -hf S4MPL3BI4S/gemma4-coding-agent --jinja
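Beyond the interactive mode above, llama-cli can also run a single prompt non-interactively. A small sketch using the standard llama.cpp flags -p (prompt) and -n (maximum tokens to generate); the prompt text is only an example:

# One-shot generation, capped at 256 new tokens:
llama-cli -hf S4MPL3BI4S/gemma4-coding-agent --jinja \
  -p "Write a shell script that counts lines of code in a git repository." \
  -n 256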
Available Model files:
- gemma-4-E4B-it.Q4_K_M.gguf
- gemma-4-E4B-it.BF16-mmproj.gguf
⚠️ Ollama Note for Vision Models
Important: Ollama currently does not support separate mmproj files for vision models.
To create an Ollama model from this vision model:
- Place the Modelfile in the same directory as the finetuned BF16 merged model
- Run: ollama create model_name -f ./Modelfile (replace model_name with your desired name)

This will create a unified BF16 model that Ollama can use.
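A minimal Modelfile sketch for those steps; the FROM path is a placeholder for wherever your merged BF16 model was saved, and my-coding-agent is just an example name:

# Modelfile (sketch): FROM points at the merged BF16 model
# (a model directory or a .gguf file; the path below is a placeholder)
FROM ./gemma-4-E4B-it

Then create and run it:

ollama create my-coding-agent -f ./Modelfile
ollama run my-coding-agent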
This was trained 2x faster with Unsloth
