## How to use from Pi
### Start the llama.cpp server

```bash
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf hitty28/functiongemma-gguf:Q8_0
```
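Before wiring the server into Pi, it can help to confirm it is actually up. This is a quick sanity check, not a required step; both routes are standard llama.cpp server endpoints:

```bash
# Check server health:
curl http://localhost:8080/health

# List the loaded model via the OpenAI-compatible API:
curl http://localhost:8080/v1/models
```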
### Configure the model in Pi

```bash
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
```

Add the following to `~/.pi/agent/models.json`:

```json
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "hitty28/functiongemma-gguf:Q8_0"
        }
      ]
    }
  }
}
```
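If Pi does not pick up the model, a malformed `models.json` is a common culprit. A minimal check is to run the file through a JSON parser; this sketch assumes `python3` is available on your system:

```bash
# Fails loudly if the config is not valid JSON:
python3 -m json.tool ~/.pi/agent/models.json
```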
### Run Pi

```bash
# Start Pi in your project directory:
pi
```
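Since this is a function-calling model, you may also want to verify that the endpoint Pi talks to handles OpenAI-style tool calls. A minimal sketch, assuming llama-server was started with `--jinja` (typically needed for tool-call parsing); the `get_weather` tool is a made-up example:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hitty28/functiongemma-gguf:Q8_0",
    "messages": [{"role": "user", "content": "What is the weather in Vienna?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

A successful tool call should appear in the response as a `tool_calls` entry rather than plain text content.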
## functiongemma-gguf (GGUF)

This model was finetuned and converted to GGUF format using Unsloth.

Example usage:

- For text-only LLMs: `./llama.cpp/llama-cli -hf hitty28/functiongemma-gguf --jinja` (a fuller one-shot example follows below)
- For multimodal models: `./llama.cpp/llama-mtmd-cli -hf hitty28/functiongemma-gguf --jinja`
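For a quick one-shot test of the text-only path, a prompt can be passed directly. This is a minimal sketch using standard llama-cli flags (`-p` for the prompt, `-n` to cap the number of generated tokens):

```bash
./llama.cpp/llama-cli -hf hitty28/functiongemma-gguf --jinja \
  -p "List three uses for a paperclip." -n 128
```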

Available model files:

- `functiongemma-270m-it.Q8_0.gguf`
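If you prefer to fetch the file manually rather than letting llama.cpp resolve the `-hf` reference, the Hugging Face CLI can download it directly (assuming `huggingface-cli` is installed, e.g. via `pip install huggingface_hub`):

```bash
# Download the single Q8_0 file from the Hub:
huggingface-cli download hitty28/functiongemma-gguf functiongemma-270m-it.Q8_0.gguf
```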

## Note

The model's BOS token behavior was adjusted for GGUF compatibility. This model was trained 2x faster with Unsloth.
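To inspect which BOS-related settings actually landed in the file, the GGUF metadata can be dumped. A sketch assuming the `gguf` Python package (`pip install gguf`), which ships a `gguf-dump` utility:

```bash
# Print the file's metadata and filter for BOS entries:
gguf-dump functiongemma-270m-it.Q8_0.gguf | grep -i bos
```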

## Model details

- Format: GGUF
- Parameters: 0.3B
- Architecture: gemma3