How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf xensive/model:Q4_K_M
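Once the server is running, you can confirm it responds before wiring up Pi. This is a quick sanity check against llama-server's OpenAI-compatible API, assuming the default port 8080:
# Check that the server answers (8080 is llama-server's default port):
curl http://localhost:8080/v1/models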
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "xensive/model:Q4_K_M"
        }
      ]
    }
  }
}
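With the server up and the config in place, a direct request to the chat completions endpoint is a useful sketch for checking that the model id in models.json is accepted. Note that llama-server may ignore the model field when only one model is loaded, so treat this mainly as a connectivity check:
# Send a minimal chat request to the local server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "xensive/model:Q4_K_M", "messages": [{"role": "user", "content": "Say hello"}]}'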
Run Pi
# Start Pi in your project directory:
pi
This model was fine-tuned and converted to GGUF format using Unsloth.

Example usage:

  • For text only LLMs: ./llama.cpp/llama-cli -hf xensive/model --jinja
  • For multimodal models: ./llama.cpp/llama-mtmd-cli -hf xensive/model --jinja
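If generation is slow or the context window is too small, llama-server accepts the usual llama.cpp tuning flags. A minimal sketch; the values shown are illustrative, not recommendations:
# Serve with a 4096-token context, on the port the Pi config above expects,
# offloading all layers to the GPU when one is available:
llama-server -hf xensive/model:Q4_K_M -c 4096 --port 8080 -ngl 99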

Available Model files:

  • llama-3.2-3b.Q4_K_M.gguf (trained 2x faster with Unsloth)
Model details:

  • Format: GGUF
  • Model size: 3B params
  • Architecture: llama
  • Quantization: 4-bit (Q4_K_M)