How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
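Once the server is running, you can sanity-check its OpenAI-compatible endpoint before wiring it into Pi. A minimal sketch in Python, assuming the server listens on the default port 8080 (the model id matches the one passed to llama-server above; the urllib call is commented out so the snippet runs without a live server):

```python
import json

# Build a chat-completions request body for the llama.cpp server's
# OpenAI-compatible endpoint (default: http://localhost:8080/v1).
payload = {
    "model": "safe049/SmolTuring-8B-Instruct:Q4_K_M",
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
    "max_tokens": 64,
}
body = json.dumps(payload)

# To actually send it once llama-server is up, uncomment:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8080/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())

print(body)
```

If the server is reachable, the response is a standard chat-completions JSON object with a `choices` array.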
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "safe049/SmolTuring-8B-Instruct:Q4_K_M"
        }
      ]
    }
  }
}
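If `~/.pi/agent/models.json` already exists, merge the `llama-cpp` provider in rather than overwriting the file. A small sketch, assuming the file layout shown above (the demo writes to a temporary file instead of the real config):

```python
import json
import os
import tempfile

# Provider entry exactly as in the config snippet above.
entry = {
    "baseUrl": "http://localhost:8080/v1",
    "api": "openai-completions",
    "apiKey": "none",
    "models": [{"id": "safe049/SmolTuring-8B-Instruct:Q4_K_M"}],
}

def add_provider(path, name, provider):
    """Merge one provider into models.json, keeping existing entries."""
    config = {"providers": {}}
    if os.path.exists(path):
        with open(path) as f:
            config = json.load(f)
    config.setdefault("providers", {})[name] = provider
    with open(path, "w") as f:
        json.dump(config, f, indent=2)
    return config

# Demo against a temp file; point `path` at ~/.pi/agent/models.json for real use.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "models.json")
    cfg = add_provider(path, "llama-cpp", entry)
    print("llama-cpp" in cfg["providers"])
```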
Run Pi
# Start Pi in your project directory:
pi
Uploaded model

  • Developed by: safe049
  • License: apache-2.0
  • Finetuned from model: safe049/SmolLumi-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Safetensors
  • Model size: 8B params
  • Tensor type: F16
