How to use with llama.cpp
Install from Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
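Once llama-server is running, it listens on http://127.0.0.1:8080 by default and exposes an OpenAI-compatible API, so any OpenAI-style client can talk to it. A minimal sketch with curl (the port is the default; adjust it if you changed the server settings):
# Send a chat request to the running server:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'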
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
# Run inference directly in the terminal:
llama-cli -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
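The -hf flag downloads the GGUF file from Hugging Face on first run and caches it locally. If you want the cache in a specific location, llama.cpp honors the LLAMA_CACHE environment variable; for example, in cmd (the path below is purely illustrative):
# Optional: point the model download cache at a custom directory before running:
set LLAMA_CACHE=D:\llama-models
llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M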
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
# Run inference directly in the terminal:
./llama-cli -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
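Both binaries accept the usual llama.cpp runtime flags, e.g. -c to set the context size and -ngl to choose how many layers to offload to the GPU. A typical invocation might look like this (the values are illustrative):
# Example: 8192-token context, offload all layers to the GPU:
./llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M -c 8192 -ngl 99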
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
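The commands above produce a CPU-only build on most platforms (Metal is enabled by default on macOS). For GPU acceleration elsewhere, enable the matching backend at configure time; for example, assuming the CUDA toolkit is installed:
# Optional: rebuild with CUDA support for NVIDIA GPUs:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli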
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
# Run inference directly in the terminal:
./build/bin/llama-cli -hf safe049/SmolTuring-8B-Instruct:Q4_K_M
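For a scripted, one-shot generation instead of an interactive session, you can pass a prompt and a token limit on the command line (the prompt below is just an example):
# -p supplies the prompt, -n caps the number of generated tokens:
./build/bin/llama-cli -hf safe049/SmolTuring-8B-Instruct:Q4_K_M -p "Explain the Turing test in one sentence." -n 128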
Use Docker
docker model run hf.co/safe049/SmolTuring-8B-Instruct:Q4_K_M
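docker model run is part of Docker Model Runner, which ships with recent Docker Desktop releases and may need to be enabled first. Assuming it is available, you can also pull the model ahead of time:
# Pull the GGUF from Hugging Face, then run it:
docker model pull hf.co/safe049/SmolTuring-8B-Instruct:Q4_K_M
docker model run hf.co/safe049/SmolTuring-8B-Instruct:Q4_K_M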
Uploaded model

  • Developed by: safe049
  • License: apache-2.0
  • Finetuned from model: safe049/SmolLumi-8B-Instruct
  • Model size: 8B parameters (F16 safetensors)

This Llama-based model was trained 2x faster with Unsloth and Hugging Face's TRL library.
