How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

# Download the Q4_K_M GGUF quantization from the Hugging Face Hub
llm = Llama.from_pretrained(
    repo_id="safe049/SmolTuring-8B-Instruct",
    filename="unsloth.Q4_K_M.gguf",
)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
print(response["choices"][0]["message"]["content"])
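The single-turn call above extends to multi-turn chat by carrying the full message history into each `create_chat_completion` call. A minimal sketch of history management is below; the model call itself is left as a comment because it requires downloading the GGUF weights, and the assistant reply shown is illustrative, not actual model output:

```python
def append_turn(history, role, content):
    """Return a new message list with one chat turn appended."""
    return history + [{"role": role, "content": content}]

history = []
history = append_turn(history, "user", "What is the capital of France?")
# response = llm.create_chat_completion(messages=history)  # needs the GGUF download
# reply = response["choices"][0]["message"]["content"]
reply = "The capital of France is Paris."  # illustrative placeholder
history = append_turn(history, "assistant", reply)
history = append_turn(history, "user", "And what is its population?")
print(len(history))  # → 3
```

Passing the accumulated `history` on each call is what gives the model context from earlier turns; llama-cpp-python applies the chat template for you.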

Uploaded model

  • Developed by: safe049
  • License: apache-2.0
  • Finetuned from model: safe049/SmolLumi-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
