How to use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="sugoitoolkit/Sugoi-32B-Ultra-GGUF",
	filename="",  # fill in the name of the .gguf quantization file you want to load
)
llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Меня зовут Вольфганг и я живу в Берлине"}
	]
)
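create_chat_completion returns an OpenAI-style completion dictionary; as a minimal sketch (reusing the same message as above), the generated reply can be read out of it like this:

result = llm.create_chat_completion(
	messages=[{"role": "user", "content": "Меня зовут Вольфганг и я живу в Берлине"}]
)
print(result["choices"][0]["message"]["content"])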

Sugoi LLM 32B Ultra (GGUF version)

Sugoi 32B Ultra unleashes the full potential of the previous Sugoi 32B model. Benchmarks coming soon.

Format: GGUF
Model size: 33B params
Architecture: qwen2

Available quantizations: 2-bit, 4-bit, 8-bit, 16-bit

Model tree for sugoitoolkit/Sugoi-32B-Ultra-GGUF
Base model: Qwen/Qwen2.5-32B (this repository is a quantized derivative)
