How to use with the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="NightForger/avibe-GGUF",
	filename="model_Q4_K_M.gguf",
)
llm.create_chat_completion(
	messages = [
		{
			"role": "user",
			"content": "What is the capital of France?"
		}
	]
)
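For interactive use you may prefer streaming instead of waiting for the whole reply. As a sketch (not from this model card): `create_chat_completion(stream=True)` in llama-cpp-python yields OpenAI-style chunks whose incremental text sits in `choices[0]["delta"]["content"]`, and a small helper can stitch them together:

```python
# Sketch: collecting a streamed chat completion. Chunk layout assumed to be
# the OpenAI-style delta format that llama-cpp-python emits with stream=True.

def collect_stream(chunks):
    """Concatenate the text deltas from a stream of chat-completion chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0].get("delta", {})
        text = delta.get("content")  # first/last chunks may carry no content
        if text:
            parts.append(text)
    return "".join(parts)

# Usage against a real model (not run here):
# stream = llm.create_chat_completion(messages=[...], stream=True)
# print(collect_stream(stream))
```

Printing each delta as it arrives instead of collecting them gives a typewriter-style UI with the same loop.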

AvitoTech/avibe, a Russian SFT-tune of the qwen3-8b-base model with a custom tokenizer [GGUF edition]

This is simply a fast GGUF version of that model.

Code example:

# Where supported, prefer vLLM or exl2; GGUF targets llama.cpp-based stacks
# Install the required libraries
#!pip install llama-cpp-python huggingface_hub

# Import the required modules
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# Repository ID and model filename on the Hub
MODEL_REPO = "NightForger/avibe-GGUF"
MODEL_FILENAME = "model_Q4_K_M.gguf"

# Download the model from the Hugging Face Hub
model_path = hf_hub_download(repo_id=MODEL_REPO, filename=MODEL_FILENAME)

# Initialize the model
llm = Llama(model_path=model_path, n_threads=8)

# ะะฐัั‚ั€ะพะนะบะฐ ะฟะฐั€ะฐะผะตั‚ั€ะพะฒ ะณะตะฝะตั€ะฐั†ะธะธ
generation_config = {
    "max_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.9,
    "repeat_penalty": 1.1,
}

# System message (character persona). In English: "You are that very bathhouse
# attendant. A legendary attendant with his legendary jokes in the men's
# bathhouse. The jokes are dark and funny."
system_prompt = """Ты тот самый банщик. Легендарный банщик со своими легендарными анекдотами в мужской бане. Шутки чёрные и смешные."""

# User question. In English: "Hi! Can you tell me a short but funny joke?"
user_question = "Привет! Можешь рассказать мне короткий, но смешной анекдот?"

# Assemble the messages in chat format
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_question},
]

# Generate a response
response = llm.create_chat_completion(
    messages=messages,
    max_tokens=generation_config["max_tokens"],
    temperature=generation_config["temperature"],
    top_p=generation_config["top_p"],
    repeat_penalty=generation_config["repeat_penalty"],
)

# Extract the generated text
generated_text = response['choices'][0]['message']['content'].strip()

# Print the result ("Вопрос" = question, "Ответ" = answer)
print(f"Вопрос: {user_question}")
print(f"Ответ: {generated_text}")
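The messages list above covers a single turn. Chat completion is stateless, so a follow-up question requires resending the history with the assistant's reply appended. A minimal sketch (the helper name and follow-up text are illustrative, not from the model card):

```python
# Sketch: extending a chat history for a multi-turn conversation.

def next_turn(messages, assistant_reply, user_followup):
    """Return a new message list extended with the last reply and the next question."""
    # Build a fresh list so the caller's original history is left untouched
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": user_followup},
    ]

# Hypothetical usage with the variables defined above:
# messages = next_turn(messages, generated_text, "Расскажи ещё один!")
# response = llm.create_chat_completion(messages=messages, **generation_config)
```

Because `generation_config` holds plain keyword arguments, it can be reused across turns via `**generation_config` as shown.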

Output example

Вопрос: Привет! Можешь рассказать мне короткий, но смешной анекдот?
Ответ: — Доктор, я не могу жить без воды! — кричит пациент.
— Вы серьёзно? — отвечает врач. — А что вы будете делать с тем мужчиной в вашей жизни, который тоже не может жить без... водки?

(Классика жанра: вода vs водочка — классика мужского банного юмора!)

(English: Question: "Hi! Can you tell me a short but funny joke?" Answer: "Doctor, I can't live without water!" shouts the patient. "Are you serious?" replies the doctor. "And what will you do with that man in your life who also can't live without... vodka?" A classic of the genre: water vs. vodka, a staple of men's bathhouse humor!)
Model size: 8B params
Architecture: qwen3
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

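The bit widths above trade file size for quality. A back-of-the-envelope size estimate is parameters times bits per weight divided by 8; real k-quants mix bit widths and add metadata, so treat this sketch as a rough lower bound, not an exact figure:

```python
# Rough sketch: approximate GGUF file size for a uniform quantization.
# Ignores mixed-precision k-quant layouts and file metadata.

def approx_gguf_gib(n_params: float, bits: float) -> float:
    """Approximate model file size in GiB: params * bits/weight / 8 bytes."""
    return n_params * bits / 8 / 2**30

# For the 8B model here (approximate):
# approx_gguf_gib(8e9, 4)  -> ~3.7 GiB for a 4-bit quant
# approx_gguf_gib(8e9, 16) -> ~14.9 GiB at 16-bit
```

This is a quick way to check whether a given quant fits in your RAM or VRAM before downloading it.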

Model tree for NightForger/avibe-GGUF:
- Base model: Qwen/Qwen3-8B
- Finetuned: AvitoTech/avibe
- Quantized: this model