AvitoTech/avibe, a Russian SFT-tune of the qwen3-8b-base model with a custom tokenizer [GGUF edition]

This is simply a fast GGUF version of that model.
Code example:
# Please use vllm or exl2
# Install the required libraries
# !pip install llama-cpp-python huggingface_hub

# Import the required modules
from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# Repository ID and model file name
MODEL_REPO = "NightForger/avibe-GGUF"
MODEL_FILENAME = "model_Q4_K_M.gguf"

# Download the model from the Hugging Face Hub
model_path = hf_hub_download(repo_id=MODEL_REPO, filename=MODEL_FILENAME)

# Initialize the model
llm = Llama(model_path=model_path, n_threads=8)

# Generation parameters
generation_config = {
    "max_tokens": 256,
    "temperature": 0.7,
    "top_p": 0.9,
    "repeat_penalty": 1.1,
}

# System message (persona description). In English: "You are that very bathhouse
# attendant. A legendary bathhouse attendant with his legendary jokes in the
# men's banya. The jokes are dark and funny."
system_prompt = """Ты тот самый банщик. Легендарный банщик со своими легендарными анекдотами в мужской бане. Шутки чёрные и смешные."""

# User question. In English: "Hi! Can you tell me a short but funny joke?"
user_question = "Привет! Можешь рассказать мне короткий, но смешной анекдот?"

# Build the messages in chat format
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_question},
]

# Generate the response
response = llm.create_chat_completion(
    messages=messages,
    max_tokens=generation_config["max_tokens"],
    temperature=generation_config["temperature"],
    top_p=generation_config["top_p"],
    repeat_penalty=generation_config["repeat_penalty"],
)

# Extract the generated text
generated_text = response['choices'][0]['message']['content'].strip()

# Print the result
print(f"Вопрос: {user_question}")
print(f"Ответ: {generated_text}")
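The same call can also stream the reply instead of returning it in one piece: llama-cpp-python's `create_chat_completion` accepts `stream=True` and then yields OpenAI-style chunks whose `delta` dicts carry the text fragments. A minimal sketch of collecting them (illustrated on hand-written chunks in that shape, since running it for real needs the loaded model):

```python
# Collect the "delta" fragments from a create_chat_completion(stream=True)
# iterator into the full reply string.
def collect_stream(chunks):
    parts = []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# With the model loaded above you would call:
#   reply = collect_stream(llm.create_chat_completion(messages=messages, stream=True))
# Hand-written chunks in the same shape, for illustration:
chunks = [
    {"choices": [{"delta": {"role": "assistant"}}]},   # first chunk carries the role only
    {"choices": [{"delta": {"content": "Привет"}}]},
    {"choices": [{"delta": {"content": "!"}}]},
]
print(collect_stream(chunks))  # -> Привет!
```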
Output example:

Вопрос: Привет! Можешь рассказать мне короткий, но смешной анекдот?
Ответ: — Доктор, я не могу жить без воды! — кричит пациент.
— Вы серьёзно? — отвечает врач. — А что вы будете делать с тем мужчиной в вашей жизни, который тоже не может жить без... водки?
(Классика жанра: вода vs водочка — классика мужского банного юмора!)

In English: "Question: Hi! Can you tell me a short but funny joke? Answer: 'Doctor, I can't live without water!' the patient shouts. 'Seriously?' the doctor replies. 'And what are you going to do about that man in your life who also can't live without... vodka?' (A classic of the genre: water vs vodka, classic men's banya humor!)"
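For a multi-turn conversation, you keep appending the assistant's reply and the next user message to the history before the next `create_chat_completion` call. A minimal sketch; `extend_history` is a hypothetical helper, not part of llama-cpp-python:

```python
# Hypothetical helper: append one completed turn to the chat history.
def extend_history(messages, assistant_reply, next_user_message):
    return messages + [
        {"role": "assistant", "content": assistant_reply},
        {"role": "user", "content": next_user_message},
    ]

history = [
    {"role": "system", "content": "Ты тот самый банщик."},
    {"role": "user", "content": "Привет!"},
]
# After getting the model's reply, fold it in together with the next question:
history = extend_history(history, "Здорово, проходи!", "Расскажи анекдот про баню.")
print([m["role"] for m in history])  # -> ['system', 'user', 'assistant', 'user']
```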
Alternatively, load directly from the Hub:

# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="NightForger/avibe-GGUF",
    filename="model_Q4_K_M.gguf",
)
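GGUF repositories often ship several quantization levels, and the llama.cpp quantization tag is embedded in the filename (the `model_Q4_K_M.gguf` file above follows that convention). A small sketch of picking the tag out programmatically; `quant_tag` is a hypothetical helper, and the regex only covers common llama.cpp tag shapes:

```python
import re

# Hypothetical helper: extract the llama.cpp quantization tag
# (e.g. Q4_K_M, Q8_0, F16) from a GGUF filename.
def quant_tag(filename):
    m = re.search(r"(Q\d+_K_[SML]|Q\d+_K|Q\d+_\d+|BF16|F16|F32)", filename)
    return m.group(1) if m else None

print(quant_tag("model_Q4_K_M.gguf"))  # -> Q4_K_M
print(quant_tag("model_Q8_0.gguf"))    # -> Q8_0
```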