# 🌴 Utu - North Maluku Malay (GGUF)

GGUF build of haidar038/utu-malut.
Optimized for CPU inference with llama.cpp, Ollama, or LM Studio.

## Choosing a File

| File | Size | Notes |
|------|------|-------|
| `*q4_k_m*.gguf` | ~4.7 GB | ✅ Recommended - best quality/size balance |
| `*q5_k_m*.gguf` | ~5.7 GB | Higher quality |
| `*q8_0*.gguf` | ~8.5 GB | Near-lossless |
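To fetch a single quantization ahead of time, the Hub CLI can filter by pattern. A minimal sketch; the wildcard is assumed to match the repo's actual filename:

```shell
# Download only the q4_k_m file into the current directory
huggingface-cli download haidar038/utu-malut-GGUF \
  --include "*q4_k_m*.gguf" --local-dir .
```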

## Usage

### llama.cpp

```shell
./llama-cli -m model-q4_k_m.gguf --chat-template llama3 -i
```
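Beyond the interactive CLI, llama.cpp also ships `llama-server`, which exposes an OpenAI-compatible HTTP API. A sketch; the filename and port are assumptions:

```shell
# Serve the model locally; chat requests go to /v1/chat/completions on port 8080
./llama-server -m model-q4_k_m.gguf --chat-template llama3 --port 8080
```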

### Ollama

```shell
ollama run hf.co/haidar038/utu-malut-GGUF:Q4_K_M
```

### Python

```python
from llama_cpp import Llama

# Downloads the matching GGUF file from the Hub and loads it
# (requires the huggingface_hub package)
llm = Llama.from_pretrained(
    repo_id="haidar038/utu-malut-GGUF",
    filename="*q4_k_m*.gguf",
    n_ctx=512,  # context window; raise for longer conversations
)
out = llm.create_chat_completion(messages=[
    # System prompt in North Maluku Malay:
    # "You are Utu, an AI assistant from Ternate."
    {"role": "system", "content": "Ngana adalah Utu, asisten AI dari Ternate."},
    # User turn: "Where do you want to go?"
    {"role": "user", "content": "Ngana mau pigi mana?"},
])
print(out["choices"][0]["message"]["content"])
```
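Note that the `filename` argument is a glob, not an exact name: `from_pretrained` matches it against the repo's file listing. A minimal sketch of that resolution using `fnmatch` (the listed filenames are assumptions, not the repo's actual files):

```python
from fnmatch import fnmatch

# Hypothetical file listing for the GGUF repo
files = [
    "utu-malut-q4_k_m.gguf",
    "utu-malut-q5_k_m.gguf",
    "utu-malut-q8_0.gguf",
]

# The same wildcard passed to Llama.from_pretrained(filename=...)
pattern = "*q4_k_m*.gguf"
matches = [f for f in files if fnmatch(f, pattern)]
print(matches)  # -> ['utu-malut-q4_k_m.gguf']
```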
## Model Details

- Format: GGUF
- Model size: 8B params
- Architecture: llama
