My own (ZeroWw) quantizations: the output and embed tensors are quantized to f16, and all other tensors are quantized to q5_k or q6_k.

Result: both the f16.q6 and f16.q5 files are smaller than standard q8_0 quantization, and they perform as well as pure f16.
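To see which tensors ended up at which precision, you can inspect a downloaded GGUF file with the `gguf` Python package. This is a minimal sketch, not part of the original card; the filename below is a placeholder for whichever .gguf file you fetched from this repo.

```python
# pip install gguf
from gguf import GGUFReader

# Placeholder path: point this at the GGUF file you downloaded from this repo.
reader = GGUFReader("Seed-Coder-8B-Instruct.f16.q6.gguf")

# Each tensor records its own quantization type, so the mixed scheme
# (f16 output/embed tensors, q5_k/q6_k everywhere else) is visible directly.
for tensor in reader.tensors:
    print(f"{tensor.name}: {tensor.tensor_type.name}")
```

Tensors such as `output.weight` and `token_embd.weight` should report F16, while the remaining tensors should report Q5_K or Q6_K.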
Updated on: Mon May 12, 11:17:22
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download and load a GGUF file straight from the Hugging Face repo.
llm = Llama.from_pretrained(
    repo_id="ZeroWw/Seed-Coder-8B-Instruct-GGUF",
    filename="",  # fill in the .gguf file you want from this repo
)

llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ]
)
```
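`create_chat_completion` returns an OpenAI-style completion dict when not streaming; a quick way to get at the generated text (a usage sketch, assuming the setup above):

```python
# The result is a dict in OpenAI chat-completion format.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)

# The generated reply lives under choices[0]["message"]["content"].
print(response["choices"][0]["message"]["content"])
```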