My own (ZeroWw) quantizations. The output and embed tensors are quantized to f16; all other tensors are quantized to q5_k or q6_k.

Result: both f16.q6 and f16.q5 are smaller than the standard q8_0 quantization, and they perform as well as the pure f16.

Updated on: Sat Aug 09, 15:48:57

Use with llama-cpp-python:

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

# Download the model from the Hugging Face Hub and load it.
llm = Llama.from_pretrained(
    repo_id="ZeroWw/OpenELM-3B-Instruct-GGUF",
    filename="",  # left empty in the original snippet; set it to one of the repo's GGUF files
)

# Plain text completion; echo=True returns the prompt together with the generation.
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
```
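The loader above leaves `filename` empty. As a sketch beyond the original card, the example below loads one specific variant and uses llama-cpp-python's chat API instead of a raw completion; the GGUF file name is hypothetical, so check the repository's file list for the actual names.

```python
# A sketch, assuming a file with this name exists in the repo -- the name is
# hypothetical and must be checked against the repository's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ZeroWw/OpenELM-3B-Instruct-GGUF",
    filename="OpenELM-3B-Instruct.f16.q5.gguf",  # hypothetical file name
)

# create_chat_completion formats the messages with the model's chat template
# and returns an OpenAI-style response dict.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a short story."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

If the chosen GGUF carries no chat-template metadata, the plain completion call shown above is the more reliable path.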