Use from the llama-cpp-python library
# !pip install llama-cpp-python

from llama_cpp import Llama

llm = Llama.from_pretrained(
	repo_id="ggml-org/AutoGLM-Phone-9B-GGUF",
	# The repo hosts multiple quantizations; this glob is an assumed pattern
	# for a 4-bit file, so adjust it to a filename that actually exists.
	filename="*Q4_*.gguf",
)

response = llm.create_chat_completion(
	messages=[
		{"role": "user", "content": "Hello, what can you do?"}
	]
)
print(response["choices"][0]["message"]["content"])

AutoGLM-Phone-9B-GGUF

This model was converted from zai-org/AutoGLM-Phone-9B to GGUF format with llama.cpp's convert_hf_to_gguf.py script.
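For reference, a conversion along these lines can be reproduced with the script that ships in the llama.cpp repository. A minimal sketch, where the input path is a placeholder for a local copy of the source model and the f16 output type is an assumption, not the exact command used:

python convert_hf_to_gguf.py /path/to/AutoGLM-Phone-9B --outfile AutoGLM-Phone-9B-f16.gguf --outtype f16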

To use it:

llama-server -hf ggml-org/AutoGLM-Phone-9B-GGUF
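
Once the server is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of a chat request, assuming the default port 8080 and a made-up prompt:

curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "Hello, what can you do?"}]}'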
Format: GGUF
Model size: 9B params
Architecture: glm4
Quantization: 4-bit
