This model, yeahdongcn/AutoGLM-Phone-9B-Q4_K_M-GGUF, was converted to GGUF and quantized to Q4_K_M using the official tools provided by llama.cpp.
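The card does not spell out the exact commands; a plausible sketch of the llama.cpp workflow, assuming the original Hugging Face checkpoint sits at /models/AutoGLM-Phone-9B (that path and the intermediate F16 filename are assumptions, not from this card):

```shell
# Convert the original Hugging Face checkpoint to GGUF at F16 precision,
# then quantize to Q4_K_M with llama.cpp's bundled tools.
python convert_hf_to_gguf.py /models/AutoGLM-Phone-9B \
    --outfile /models/AutoGLM-Phone-9B-F16.gguf --outtype f16

./build/bin/llama-quantize \
    /models/AutoGLM-Phone-9B-F16.gguf \
    /models/AutoGLM-Phone-9B-Q4_K_M.gguf Q4_K_M
```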

Serving:

./build/bin/llama-server -m /models/AutoGLM-Phone-9B-Q4_K_M.gguf -ngl 999
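Once the server above is running, it exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal sketch of a chat request, assuming the default port; the "model" field is illustrative, since a single-model llama-server does not require a specific name:

```python
import json
import urllib.request

# Endpoint of llama-server's OpenAI-compatible chat API (default port assumed).
url = "http://localhost:8080/v1/chat/completions"

# Standard chat-completions payload; max_tokens caps the reply length.
payload = {
    "model": "AutoGLM-Phone-9B-Q4_K_M",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running, send the request and print the reply:
# resp = urllib.request.urlopen(req)
# print(json.loads(resp.read())["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```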
Model details:

- Format: GGUF
- Quantization: 4-bit (Q4_K_M)
- Model size: 9B params
- Architecture: glm4