solarkyle/GLM-4.7-Flash-GGUF
Tags: Text Generation · GGUF · quantized · llama-cpp · llama.cpp · Mixture of Experts · glm4 · 4-bit precision · Q4_K_M · conversational
License: apache-2.0
Commit History (main)
Update README.md
4b16fc0 · verified · solarkyle committed on Jan 20

Upload GLM-4.7-Flash-Q4_K_M.gguf with huggingface_hub
2ab4cc5 · verified · solarkyle committed on Jan 19

Upload README.md with huggingface_hub
302a97b · verified · solarkyle committed on Jan 19

Upload README.md with huggingface_hub
78f3074 · verified · solarkyle committed on Jan 19

Upload README.md with huggingface_hub
2935593 · verified · solarkyle committed on Jan 19

Upload README.md with huggingface_hub
edc3894 · verified · solarkyle committed on Jan 19

Upload README.md with huggingface_hub
b6bd600 · verified · solarkyle committed on Jan 19

initial commit
d466a31 · verified · solarkyle committed on Jan 19
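The commit history above shows a single quantized weight file, GLM-4.7-Flash-Q4_K_M.gguf, uploaded to this repository. A minimal sketch of building the file's direct download URL, assuming Hugging Face's standard `resolve` endpoint pattern (`https://huggingface.co/{repo_id}/resolve/{revision}/{filename}`); the repo and file names are taken from the page, the rest is an illustration:

```python
# Sketch: construct the direct download URL for the GGUF file in this repo.
# repo_id and filename come from the commit history on this page;
# the /resolve/ URL pattern is Hugging Face's standard file endpoint.
repo_id = "solarkyle/GLM-4.7-Flash-GGUF"
filename = "GLM-4.7-Flash-Q4_K_M.gguf"
revision = "main"  # branch shown in the commit history

url = f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"
print(url)
```

In practice the same download is usually done with `huggingface_hub`'s `hf_hub_download(repo_id, filename)`, which also handles caching and authentication.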