solarkyle/GLM-4.7-Flash-GGUF
Tags: Text Generation, GGUF, quantized, llama-cpp, llama.cpp, Mixture of Experts, glm4, 4-bit precision, Q4_K_M, conversational
License: apache-2.0
Branch: main
Repository: GLM-4.7-Flash-GGUF, 18.1 GB, 1 contributor, 8 commits
Latest commit: 4b16fc0 (verified) by solarkyle, "Update README.md", about 9 hours ago
File                       Size     Last commit message                                    Updated
.gitattributes             1.58 kB  Upload GLM-4.7-Flash-Q4_K_M.gguf with huggingface_hub  about 12 hours ago
GLM-4.7-Flash-Q4_K_M.gguf  18.1 GB (xet)  Upload GLM-4.7-Flash-Q4_K_M.gguf with huggingface_hub  about 12 hours ago
README.md                  4.06 kB  Update README.md                                       about 9 hours ago
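For readers who want to fetch the quantized file programmatically rather than through the web UI, the sketch below builds the direct download URL for the GGUF. The repo id and filename are taken from this listing; the `/resolve/<revision>/<file>` path is Hugging Face's standard raw-file endpoint. In practice, `huggingface_hub.hf_hub_download` is the more robust option (it handles caching and resumable downloads), but it requires the `huggingface_hub` package.

```python
from urllib.parse import quote


def gguf_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Hugging Face serves raw repository files at:
    #   https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
    # quote() percent-encodes any unsafe characters in the revision/filename.
    return (
        f"https://huggingface.co/{repo_id}"
        f"/resolve/{quote(revision)}/{quote(filename)}"
    )


url = gguf_url("solarkyle/GLM-4.7-Flash-GGUF", "GLM-4.7-Flash-Q4_K_M.gguf")
# The resulting URL can be passed to curl/wget; note the file is ~18.1 GB.
```

The resulting file can then be loaded by any GGUF-aware runtime such as llama.cpp (e.g. `llama-cli -m GLM-4.7-Flash-Q4_K_M.gguf`).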