Hugging Face
solarkyle/GLM-4.7-Flash-GGUF (4 likes)
Tags: Text Generation, GGUF, quantized, llama-cpp, llama.cpp, Mixture of Experts, glm4, 4-bit precision, Q4_K_M, conversational
License: apache-2.0
GLM-4.7-Flash-GGUF (5.59 kB), 1 contributor, History: 6 commits
Latest commit: 302a97b (verified) by solarkyle, "Upload README.md with huggingface_hub", 3 months ago
Files:
.gitattributes (1.52 kB): initial commit, 3 months ago
README.md (4.07 kB): Upload README.md with huggingface_hub, 3 months ago
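
The tags above indicate a GGUF build, quantized to 4-bit Q4_K_M, intended for llama.cpp. A minimal sketch of fetching and running it, assuming `huggingface-cli` (from the `huggingface_hub` package) and a llama.cpp build are installed; this page does not show the actual `.gguf` filename inside the repo, so the model path below is a placeholder to fill in after downloading:

```shell
# Download the repository contents (filenames are not listed on this page,
# so inspect the downloaded directory to find the actual .gguf file)
huggingface-cli download solarkyle/GLM-4.7-Flash-GGUF --local-dir ./GLM-4.7-Flash-GGUF

# Start an interactive chat with llama.cpp's CLI; <model-file> is a placeholder
llama-cli -m ./GLM-4.7-Flash-GGUF/<model-file>.gguf -cnv -p "You are a helpful assistant."
```

The `-cnv` flag enables llama-cli's conversation mode, which matches the "conversational" tag on this repo; for a one-shot completion instead, drop `-cnv` and pass the prompt directly with `-p`.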