solarkyle/GLM-4.7-Flash-GGUF
Tags: Text Generation · GGUF · quantized · llama-cpp · llama.cpp · Mixture of Experts · glm4 · 4-bit precision · Q4_K_M · conversational

License: apache-2.0
Commit history for GLM-4.7-Flash-GGUF/README.md (branch: main)
Update README.md
4b16fc0 · verified · solarkyle committed about 21 hours ago

Upload README.md with huggingface_hub
302a97b · verified · solarkyle committed 1 day ago

Upload README.md with huggingface_hub
78f3074 · verified · solarkyle committed 1 day ago

Upload README.md with huggingface_hub
2935593 · verified · solarkyle committed 1 day ago

Upload README.md with huggingface_hub
edc3894 · verified · solarkyle committed 1 day ago

Upload README.md with huggingface_hub
b6bd600 · verified · solarkyle committed 1 day ago