mudler/GLM-4.7-Flash-APEX-GGUF
Likes: 4
Tags: GGUF, quantized, apex, mixture-of-experts, glm, mla, conversational
License: apache-2.0
Total repository size: 124 GB
1 contributor · History: 9 commits
Latest commit by mudler: "Upload README.md with huggingface_hub" (3aa7bd9, verified, 1 day ago)
File                                  Size      Last commit message                                              Updated
.gitattributes                        2 kB      Upload GLM-4.7-Flash-APEX-Quality.gguf with huggingface_hub      1 day ago
GLM-4.7-Flash-APEX-Balanced.gguf      22.1 GB   Upload GLM-4.7-Flash-APEX-Balanced.gguf with huggingface_hub     1 day ago
GLM-4.7-Flash-APEX-Compact.gguf       14.6 GB   Upload GLM-4.7-Flash-APEX-Compact.gguf with huggingface_hub      1 day ago
GLM-4.7-Flash-APEX-I-Balanced.gguf    22.1 GB   Upload GLM-4.7-Flash-APEX-I-Balanced.gguf with huggingface_hub   1 day ago
GLM-4.7-Flash-APEX-I-Compact.gguf     14.6 GB   Upload GLM-4.7-Flash-APEX-I-Compact.gguf with huggingface_hub    1 day ago
GLM-4.7-Flash-APEX-I-Mini.gguf        11.9 GB   Upload GLM-4.7-Flash-APEX-I-Mini.gguf with huggingface_hub       1 day ago
GLM-4.7-Flash-APEX-I-Quality.gguf     19.2 GB   Upload GLM-4.7-Flash-APEX-I-Quality.gguf with huggingface_hub    1 day ago
GLM-4.7-Flash-APEX-Quality.gguf       19.2 GB   Upload GLM-4.7-Flash-APEX-Quality.gguf with huggingface_hub      1 day ago
README.md                             2.6 kB    Upload README.md with huggingface_hub                            1 day ago