# Edge-Quant/GLM-4.6V-Flash-Q4_K_M-GGUF

**Pipeline:** Image-Text-to-Text · **Formats/Libraries:** Transformers, GGUF · **Languages:** Chinese, English · **Tags:** llama-cpp, gguf-my-repo · **License:** MIT
## Files and versions (branch: `main`)

6.17 GB total · 1 contributor · 3 commits
Latest commit by Edge-Quant: "Upload README.md with huggingface_hub" (2fc74af, verified, about 1 month ago)
| File | Size | Last commit | Age |
|---|---|---|---|
| .gitattributes | 1.58 kB | Upload glm-4.6v-flash-q4_k_m.gguf with huggingface_hub | about 1 month ago |
| README.md | 1.81 kB | Upload README.md with huggingface_hub | about 1 month ago |
| glm-4.6v-flash-q4_k_m.gguf | 6.17 GB | Upload glm-4.6v-flash-q4_k_m.gguf with huggingface_hub | about 1 month ago |
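The page itself carries no usage instructions, but repos produced by gguf-my-repo are intended to be run with llama.cpp. The sketch below shows a typical invocation for this quantized file; the prompt text is an arbitrary placeholder, and since this is an Image-Text-to-Text model, full vision support may require a separate multimodal projector file that is not listed in this repo.

```shell
# Install llama.cpp (Homebrew shown; building from source also works).
brew install llama.cpp

# One-shot text generation: llama-cli can fetch the GGUF directly
# from the Hub via --hf-repo / --hf-file.
llama-cli \
  --hf-repo Edge-Quant/GLM-4.6V-Flash-Q4_K_M-GGUF \
  --hf-file glm-4.6v-flash-q4_k_m.gguf \
  -p "Describe the GGUF file format in one sentence."

# Or serve an OpenAI-compatible HTTP endpoint instead:
llama-server \
  --hf-repo Edge-Quant/GLM-4.6V-Flash-Q4_K_M-GGUF \
  --hf-file glm-4.6v-flash-q4_k_m.gguf \
  -c 2048
```

Note that downloading pulls the full 6.17 GB file; a Q4_K_M quantization of this size typically runs comfortably on machines with roughly 8 GB of free RAM or VRAM, though exact requirements depend on context size.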