ubergarm/GLM-4.7-Flash-GGUF

Tags: Text Generation · GGUF · imatrix · conversational · ik_llama.cpp · glm4_moe_lite
GLM-4.7-Flash-GGUF · 55.9 GB · 1 contributor · History: 24 commits
Latest commit by ubergarm: update perplexity graphs with gating function fix (83139c1, about 1 month ago)
  • images/ · update perplexity graphs with gating function fix · about 1 month ago
  • logs/ · update perplexity graphs with gating function fix · about 1 month ago
  • .gitattributes · 1.65 kB · testing quant · about 2 months ago
  • GLM-4.7-Flash-IQ5_K.gguf · 22.7 GB · Upload GLM-4.7-Flash-IQ5_K.gguf with huggingface_hub · about 1 month ago
  • GLM-4.7-Flash-MXFP4.gguf · 17.1 GB · Upload GLM-4.7-Flash-MXFP4.gguf with huggingface_hub · about 1 month ago
  • GLM-4.7-Flash-smol-IQ4_KSS.gguf · 16 GB · Upload GLM-4.7-Flash-smol-IQ4_KSS.gguf with huggingface_hub · about 1 month ago
  • README.md · 13 kB · update perplexity graphs with gating function fix · about 1 month ago
  • imatrix-GLM-4.7-Flash-BF16.dat · 72.3 MB · Upload imatrix-GLM-4.7-Flash-BF16.dat with huggingface_hub · about 1 month ago
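The commit log shows these files were uploaded with huggingface_hub; fetching one back follows the same route. A minimal sketch of building the direct-download URLs for the quants listed above, assuming Hugging Face's standard `resolve/{revision}/{filename}` URL layout (in practice `huggingface_hub.hf_hub_download(repo_id, filename)` handles this, plus caching, for you):

```python
# Sketch: direct-download URLs for the GGUF quants in this repo.
# The resolve-URL layout is an assumption about the Hub's routing;
# huggingface_hub.hf_hub_download is the usual way to fetch files.

REPO_ID = "ubergarm/GLM-4.7-Flash-GGUF"


def resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the direct-download URL for a file in a Hugging Face model repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"


if __name__ == "__main__":
    for fname in (
        "GLM-4.7-Flash-IQ5_K.gguf",
        "GLM-4.7-Flash-MXFP4.gguf",
        "GLM-4.7-Flash-smol-IQ4_KSS.gguf",
    ):
        print(resolve_url(REPO_ID, fname))
```

Pinning `revision` to the commit hash (e.g. `83139c1`) instead of `main` gives a reproducible download.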