
Hugging Face model repository: baa-ai/GLM-4.7-Flash-RAM-20GB-MLX

Tags: MLX, Safetensors, glm4_moe_lite, quantized, mixed-precision, glm, Mixture of Experts, 4-bit precision
Total size: 19.3 GB
  • 1 contributor
History: 7 commits
Latest commit: tomkay, "Fix YAML metadata formatting" (c742d1d, verified, 1 day ago)
All files were added in the commit "RAM 20GB mixed-precision quantization of GLM-4.7-Flash" 11 days ago, except README.md, which was last updated by "Fix YAML metadata formatting" 1 day ago. Entries marked "xet" use Hugging Face's Xet storage backend.

  • .gitattributes (1.57 kB)
  • README.md (2.38 kB)
  • chat_template.jinja (3.12 kB)
  • config.json (66 kB)
  • generation_config.json (181 Bytes)
  • model-00001-of-00004.safetensors (5.37 GB, xet)
  • model-00002-of-00004.safetensors (5.31 GB, xet)
  • model-00003-of-00004.safetensors (5.3 GB, xet)
  • model-00004-of-00004.safetensors (3.3 GB, xet)
  • model.safetensors.index.json (130 kB)
  • tokenizer.json (20.2 MB, xet)
  • tokenizer_config.json (335 Bytes)
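The four safetensors shards account for almost all of the repository's 19.3 GB footprint. A quick sanity check in Python, using the sizes from the listing above (the UI rounds them, so the sum only approximates the listed total):

```python
# Shard sizes in GB as displayed in the file listing above.
# The UI rounds each value, so the sum approximates the 19.3 GB total.
shard_sizes_gb = {
    "model-00001-of-00004.safetensors": 5.37,
    "model-00002-of-00004.safetensors": 5.31,
    "model-00003-of-00004.safetensors": 5.30,
    "model-00004-of-00004.safetensors": 3.30,
}

total_gb = sum(shard_sizes_gb.values())
print(f"{total_gb:.2f} GB")  # prints 19.28 GB
```

The small remainder of the repository is the 20.2 MB tokenizer plus kilobyte-scale configuration files.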
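When weights are split across multiple safetensors shards like this, model.safetensors.index.json tells loaders which shard holds each tensor. A minimal sketch of that structure and how it is consumed; the tensor names below are hypothetical illustrations, not taken from this repository's actual index:

```python
import json
from collections import Counter

# Hypothetical miniature of a safetensors index file. The real
# model.safetensors.index.json in this repo maps many more tensors
# across the four shards listed above.
index_json = """
{
  "metadata": {"total_size": 19280000000},
  "weight_map": {
    "model.embed_tokens.weight": "model-00001-of-00004.safetensors",
    "model.layers.0.mlp.gate.weight": "model-00001-of-00004.safetensors",
    "model.layers.40.mlp.gate.weight": "model-00004-of-00004.safetensors",
    "lm_head.weight": "model-00004-of-00004.safetensors"
  }
}
"""

index = json.loads(index_json)
# Count how many tensors live in each shard file.
tensors_per_shard = Counter(index["weight_map"].values())
print(dict(tensors_per_shard))
# prints {'model-00001-of-00004.safetensors': 2, 'model-00004-of-00004.safetensors': 2}
```

A loader reads this map once, then opens only the shard files it needs for the tensors being requested.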