inference-optimization / GLM-4.6-quantized.w4a16
Safetensors · glm4_moe · compressed-tensors · License: mit
main / GLM-4.6-quantized.w4a16 / generation_config.json
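The contents of generation_config.json are not shown on this page. As a rough illustrative sketch only, a Hugging Face generation_config.json for a chat model typically carries default sampling settings and special-token ids; every value below is hypothetical and not taken from this repository:

```json
{
  "do_sample": true,
  "temperature": 1.0,
  "top_p": 0.95,
  "eos_token_id": 0,
  "pad_token_id": 0,
  "max_new_tokens": 2048
}
```

These fields follow the `transformers` `GenerationConfig` schema; the actual ids and defaults for GLM-4.6 would come from the file itself.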
Commit History
Add model files and gitattributes · c0f0128 · Shubhra Pandit committed on Dec 12, 2025