# soundsgoodai/GLM-4.7-NVFP4-KV-cache-FP8

Text Generation · Safetensors · glm4_moe · conversational · 8-bit precision · modelopt

License: apache-2.0
## Model Description
A quantization setup used for GLM-4.7:
- Weights: NVFP4
- KV cache: FP8
- Tooling: NVIDIA/Model-Optimizer

Deploy with TensorRT-LLM.
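NVFP4 stores weights as 4-bit floats (E2M1) with a shared scale per 16-element block (FP8 E4M3 scales in the actual format). A minimal pure-Python sketch of the rounding step, assuming that layout; this is an illustration, not the Model-Optimizer implementation:

```python
# E2M1 (4-bit float) can only represent these magnitudes, plus sign.
E2M1_GRID = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_block_nvfp4(block):
    """Round one 16-element block to the E2M1 grid with a shared scale.

    Illustrative sketch: the real format stores the scale as FP8 (E4M3)
    and packs two 4-bit codes per byte; here we just return the
    dequantized values and the scale.
    """
    scale = max(abs(x) for x in block) / E2M1_GRID[-1]  # map max |x| to 6.0
    if scale == 0.0:
        return [0.0] * len(block), 0.0
    out = []
    for x in block:
        # nearest representable E2M1 magnitude for |x| / scale
        mag = min(E2M1_GRID, key=lambda g: abs(abs(x) / scale - g))
        out.append((1 if x >= 0 else -1) * mag * scale)
    return out, scale

block = [0.1, -0.4, 2.5, 0.05, -6.0, 1.1, 0.0, 3.3,
         0.2, -0.9, 0.7, 4.4, -2.2, 0.3, 5.9, -0.6]
deq, scale = quantize_block_nvfp4(block)
max_err = max(abs(a - b) for a, b in zip(block, deq))
```

Because the scale is shared per block, a single outlier (here -6.0) stretches the grid for all 16 values; the small block size is what keeps that error bounded compared to per-tensor 4-bit schemes.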
Downloads last month: 7,034

## Safetensors

- Model size: 177B params
- Tensor types: BF16 · F32 · F8_E4M3 · U8
## Model tree for soundsgoodai/GLM-4.7-NVFP4-KV-cache-FP8

- Base model: zai-org/GLM-4.7
- This model is one of 41 quantized versions of the base model.