QuantTrio/GLM-4.1V-9B-Thinking-GPTQ-Int4-Int8Mix
Tags: Text Generation · Safetensors · glm4v · GPTQ · Int4-Int8Mix · vLLM · conversational · 4-bit precision · gptq
arXiv: 2507.01006
License: MIT
Repository size: 9.5 GB · 1 contributor · History: 5 commits
Latest commit: JunHowie · "Delete .mv" · a869799 (verified) · 3 months ago
Files (all uploaded via "Upload folder using huggingface_hub", 5 months ago):

.gitattributes (1.57 kB)
README.md (3.84 kB)
chat_template.jinja (922 Bytes)
config.json (1.97 kB)
model-00001-of-00002.safetensors (5 GB)
model-00002-of-00002.safetensors (4.48 GB)
model.safetensors.index.json (168 kB)
preprocessor_config.json (364 Bytes)
requirements.txt (247 Bytes)
tokenizer.json (20 MB)
tokenizer_config.json (4.8 kB)
video_preprocessor_config.json (365 Bytes)