QuantTrio / MiniMax-M2.5-AWQ

Tags: Text Generation · Transformers · Safetensors · minimax_m2 · vLLM · AWQ · conversational · custom_code · 4-bit precision · awq
License: modified-mit
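Given the vLLM, AWQ, and custom_code tags above, serving this checkpoint would typically look something like the sketch below. This is not taken from the model card; the tensor-parallel size and the need for remote code are assumptions to adjust for your own hardware and the repo's actual contents.

```shell
# Sketch: serve the AWQ-quantized checkpoint with vLLM's OpenAI-compatible server.
# --trust-remote-code: the repo is tagged custom_code, so its modeling code must be allowed to load.
# --tensor-parallel-size 4: an assumption; match it to the number of GPUs you have.
vllm serve QuantTrio/MiniMax-M2.5-AWQ \
  --trust-remote-code \
  --tensor-parallel-size 4
```

vLLM detects AWQ quantization from the checkpoint config, so no explicit quantization flag is usually required.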
Community discussion:
Qwen3.5 AWQ 4 Bit (#1)
opened about 21 hours ago by yuchenxie

yuchenxie (about 21 hours ago):
Loved QuantTrio's AWQ weights for GLM — please do the same for Qwen3.5.