Antigma/QwQ-32B-GGUF
By Antigma Labs
Tags: GGUF, imatrix, conversational
License: apache-2.0
README.md exists but content is empty.
Downloads last month: 1
GGUF details
Model size: 33B params
Architecture: qwen2
Quantization: Q6_K_L (6-bit), 27.3 GB
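The listed file size follows from the quantization scheme: a 6-bit K-quant stores most weights at roughly 6.6 effective bits per weight once block scales are included. A minimal sketch of that back-of-the-envelope check, assuming ~33B parameters and an effective bits-per-weight figure that is an illustrative estimate, not a published constant:

```python
def gguf_size_gb(params: float, bits_per_weight: float) -> float:
    """Estimate a quantized GGUF file size in decimal gigabytes.

    params: total parameter count
    bits_per_weight: effective bits per weight, including block
    scales/overheads (assumed ~6.6 for a Q6_K-style quant here).
    """
    return params * bits_per_weight / 8 / 1e9  # bits -> bytes -> GB


# ~33B parameters (as listed on the page), ~6.6 effective bits per weight
estimate = gguf_size_gb(33e9, 6.6)
print(f"{estimate:.1f} GB")  # prints "27.2 GB"
```

The estimate lands close to the listed 27.3 GB; the small gap is consistent with Q6_K_L keeping a few tensors (e.g. embeddings) at higher precision.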
Inference Providers
This model isn't deployed by any Inference Provider.
Model tree for Antigma/QwQ-32B-GGUF
Base model: Qwen/Qwen2.5-32B
Finetuned from base: Qwen/QwQ-32B
Quantized from the finetune (one of 160 quantizations): this model