Instructions for using QuantTrio/DeepSeek-V3.2-AWQ with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use QuantTrio/DeepSeek-V3.2-AWQ with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="QuantTrio/DeepSeek-V3.2-AWQ")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("QuantTrio/DeepSeek-V3.2-AWQ", dtype="auto")
```
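Loading the model directly only gives you the weights; you still need a tokenizer and a generate call to get text out. Below is a minimal end-to-end sketch using the standard Transformers chat-template API (`max_new_tokens` is an arbitrary choice, and `device_map="auto"` assumes the `accelerate` package is installed):

```python
# End-to-end generation with the directly loaded model.
# Note: some DeepSeek releases may require trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "QuantTrio/DeepSeek-V3.2-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Who are you?"}]
# Format the chat with the model's built-in template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```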
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use QuantTrio/DeepSeek-V3.2-AWQ with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantTrio/DeepSeek-V3.2-AWQ"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantTrio/DeepSeek-V3.2-AWQ",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```bash
docker run --runtime nvidia --gpus all \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  -p 8000:8000 \
  --ipc=host \
  vllm/vllm-openai:latest \
  --model "QuantTrio/DeepSeek-V3.2-AWQ"
```
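Once the server is running (via pip or Docker), any OpenAI-compatible client can talk to it. A minimal sketch with the official `openai` Python package; the base URL matches the default port above, and the API key is a placeholder since a local vLLM server does not require one by default:

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server started above is listening on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="QuantTrio/DeepSeek-V3.2-AWQ",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```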
- SGLang
How to use QuantTrio/DeepSeek-V3.2-AWQ with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "QuantTrio/DeepSeek-V3.2-AWQ" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantTrio/DeepSeek-V3.2-AWQ",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "QuantTrio/DeepSeek-V3.2-AWQ" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "QuantTrio/DeepSeek-V3.2-AWQ",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
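As with vLLM, the SGLang endpoint speaks the OpenAI chat-completions protocol, so it can also be scripted directly. A minimal sketch with `requests`, assuming the port 30000 used in the launch commands above:

```python
# Query the local SGLang server through its OpenAI-compatible API.
# Assumes the server started above is listening on localhost:30000.
import requests

payload = {
    "model": "QuantTrio/DeepSeek-V3.2-AWQ",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```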
- Docker Model Runner
How to use QuantTrio/DeepSeek-V3.2-AWQ with Docker Model Runner:
```bash
docker model run hf.co/QuantTrio/DeepSeek-V3.2-AWQ
```
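Docker Model Runner also exposes an OpenAI-compatible endpoint. The sketch below is based on Docker's documentation and makes two assumptions worth verifying against your installation: that host TCP access has been enabled (e.g. `docker desktop enable model-runner --tcp 12434`) and that the `/engines/v1` path applies to your backend:

```python
# Query Docker Model Runner's OpenAI-compatible endpoint.
# ASSUMPTIONS: host TCP access is enabled on port 12434, and the
# /engines/v1 path is correct for this setup; check the Docker docs.
import requests

payload = {
    "model": "hf.co/QuantTrio/DeepSeek-V3.2-AWQ",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
resp = requests.post("http://localhost:12434/engines/v1/chat/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```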
Can this model be quantized?
Your AWQ quantization work is absolutely amazing! It would be incredible if you could work your magic on this model too (cerebras/DeepSeek-V3.2-REAP-345B-A37B) - an AWQ quant of this would be a game-changer for those of us running on limited VRAM!
Sure, let me have a look.
I tried making an AWQ quant for cerebras/DeepSeek-V3.2-REAP-345B-A37B, but the quality drop was severe — it’s very hard to preserve even basic response quality.
Since this is a heavily pruned MoE (~50% experts removed), I guess it may need a bit of continued training to re-stabilize the model before quantization is viable.
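For context, a typical AWQ quantization run with the AutoAWQ library looks roughly like the sketch below. This is not the exact pipeline used for the attempt above: the config values are common 4-bit defaults, the output path is hypothetical, and whether AutoAWQ supports this pruned architecture out of the box is an open question.

```python
# Illustrative AutoAWQ quantization sketch; config values are common
# defaults, not the exact settings used in the attempt discussed here.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "cerebras/DeepSeek-V3.2-REAP-345B-A37B"
quant_path = "DeepSeek-V3.2-REAP-AWQ"  # hypothetical output directory
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Calibrate and quantize; AutoAWQ uses a default calibration set
# unless calib_data is supplied.
model.quantize(tokenizer, quant_config=quant_config)

model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```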
Thanks a lot for all your quantization work! Totally get how tough this is. Maybe newer open-source models like GLM-4.7 or the upcoming Minimax-M2.1 could be better targets. Thank you for your contribution; quantization work is super valuable to the community!