---
license: mit
language:
- en
- zh
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-generation
library_name: transformers
---
# DeepSeek R1 AWQ
AWQ quantization of DeepSeek R1.
This quant modifies some of the model code to fix an overflow issue when using float16.
## Serving with vLLM
To serve using vLLM with 8x 80GB GPUs, use the following command:
```sh
VLLM_WORKER_MULTIPROC_METHOD=spawn python -m vllm.entrypoints.openai.api_server \
  --host 0.0.0.0 --port 12345 \
  --max-model-len 65536 --max-num-batched-tokens 65536 \
  --trust-remote-code --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.97 --dtype float16 \
  --served-model-name deepseek-reasoner \
  --model cognitivecomputations/DeepSeek-R1-AWQ
```
You can download the wheel I built for PyTorch 2.6 and Python 3.12 by clicking [here](https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl).
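If you prefer not to download the file manually, pip can also install the wheel straight from that URL. A minimal sketch, assuming your environment matches the wheel's Python 3.12 / CUDA 12.6 build:

```sh
# Install the prebuilt vLLM wheel directly from the URL linked above
# (built for PyTorch 2.6, Python 3.12, CUDA 12.6).
pip install "https://huggingface.co/x2ray/wheels/resolve/main/vllm-0.7.3.dev187%2Bg0ff1a4df.d20220101.cu126-cp312-cp312-linux_x86_64.whl"
```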
Inference speed with batch size 1 and a short prompt:
- 8x H100: 48 TPS
- 8x A100: 38 TPS
Note:
- Inference speed will be better than FP8 at low batch size but worse than FP8 at high batch size; this is the nature of low-bit quantization.
- vLLM now supports MLA for AWQ, so you can run this model with its full context length on just 8x 80GB GPUs.
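Once the server is up, you can sanity-check it with a request to the OpenAI-compatible endpoint. The port and model name below match the launch command above; the prompt is just an illustration:

```sh
# Query the OpenAI-compatible server started above
# (port 12345 and served model name "deepseek-reasoner" come from the launch command).
curl http://localhost:12345/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "Explain AWQ quantization in one sentence."}],
    "max_tokens": 1024
  }'
```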
## Serving with SGLang
```sh
python3 -m sglang.launch_server --model cognitivecomputations/DeepSeek-R1-AWQ --tp 8 --trust-remote-code --dtype half
```
Note:
- AWQ does not support BF16, so add the `--dtype half` flag when serving AWQ-quantized models.
- For more information about running DeepSeek-R1 with SGLang, feel free to check out their [documentation](https://docs.sglang.ai/references/deepseek.html).
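As with vLLM, you can test the SGLang server through its OpenAI-compatible API. The sketch below assumes the default port (30000), since the launch command above does not set `--port`, and passes the model path as the model name:

```sh
# Query SGLang's OpenAI-compatible endpoint.
# Assumptions: default port 30000 (no --port was passed) and the model path as the model name.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "cognitivecomputations/DeepSeek-R1-AWQ",
    "messages": [{"role": "user", "content": "Explain AWQ quantization in one sentence."}],
    "max_tokens": 1024
  }'
```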