---
license: mit
library_name: transformers
base_model:
  - deepseek-ai/DeepSeek-R1-Zero
---

# DeepSeek-R1-Zero-AWQ 671B

This is a 4-bit AWQ quantization of the DeepSeek-R1-Zero 671B model. It is suitable for GPU nodes such as 8xA100, 8xH20, or 8xH100, served with vLLM or SGLang.

You can run this model on 8x H100 80GB using vLLM with:

```bash
vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel-size 8
```
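
SGLang can serve the same checkpoint. A minimal launch sketch, not verified on this exact model; the flags are the standard `sglang.launch_server` options, and the AWQ quantization should be picked up from the checkpoint config:

```bash
# Assumed SGLang launch; adjust --tp to match your GPU count
python3 -m sglang.launch_server \
  --model-path adamo1139/DeepSeek-R1-Zero-AWQ \
  --tp 8 \
  --trust-remote-code
```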
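
Once the server is up, both vLLM and SGLang expose an OpenAI-compatible API. A quick smoke test against vLLM's default port 8000 (SGLang defaults to port 30000, so adjust the URL accordingly):

```bash
# Example chat completion request; prompt and max_tokens are illustrative
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "adamo1139/DeepSeek-R1-Zero-AWQ",
    "messages": [{"role": "user", "content": "Solve 17 * 24 step by step."}],
    "max_tokens": 512
  }'
```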

Made by DeepSeek with ❤️
