---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-Zero
---

# DeepSeek-R1-Zero-AWQ 671B

This is a 4-bit AWQ quantization of the DeepSeek-R1-Zero 671B model. It is suitable for multi-GPU nodes such as 8xA100, 8xH20, or 8xH100, served with vLLM or SGLang.

You can run this model on 8x H100 80GB using vLLM with:

`vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel-size 8`
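
Once the server is up, you can query its OpenAI-compatible API. A minimal sketch, assuming the server runs on vLLM's default port 8000 (the prompt and sampling parameters below are illustrative):

```shell
# Query the vLLM OpenAI-compatible completions endpoint.
# Assumes the `vllm serve` command above is running locally on port 8000.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "adamo1139/DeepSeek-R1-Zero-AWQ",
        "prompt": "The capital of France is",
        "max_tokens": 64,
        "temperature": 0.6
      }'
```

The `model` field must match the name the server was launched with; vLLM also exposes `/v1/chat/completions` for chat-formatted requests.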

Made by DeepSeek with ❤️

<p align="center" style="image-rendering: pixelated;">
  <img width="800" src="https://user-images.githubusercontent.com/55270174/214356078-89430299-247d-4f1f-82f6-a41340df0949.gif" alt="example" />
</p>