---
license: mit
library_name: transformers
base_model:
- deepseek-ai/DeepSeek-R1-Zero
---

# DeepSeek-R1-Zero-AWQ 671B

This is a 4-bit AWQ quantization of the DeepSeek-R1-Zero 671B model. It is suitable for GPU nodes such as 8xA100, 8xH20, or 8xH100, and can be served with vLLM or SGLang.

You can serve this model on 8x H100 80GB with vLLM:

`vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel-size 8`
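Once the server is running, it exposes an OpenAI-compatible API (on port 8000 by default). A minimal query sketch, assuming the default host and port:

```shell
# Query the OpenAI-compatible chat endpoint started by `vllm serve`
# (assumes the server is running locally on the default port 8000).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "adamo1139/DeepSeek-R1-Zero-AWQ",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 64
      }'
```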

Made by DeepSeek with ❤️

<p align="center" style="image-rendering: pixelated;">
  <img width="800" src="https://user-images.githubusercontent.com/55270174/214356078-89430299-247d-4f1f-82f6-a41340df0949.gif" alt="example" />
</p>