Update README.md
README.md
CHANGED
```diff
@@ -11,4 +11,10 @@ It's a 4-bit AWQ quantization of DeepSeek-R1-Zero 671B model, it's suitable for
 
 You can run this model on 8x H100 80GB using vLLM with
 
-`vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel 8`
+`vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel 8`
+
+Made by DeepSeek with ❤️
+
+<p align="center" style="image-rendering: pixelated;">
+<img width="800" src="https://user-images.githubusercontent.com/55270174/214356078-89430299-247d-4f1f-82f6-a41340df0949.gif" alt="example" />
+</p>
```
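Once the `vllm serve` command from the README is running, the model can be queried over vLLM's OpenAI-compatible HTTP API. Below is a minimal sketch of a Chat Completions request body; the prompt and `max_tokens` value are illustrative, and the default endpoint (`http://localhost:8000/v1/chat/completions`) is an assumption based on vLLM's standard defaults.

```python
import json

# Request body in the OpenAI Chat Completions format, which vLLM's
# server accepts. The model name matches the repo used in `vllm serve`.
payload = {
    "model": "adamo1139/DeepSeek-R1-Zero-AWQ",
    "messages": [{"role": "user", "content": "What is 2 + 2?"}],  # illustrative prompt
    "max_tokens": 256,  # illustrative limit
}

body = json.dumps(payload)
print(body)
```

POST `body` to `http://localhost:8000/v1/chat/completions` with the header `Content-Type: application/json`; the server responds with a standard Chat Completions JSON object.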