This repository hosts the optimized versions of [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B/) and [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/) to accelerate inference with ONNX Runtime.
Optimized models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.

To easily get started with the model, you can use our newly introduced [ONNX Runtime Generate() API](https://github.com/microsoft/onnxruntime-genai).

```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download onnxruntime/DeepSeek-R1-Distill-ONNX --include 'deepseek-r1-distill-qwen-1.5B/*' --local-dir .
```
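If you prefer to script the download, a minimal sketch with the `huggingface_hub` Python package fetches the same files as the CLI command above (the repo id and file pattern are copied from that command):

```python
# Sketch: scripted alternative to the CLI download above; pulls the same
# deepseek-r1-distill-qwen-1.5B files from onnxruntime/DeepSeek-R1-Distill-ONNX.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="onnxruntime/DeepSeek-R1-Distill-ONNX",
    allow_patterns=["deepseek-r1-distill-qwen-1.5B/*"],  # only the 1.5B variant
    local_dir=".",
)
```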
```bash
# CPU chat inference. If you pulled the model from Hugging Face, adjust the model directory (-m) accordingly
curl -o model-chat.py https://raw.githubusercontent.com/microsoft/onnxruntime-genai/refs/heads/main/examples/python/model-chat.py
python model-chat.py -m deepseek-r1-distill-qwen-1.5B/model -e cpu --chat_template "<|begin▁of▁sentence|><|User|>{input}<|Assistant|>"
```
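The `model-chat.py` example drives the Generate() API for you. If you would rather call the API directly, the sketch below shows the core loop, assuming the `onnxruntime-genai` package is installed (`pip install onnxruntime-genai`) and the 1.5B model directory from the download step; exact method names can vary between onnxruntime-genai releases:

```python
# Minimal sketch of the Generate() API loop; assumes onnxruntime-genai is
# installed and the 1.5B model was downloaded as shown above.
import onnxruntime_genai as og

model = og.Model("deepseek-r1-distill-qwen-1.5B/model")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Same chat template that model-chat.py receives via --chat_template
prompt = "<|begin▁of▁sentence|><|User|>Why is the sky blue?<|Assistant|>"

params = og.GeneratorParams(model)
params.set_search_options(max_length=2048)

generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode(prompt))

# Decode and print each token as it is generated
while not generator.is_done():
    generator.generate_next_token()
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
print()
```

Decoding through the token stream (`tokenizer.create_stream`) prints output as it is produced rather than waiting for the full completion, which is how the chat example behaves as well.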
## ONNX Models
Here are some of the optimized configurations we have added: