---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
library_name: transformers
license: apache-2.0
tags:
- deepseek
- qwen
- qwen2
- transformers
- GGUF
---

# DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant

<div align="center">
  <img src="banner.png" width="80%" alt="NexaQuant" />
</div>

## Background + Overview 

DeepSeek-R1 has been making headlines for rivaling OpenAI's o1 reasoning model while remaining fully open-source. Many users want to run it locally to ensure data privacy, reduce latency, and maintain offline access. However, fitting such a large model onto personal devices typically requires quantization (e.g., Q4_K_M), which often sacrifices accuracy (up to ~22% accuracy loss) and undermines the benefits of running a reasoning model locally.

We've solved this trade-off by quantizing the DeepSeek-R1 distilled model to one-quarter of its original size without losing accuracy. This lets you run powerful on-device reasoning wherever you are, with no compromises. Tests on an **HP OmniBook AI PC** with an **AMD Ryzen™ AI 9 HX 370 processor** showed a decoding speed of **66.40 tokens per second** and peak RAM usage of just **1228 MB** for the NexaQuant version, compared to only **25.28 tokens per second** and **3788 MB RAM** for the unquantized version, while **maintaining full-precision model accuracy.**
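The speedup and memory savings follow directly from the measurements quoted above (a quick sketch; the numbers are copied verbatim from this card):

```python
# Measurements from the HP OmniBook / Ryzen AI 9 HX 370 tests described above.
nexaquant = {"tokens_per_s": 66.40, "peak_ram_mb": 1228}
unquantized = {"tokens_per_s": 25.28, "peak_ram_mb": 3788}

speedup = nexaquant["tokens_per_s"] / unquantized["tokens_per_s"]
ram_ratio = unquantized["peak_ram_mb"] / nexaquant["peak_ram_mb"]

print(f"decode speedup: {speedup:.2f}x")    # ~2.63x faster decoding
print(f"peak RAM ratio: {ram_ratio:.2f}x")  # ~3.08x less peak RAM
```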

## How to run locally

NexaQuant is compatible with **Nexa-SDK**, **Ollama**, **LM Studio**, **Llama.cpp**, and any llama.cpp based project. Below, we outline multiple ways to run the model locally.

#### Option 1: Using Nexa SDK

**Step 1: Install Nexa SDK**

Follow the installation instructions in Nexa SDK's [GitHub repository](https://github.com/NexaAI/nexa-sdk).

**Step 2: Run the model with Nexa**

Execute the following command in your terminal:
```bash
nexa run DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant:q4_0
```

#### Option 2: Using llama.cpp

**Step 1: Build llama.cpp on Your Device**

Follow the "Building the project" instructions in the llama.cpp [repository](https://github.com/ggerganov/llama.cpp) to build the project.

**Step 2: Run the Model with llama.cpp**

Once built, run `llama-cli` under `<build_dir>/bin/`:
```bash
./llama-cli \
    --model your/local/path/to/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant \
    --prompt 'Provide step-by-step reasoning enclosed in <think> </think> tags, followed by the final answer enclosed in \boxed{} tags.'
```

#### Option 3: Using LM Studio

**Step 1: Download and Install LM Studio**

Get the latest version from the [official website](https://lmstudio.ai/).

**Step 2: Load and Run the Model**

1. In LM Studio's top panel, search for and select `NexaAIDev/DeepSeek-R1-Distill-Qwen-1.5B-NexaQuant`.  
2. Click `Download` (if not already downloaded) and wait for the model to load.  
3. Once loaded, go to the chat window and start a conversation.

---

## Example

On the left is an example response from the standard LM Studio Q4_K_M build; on the right is the response from our NexaQuant version.

Prompt: a common investment-banking brainteaser:

There is a 6x8 rectangular chocolate bar made up of small 1x1 bits. We want to break it into its 48 individual bits. We can break one piece of chocolate horizontally or vertically, but cannot stack two pieces and break them together. What is the minimum number of breaks required?

Right Answer: 47
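The answer follows from a simple invariant: every break turns one piece into two, so the piece count grows by exactly one per break. Going from 1 piece to 48 pieces therefore takes 47 breaks, regardless of strategy. A minimal sketch:

```python
def min_breaks(rows: int, cols: int) -> int:
    """Each break increases the piece count by exactly one,
    so reaching rows*cols pieces from one bar takes rows*cols - 1 breaks."""
    return rows * cols - 1

print(min_breaks(6, 8))  # 47
```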

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66abfd6f65beb23afa427d8a/ZS9e66t7OhBIno4eQ3OaX.png" width="80%" alt="Example" />
</div>

## Benchmarks

NexaQuant on reasoning benchmarks, compared to BF16 and LM Studio's Q4_K_M:

**1.5B:**

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66abfd6f65beb23afa427d8a/Cyh1zVvDHNBT598IkLHkd.png" width="80%" alt="Example" />
</div>

General capabilities have also greatly improved over the standard 4-bit quantization:

**1.5B:**

| Benchmark                  | Full 16-bit | llama.cpp (4-bit) | NexaQuant (4-bit)|
|----------------------------|------------|-------------------|-------------------|
| **HellaSwag**              | 35.81      | 34.31             | 34.60             |
| **MMLU**                   | 37.31      | 35.49             | 37.41             |
| **Humanities**             | 31.86      | 34.87             | 30.97             |
| **Social Sciences**        | 41.50      | 38.17             | 42.09             |
| **STEM**                   | 38.60      | 35.74             | 39.26             |
| **ARC Easy**               | 67.55      | 54.20             | 65.53             |
| **MathQA**                 | 41.04      | 28.51             | 39.87             |
| **PIQA**                   | 65.56      | 61.70             | 65.07             |
| **IFEval - Instruction - Loose**  | 25.06      | 24.77             | 28.54             |
| **IFEval - Instruction - Strict** | 23.62      | 22.94             | 27.94             |
| **IFEval - Prompt - Loose**       | 13.86      | 10.29             | 15.71             |
| **IFEval - Prompt - Strict**      | 12.57      | 8.09              | 15.16             |
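A quick tally over the table (a sketch; the scores are copied verbatim from above) makes the pattern explicit: NexaQuant beats the standard llama.cpp 4-bit quant on 11 of 12 benchmarks, and even edges out the full 16-bit model on 7 of 12:

```python
# (benchmark, full_16bit, llamacpp_q4, nexaquant_q4) -- copied from the table above
scores = [
    ("HellaSwag",                    35.81, 34.31, 34.60),
    ("MMLU",                         37.31, 35.49, 37.41),
    ("Humanities",                   31.86, 34.87, 30.97),
    ("Social Sciences",              41.50, 38.17, 42.09),
    ("STEM",                         38.60, 35.74, 39.26),
    ("ARC Easy",                     67.55, 54.20, 65.53),
    ("MathQA",                       41.04, 28.51, 39.87),
    ("PIQA",                         65.56, 61.70, 65.07),
    ("IFEval - Instruction - Loose", 25.06, 24.77, 28.54),
    ("IFEval - Instruction - Strict",23.62, 22.94, 27.94),
    ("IFEval - Prompt - Loose",      13.86, 10.29, 15.71),
    ("IFEval - Prompt - Strict",     12.57,  8.09, 15.16),
]

beats_q4 = sum(nexa > q4 for _, _, q4, nexa in scores)
beats_bf16 = sum(nexa > bf16 for _, bf16, _, nexa in scores)
print(f"NexaQuant > llama.cpp 4-bit on {beats_q4}/12 benchmarks")  # 11/12
print(f"NexaQuant > full 16-bit on {beats_bf16}/12 benchmarks")    # 7/12
```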


## What's next

1. Run inference with the NexaQuant DeepSeek-R1 distilled model on NPUs.

2. This model is designed for complex problem-solving, which is why it has a longer thinking process. We understand this can be an issue in some cases, and we're actively working on improvements.

### Follow us

If you liked our work, feel free to ⭐Star [Nexa's GitHub Repo](https://github.com/NexaAI/nexa-sdk).

Interested in running DeepSeek R1 on your own devices with optimized CPU, GPU, and NPU acceleration or compressing your finetuned DeepSeek-Distill-R1? [Let’s chat!](https://nexa.ai/book-a-call)

[Blogs](https://nexa.ai/blogs/quantized-deepseek-r1-on-device) | [Discord](https://discord.gg/nexa-ai) | [X(Twitter)](https://x.com/nexa_ai)