soumyadarshandash, hellork committed on
Commit 098852a
0 Parent(s):

Duplicate from hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF


Co-authored-by: Henry Kroll III <hellork@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,37 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf filter=lfs diff=lfs merge=lfs -text
+ imatrix.dat filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ base_model: unsloth/DeepSeek-R1-Distill-Qwen-7B
+ language:
+ - en
+ license: apache-2.0
+ library_name: transformers
+ tags:
+ - deepseek
+ - qwen
+ - qwen2
+ - unsloth
+ - transformers
+ - llama-cpp
+ - gguf-my-repo
+ ---
+
+ # TESTING...TESTING! The quantization used on this model may reduce quality, but it should be faster, and may be usable with 4GB of VRAM. TESTING...
+
+ So far so good! We were able to offload all 29 layers with `-ngl 29`, and the model reserves less than 3.5GiB of VRAM with a `-c 2048` context window. Quite usable.
+ Use `llama-server` and navigate to the web interface at http://127.0.0.1:8080 for best results. Happy AI.
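+
+ For example, the reported settings can be combined into a single command (a sketch built from the flags above; lower `-ngl` if it overflows your VRAM):
+
+ ```bash
+ llama-server --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF \
+   --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf \
+   -ngl 29 -c 2048
+ ```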
+
+ # hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF
+ This model was converted to GGUF format from [`unsloth/DeepSeek-R1-Distill-Qwen-7B`](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+ Refer to the [original model card](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-7B) for more details on the model.
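+
+ If you would rather download the GGUF file yourself instead of letting llama.cpp fetch it, the Hugging Face CLI can do so (a minimal sketch; the `./models` target directory is an assumption, change it as you like):
+
+ ```bash
+ # Download the quantized model file from this repo into ./models.
+ huggingface-cli download hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF \
+   deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf --local-dir ./models
+ ```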
+
+ # Usage Recommendations
+
+ We recommend adhering to the following configurations when using the DeepSeek-R1 series models, including for benchmarking, to achieve the expected performance (see the example command after this list):
+
+ - Set the temperature within the range of 0.5-0.7 (0.6 is recommended) to prevent endless repetitions or incoherent outputs.
+ - Avoid adding a system prompt; all instructions should be contained within the user prompt.
+ - For mathematical problems, it is advisable to include a directive in your prompt such as: "Please reason step by step, and put your final answer within \boxed{}."
+ - When evaluating model performance, it is recommended to conduct multiple tests and average the results.
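+
+ A sketch of those recommendations applied to `llama-cli` (the math prompt is illustrative, not from the original card):
+
+ ```bash
+ # Temperature 0.6, no system prompt; the directive goes in the user prompt.
+ llama-cli --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF \
+   --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf \
+   --temp 0.6 \
+   -p "What is 12 * 34? Please reason step by step, and put your final answer within \boxed{}."
+ ```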
+
+ ## Use with llama.cpp
+ Install llama.cpp through brew (works on Mac and Linux):
+
+ ```bash
+ brew install llama.cpp
+ ```
+
+ # Or compile it to take advantage of NVIDIA CUDA hardware:
+
+ ```bash
+ git clone https://github.com/ggerganov/llama.cpp.git
+ cd llama.cpp
+ # Check the docs for other hardware builds, or to make sure none of this has changed.
+
+ cmake -B build -DGGML_CUDA=ON
+ cmake --build build --config Release # optionally add -j6 (use a number below your core count)
+
+ # If your version of gcc is > 12 and it gives errors, use conda to install gcc-12 and activate it.
+ # Then run the above cmake commands again.
+ # Finally, run conda deactivate and re-run the build line once more to link the build outside of conda.
+
+ # Add the -ngl 33 flag to the commands below to offload all the GPU layers.
+ # If that uses too much VRAM and crashes, use a lower number.
+ ```
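+
+ With the CMake build, the binaries land in `build/bin` (a sketch, assuming the default CMake layout; either add that directory to your `PATH` or call the binaries by path):
+
+ ```bash
+ # Run the locally built server with full GPU offload.
+ ./build/bin/llama-server --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF \
+   --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf -ngl 33 -c 2048
+ ```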
+
+ Invoke the llama.cpp server or the CLI.
+
+ ### CLI:
+ ```bash
+ llama-cli --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
+ ```
+
+ ### Server:
+ ```bash
+ llama-server --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf -c 2048
+ ```
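+
+ Once the server is running, you can also query it over HTTP instead of the web UI; llama-server exposes an OpenAI-compatible chat endpoint (a sketch; 8080 is the default port mentioned above):
+
+ ```bash
+ curl http://127.0.0.1:8080/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -d '{"messages": [{"role": "user", "content": "Hello!"}], "temperature": 0.6}'
+ ```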
+
+ Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
+
+ Step 1: Clone llama.cpp from GitHub.
+ ```
+ git clone https://github.com/ggerganov/llama.cpp
+ ```
+
+ Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any other hardware-specific flags (e.g. LLAMA_CUDA=1 for NVIDIA GPUs on Linux).
+ ```
+ cd llama.cpp && LLAMA_CURL=1 make
+ ```
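+
+ If the Makefile build is not available in your checkout, the CMake route should be equivalent (a sketch, assuming current llama.cpp build options):
+ ```
+ cmake -B build -DLLAMA_CURL=ON
+ cmake --build build --config Release
+ ```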
+
+ Step 3: Run inference through the main binary.
+ ```
+ ./llama-cli --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf -p "The meaning to life and the universe is"
+ ```
+ or
+ ```
+ ./llama-server --hf-repo hellork/DeepSeek-R1-Distill-Qwen-7B-IQ3_XXS-GGUF --hf-file deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf -c 2048
+ ```
deepseek-r1-distill-qwen-7b-iq3_xxs-imat.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0bf6d19a87d9605bfaa02592b5e7fc28267ef2efd5467cd36c44838632b245d
+ size 3114514880
imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cdab1bc463af99cdd55967d1f1704d7cab6bc00586c9ed2950412afe1e694fa9
+ size 4536669