theprint committed (verified)
Commit 3450eea · Parent: 6d4f934

Update README.md

Files changed (1): README.md (+14 -10)
README.md CHANGED
@@ -29,6 +29,20 @@ This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct using the Unsloth
 - **Base model:** Qwen/Qwen2.5-7B-Instruct
 - **Fine-tuning method:** LoRA with rank 128
 
+# GGUF Quantized Versions
+
+You can find quantized gguf versions of this model here: [theprint/Tom-Qwen-7B-Instruct/tree/main/gguf](https://huggingface.co/theprint/Tom-Qwen-7B-Instruct/tree/main/gguf)
+
+Quantized GGUF versions are in the `gguf/` directory for use with llama.cpp:
+
+- `Tom-Qwen-7B-Instruct-f16.gguf` (14531.9 MB) - 16-bit float (original precision, largest file)
+- `Tom-Qwen-7B-Instruct-q3_k_m.gguf` (3632.0 MB) - 3-bit quantization (medium quality)
+- `Tom-Qwen-7B-Instruct-q4_k_m.gguf` (4466.1 MB) - 4-bit quantization (medium, recommended for most use cases)
+- `Tom-Qwen-7B-Instruct-q5_k_m.gguf` (5192.6 MB) - 5-bit quantization (medium, good quality)
+- `Tom-Qwen-7B-Instruct-q6_k.gguf` (5964.5 MB) - 6-bit quantization (high quality)
+- `Tom-Qwen-7B-Instruct-q8_0.gguf` (7723.4 MB) - 8-bit quantization (very high quality)
+
+
 ## Intended Use
 
 Conversation, brainstorming, and general instruction following
@@ -99,16 +113,6 @@ outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=
 response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
 print(response)
 ```
-## GGUF Quantized Versions
-
-Quantized GGUF versions are available in the `gguf/` directory for use with llama.cpp:
-
-- `Tom-Qwen-7B-Instruct-f16.gguf` (14531.9 MB) - 16-bit float (original precision, largest file)
-- `Tom-Qwen-7B-Instruct-q3_k_m.gguf` (3632.0 MB) - 3-bit quantization (medium quality)
-- `Tom-Qwen-7B-Instruct-q4_k_m.gguf` (4466.1 MB) - 4-bit quantization (medium, recommended for most use cases)
-- `Tom-Qwen-7B-Instruct-q5_k_m.gguf` (5192.6 MB) - 5-bit quantization (medium, good quality)
-- `Tom-Qwen-7B-Instruct-q6_k.gguf` (5964.5 MB) - 6-bit quantization (high quality)
-- `Tom-Qwen-7B-Instruct-q8_0.gguf` (7723.4 MB) - 8-bit quantization (very high quality)
 
 ### Using with llama.cpp
 
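For context, the `generate` and `decode` lines visible in the second hunk are the tail of the README's Python usage snippet; the rest of that snippet is outside the diff context. A minimal sketch of the surrounding setup, assuming the standard transformers chat-template API (the model ID comes from the repo link above; the example prompt is hypothetical):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "theprint/Tom-Qwen-7B-Instruct", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("theprint/Tom-Qwen-7B-Instruct")

# Build a prompt with the chat template; `inputs` is a token-id tensor,
# which matches the `inputs.shape[-1]` slicing in the diff's context lines.
messages = [{"role": "user", "content": "Give me three brainstorming prompts."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# These lines correspond to the context lines shown in the diff.
outputs = model.generate(inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```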
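The GGUF list added by this commit names the files but not a load path. A minimal sketch of fetching and running the recommended q4_k_m file, assuming the `huggingface_hub` and `llama-cpp-python` packages (neither is specified by the commit, and `n_ctx=4096` is an arbitrary choice):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit file the README recommends for most use cases.
model_path = hf_hub_download(
    repo_id="theprint/Tom-Qwen-7B-Instruct",
    filename="gguf/Tom-Qwen-7B-Instruct-q4_k_m.gguf",
)

# Load the quantized model via the llama.cpp bindings.
llm = Llama(model_path=model_path, n_ctx=4096)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Help me brainstorm weekend projects."}],
    max_tokens=256,
    temperature=0.7,
)
print(result["choices"][0]["message"]["content"])
```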