Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +134 -122
README.md CHANGED
@@ -1,122 +1,134 @@
- language:
- - en
+ ---
+ license: other
+ license_name: qwen-research
+ license_link: https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-3B-Instruct
+ tags:
+ - chat
+ - TensorBlock
+ - GGUF
+ library_name: transformers
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+ ## Qwen/Qwen2.5-3B-Instruct - GGUF
+
+ This repo contains GGUF format model files for [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit ec7f3ac](https://github.com/ggerganov/llama.cpp/commit/ec7f3ac9ab33e46b136eb5ab6a76c4d81f57c7f1).
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
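For scripted use, the template above can be rendered with a small helper. This is a minimal sketch; the function name is illustrative, not part of any library:

```python
def build_chatml_prompt(system_prompt: str, prompt: str) -> str:
    """Render Qwen2.5's ChatML-style template, leaving the assistant turn open."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

# Example: the model is expected to continue generating after the final tag.
text = build_chatml_prompt("You are a helpful assistant.", "Hello!")
```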
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Qwen2.5-3B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q2_K.gguf) | Q2_K | 1.275 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2.5-3B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_S.gguf) | Q3_K_S | 1.454 GB | very small, high quality loss |
+ | [Qwen2.5-3B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_M.gguf) | Q3_K_M | 1.590 GB | very small, high quality loss |
+ | [Qwen2.5-3B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q3_K_L.gguf) | Q3_K_L | 1.707 GB | small, substantial quality loss |
+ | [Qwen2.5-3B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q4_0.gguf) | Q4_0 | 1.823 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Qwen2.5-3B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q4_K_S.gguf) | Q4_K_S | 1.834 GB | small, greater quality loss |
+ | [Qwen2.5-3B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q4_K_M.gguf) | Q4_K_M | 1.930 GB | medium, balanced quality - recommended |
+ | [Qwen2.5-3B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q5_0.gguf) | Q5_0 | 2.170 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2.5-3B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q5_K_S.gguf) | Q5_K_S | 2.170 GB | large, low quality loss - recommended |
+ | [Qwen2.5-3B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q5_K_M.gguf) | Q5_K_M | 2.225 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-3B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q6_K.gguf) | Q6_K | 2.538 GB | very large, extremely low quality loss |
+ | [Qwen2.5-3B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-3B-Instruct-GGUF/blob/main/Qwen2.5-3B-Instruct-Q8_0.gguf) | Q8_0 | 3.285 GB | very large, extremely low quality loss - not recommended |
+
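As a rough way to choose a file from the table, the sketch below picks the largest (highest-quality) quant that fits a given storage budget. The sizes are transcribed from the table above; the helper itself is illustrative and not part of any tooling, and note that the GGUF file size understates total RAM use at inference time (KV cache and runtime overhead come on top):

```python
# File sizes in GB, copied from the model file specification table.
QUANT_SIZES_GB = {
    "Q2_K": 1.275, "Q3_K_S": 1.454, "Q3_K_M": 1.590, "Q3_K_L": 1.707,
    "Q4_0": 1.823, "Q4_K_S": 1.834, "Q4_K_M": 1.930, "Q5_0": 2.170,
    "Q5_K_S": 2.170, "Q5_K_M": 2.225, "Q6_K": 2.538, "Q8_0": 3.285,
}

def largest_quant_under(budget_gb: float) -> str:
    """Return the largest quant whose file fits within budget_gb."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError("no quant fits the given budget")
    return max(fitting, key=fitting.get)
```

For example, with a 2 GB budget this selects Q4_K_M, which matches the table's "recommended" entry.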
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Qwen2.5-3B-Instruct-GGUF --include "Qwen2.5-3B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ To download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Qwen2.5-3B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
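The `--include` patterns behave like shell-style globs, so you can preview which filenames a pattern would select with Python's `fnmatch` before downloading. The file list below is a hypothetical subset of the repo's contents, shown for illustration:

```python
from fnmatch import fnmatch

# Hypothetical subset of filenames from the repo, for illustration only.
files = [
    "Qwen2.5-3B-Instruct-Q4_0.gguf",
    "Qwen2.5-3B-Instruct-Q4_K_S.gguf",
    "Qwen2.5-3B-Instruct-Q4_K_M.gguf",
    "Qwen2.5-3B-Instruct-Q8_0.gguf",
]

# "*Q4_K*gguf" matches the K-quant Q4 files but not Q4_0 or Q8_0.
matched = [f for f in files if fnmatch(f, "*Q4_K*gguf")]
```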