Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +133 -121
README.md CHANGED
@@ -1,121 +1,133 @@
- ---
- license: apache-2.0
- license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-32B-Instruct
- tags:
- - chat
- - TensorBlock
- - GGUF
- library_name: transformers
- ---
-
- <div style="width: auto; margin-left: auto; margin-right: auto">
- <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
- </div>
- <div style="display: flex; justify-content: space-between; width: 100%;">
- <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p style="margin-top: 0.5em; margin-bottom: 0em;">
- Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
- </p>
- </div>
- </div>
-
- ## Qwen/Qwen2.5-32B-Instruct - GGUF
-
- This repo contains GGUF format model files for [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
-
- The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit ec7f3ac](https://github.com/ggerganov/llama.cpp/commit/ec7f3ac9ab33e46b136eb5ab6a76c4d81f57c7f1).
-
- ## Our projects
- <table border="1" cellspacing="0" cellpadding="10">
- <tr>
- <th style="font-size: 25px;">Awesome MCP Servers</th>
- <th style="font-size: 25px;">TensorBlock Studio</th>
- </tr>
- <tr>
- <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
- <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
- </tr>
- <tr>
- <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
- <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
- </tr>
- <tr>
- <th>
- <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
- display: inline-block;
- padding: 8px 16px;
- background-color: #FF7F50;
- color: white;
- text-decoration: none;
- border-radius: 6px;
- font-weight: bold;
- font-family: sans-serif;
- ">👀 See what we built 👀</a>
- </th>
- <th>
- <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
- display: inline-block;
- padding: 8px 16px;
- background-color: #FF7F50;
- color: white;
- text-decoration: none;
- border-radius: 6px;
- font-weight: bold;
- font-family: sans-serif;
- ">👀 See what we built 👀</a>
- </th>
- </tr>
- </table>
- ## Prompt template
-
- ```
- <|im_start|>system
- {system_prompt}<|im_end|>
- <|im_start|>user
- {prompt}<|im_end|>
- <|im_start|>assistant
- ```
-
- ## Model file specification
-
- | Filename | Quant type | File Size | Description |
- | -------- | ---------- | --------- | ----------- |
- | [Qwen2.5-32B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
- | [Qwen2.5-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
- | [Qwen2.5-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
- | [Qwen2.5-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
- | [Qwen2.5-32B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [Qwen2.5-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
- | [Qwen2.5-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
- | [Qwen2.5-32B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [Qwen2.5-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
- | [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
- | [Qwen2.5-32B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
- | [Qwen2.5-32B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
-
-
- ## Downloading instruction
-
- ### Command line
-
- Firstly, install Huggingface Client
-
- ```shell
- pip install -U "huggingface_hub[cli]"
- ```
-
- Then, downoad the individual model file the a local directory
-
- ```shell
- huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --include "Qwen2.5-32B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
- ```
-
- If you wanna download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
-
- ```shell
- huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
- ```
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-32B-Instruct
+ tags:
+ - chat
+ - TensorBlock
+ - GGUF
+ library_name: transformers
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## Qwen/Qwen2.5-32B-Instruct - GGUF
+
+ This repo contains GGUF format model files for [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit ec7f3ac](https://github.com/ggerganov/llama.cpp/commit/ec7f3ac9ab33e46b136eb5ab6a76c4d81f57c7f1).
+
+ ## Our projects
+ <table border="1" cellspacing="0" cellpadding="10">
+ <tr>
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
+ <th style="font-size: 25px;">TensorBlock Studio</th>
+ </tr>
+ <tr>
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
+ </tr>
+ <tr>
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
+ </tr>
+ <tr>
+ <th>
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ <th>
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
+ display: inline-block;
+ padding: 8px 16px;
+ background-color: #FF7F50;
+ color: white;
+ text-decoration: none;
+ border-radius: 6px;
+ font-weight: bold;
+ font-family: sans-serif;
+ ">👀 See what we built 👀</a>
+ </th>
+ </tr>
+ </table>
+ ## Prompt template
+
+ ```
+ <|im_start|>system
+ {system_prompt}<|im_end|>
+ <|im_start|>user
+ {prompt}<|im_end|>
+ <|im_start|>assistant
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Qwen2.5-32B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q2_K.gguf) | Q2_K | 12.313 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Qwen2.5-32B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_S.gguf) | Q3_K_S | 14.392 GB | very small, high quality loss |
+ | [Qwen2.5-32B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_M.gguf) | Q3_K_M | 15.935 GB | very small, high quality loss |
+ | [Qwen2.5-32B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q3_K_L.gguf) | Q3_K_L | 17.247 GB | small, substantial quality loss |
+ | [Qwen2.5-32B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_0.gguf) | Q4_0 | 18.640 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Qwen2.5-32B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_S.gguf) | Q4_K_S | 18.784 GB | small, greater quality loss |
+ | [Qwen2.5-32B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q4_K_M.gguf) | Q4_K_M | 19.851 GB | medium, balanced quality - recommended |
+ | [Qwen2.5-32B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_0.gguf) | Q5_0 | 22.638 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [Qwen2.5-32B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_S.gguf) | Q5_K_S | 22.638 GB | large, low quality loss - recommended |
+ | [Qwen2.5-32B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q5_K_M.gguf) | Q5_K_M | 23.262 GB | large, very low quality loss - recommended |
+ | [Qwen2.5-32B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q6_K.gguf) | Q6_K | 26.886 GB | very large, extremely low quality loss |
+ | [Qwen2.5-32B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-32B-Instruct-GGUF/blob/main/Qwen2.5-32B-Instruct-Q8_0.gguf) | Q8_0 | 34.821 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --include "Qwen2.5-32B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/Qwen2.5-32B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```