Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +135 -123
README.md CHANGED
@@ -1,123 +1,135 @@
1
- ---
2
- license: apache-2.0
3
- license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
4
- language:
5
- - en
6
- pipeline_tag: text-generation
7
- base_model: Qwen/Qwen2.5-7B-Instruct
8
- tags:
9
- - chat
10
- - TensorBlock
11
- - GGUF
12
- library_name: transformers
13
- ---
14
-
15
- <div style="width: auto; margin-left: auto; margin-right: auto">
16
- <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
17
- </div>
18
- <div style="display: flex; justify-content: space-between; width: 100%;">
19
- <div style="display: flex; flex-direction: column; align-items: flex-start;">
20
- <p style="margin-top: 0.5em; margin-bottom: 0em;">
21
- Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
22
- </p>
23
- </div>
24
- </div>
25
-
26
- ## Qwen/Qwen2.5-7B-Instruct - GGUF
27
-
28
- This repo contains GGUF format model files for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
29
-
30
- The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
31
-
32
-
33
- ## Our projects
34
- <table border="1" cellspacing="0" cellpadding="10">
35
- <tr>
36
- <th style="font-size: 25px;">Awesome MCP Servers</th>
37
- <th style="font-size: 25px;">TensorBlock Studio</th>
38
- </tr>
39
- <tr>
40
- <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
41
- <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
42
- </tr>
43
- <tr>
44
- <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
45
- <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
46
- </tr>
47
- <tr>
48
- <th>
49
- <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
50
- display: inline-block;
51
- padding: 8px 16px;
52
- background-color: #FF7F50;
53
- color: white;
54
- text-decoration: none;
55
- border-radius: 6px;
56
- font-weight: bold;
57
- font-family: sans-serif;
58
- ">👀 See what we built 👀</a>
59
- </th>
60
- <th>
61
- <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
62
- display: inline-block;
63
- padding: 8px 16px;
64
- background-color: #FF7F50;
65
- color: white;
66
- text-decoration: none;
67
- border-radius: 6px;
68
- font-weight: bold;
69
- font-family: sans-serif;
70
- ">👀 See what we built 👀</a>
71
- </th>
72
- </tr>
73
- </table>
74
- ## Prompt template
75
-
76
-
77
- ```
78
- <|im_start|>system
79
- {system_prompt}<|im_end|>
80
- <|im_start|>user
81
- {prompt}<|im_end|>
82
- <|im_start|>assistant
83
- ```
84
-
85
- ## Model file specification
86
-
87
- | Filename | Quant type | File Size | Description |
88
- | -------- | ---------- | --------- | ----------- |
89
- | [Qwen2.5-7B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q2_K.gguf) | Q2_K | 2.809 GB | smallest, significant quality loss - not recommended for most purposes |
90
- | [Qwen2.5-7B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.253 GB | very small, high quality loss |
91
- | [Qwen2.5-7B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3.547 GB | very small, high quality loss |
92
- | [Qwen2.5-7B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3.808 GB | small, substantial quality loss |
93
- | [Qwen2.5-7B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_0.gguf) | Q4_0 | 4.127 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
94
- | [Qwen2.5-7B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.152 GB | small, greater quality loss |
95
- | [Qwen2.5-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.361 GB | medium, balanced quality - recommended |
96
- | [Qwen2.5-7B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_0.gguf) | Q5_0 | 4.950 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
97
- | [Qwen2.5-7B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_K_S.gguf) | Q5_K_S | 4.950 GB | large, low quality loss - recommended |
98
- | [Qwen2.5-7B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.071 GB | large, very low quality loss - recommended |
99
- | [Qwen2.5-7B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q6_K.gguf) | Q6_K | 5.825 GB | very large, extremely low quality loss |
100
- | [Qwen2.5-7B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q8_0.gguf) | Q8_0 | 7.542 GB | very large, extremely low quality loss - not recommended |
101
-
102
-
103
- ## Downloading instructions
104
-
105
- ### Command line
106
-
107
- First, install the Hugging Face CLI:
108
-
109
- ```shell
110
- pip install -U "huggingface_hub[cli]"
111
- ```
112
-
113
- Then, download the individual model file to a local directory:
114
-
115
- ```shell
116
- huggingface-cli download tensorblock/Qwen2.5-7B-Instruct-GGUF --include "Qwen2.5-7B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
117
- ```
118
-
119
- If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
120
-
121
- ```shell
122
- huggingface-cli download tensorblock/Qwen2.5-7B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
123
- ```
1
+ ---
2
+ license: apache-2.0
3
+ license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
4
+ language:
5
+ - zho
6
+ - eng
7
+ - fra
8
+ - spa
9
+ - por
10
+ - deu
11
+ - ita
12
+ - rus
13
+ - jpn
14
+ - kor
15
+ - vie
16
+ - tha
17
+ - ara
18
+ pipeline_tag: text-generation
19
+ base_model: Qwen/Qwen2.5-7B-Instruct
20
+ tags:
21
+ - chat
22
+ - TensorBlock
23
+ - GGUF
24
+ library_name: transformers
25
+ ---
26
+
27
+ <div style="width: auto; margin-left: auto; margin-right: auto">
28
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
29
+ </div>
30
+ <div style="display: flex; justify-content: space-between; width: 100%;">
31
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
32
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
33
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
34
+ </p>
35
+ </div>
36
+ </div>
37
+
38
+ ## Qwen/Qwen2.5-7B-Instruct - GGUF
39
+
40
+ This repo contains GGUF format model files for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
41
+
42
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
43
+
44
+
45
+ ## Our projects
46
+ <table border="1" cellspacing="0" cellpadding="10">
47
+ <tr>
48
+ <th style="font-size: 25px;">Awesome MCP Servers</th>
49
+ <th style="font-size: 25px;">TensorBlock Studio</th>
50
+ </tr>
51
+ <tr>
52
+ <th><img src="https://imgur.com/2Xov7B7.jpeg" alt="Project A" width="450"/></th>
53
+ <th><img src="https://imgur.com/pJcmF5u.jpeg" alt="Project B" width="450"/></th>
54
+ </tr>
55
+ <tr>
56
+ <th>A comprehensive collection of Model Context Protocol (MCP) servers.</th>
57
+ <th>A lightweight, open, and extensible multi-LLM interaction studio.</th>
58
+ </tr>
59
+ <tr>
60
+ <th>
61
+ <a href="https://github.com/TensorBlock/awesome-mcp-servers" target="_blank" style="
62
+ display: inline-block;
63
+ padding: 8px 16px;
64
+ background-color: #FF7F50;
65
+ color: white;
66
+ text-decoration: none;
67
+ border-radius: 6px;
68
+ font-weight: bold;
69
+ font-family: sans-serif;
70
+ ">👀 See what we built 👀</a>
71
+ </th>
72
+ <th>
73
+ <a href="https://github.com/TensorBlock/TensorBlock-Studio" target="_blank" style="
74
+ display: inline-block;
75
+ padding: 8px 16px;
76
+ background-color: #FF7F50;
77
+ color: white;
78
+ text-decoration: none;
79
+ border-radius: 6px;
80
+ font-weight: bold;
81
+ font-family: sans-serif;
82
+ ">👀 See what we built 👀</a>
83
+ </th>
84
+ </tr>
85
+ </table>
86
+ ## Prompt template
87
+
88
+
89
+ ```
90
+ <|im_start|>system
91
+ {system_prompt}<|im_end|>
92
+ <|im_start|>user
93
+ {prompt}<|im_end|>
94
+ <|im_start|>assistant
95
+ ```
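For reference, the ChatML-style template above can be assembled programmatically (a minimal sketch; `build_prompt` is a hypothetical helper, not part of this repo):

```python
def build_prompt(system_prompt: str, prompt: str) -> str:
    # Assemble the ChatML-style prompt shown in the template above.
    return (
        "<|im_start|>system\n"
        f"{system_prompt}<|im_end|>\n"
        "<|im_start|>user\n"
        f"{prompt}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(build_prompt("You are a helpful assistant.", "Hello!"))
```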
96
+
97
+ ## Model file specification
98
+
99
+ | Filename | Quant type | File Size | Description |
100
+ | -------- | ---------- | --------- | ----------- |
101
+ | [Qwen2.5-7B-Instruct-Q2_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q2_K.gguf) | Q2_K | 2.809 GB | smallest, significant quality loss - not recommended for most purposes |
102
+ | [Qwen2.5-7B-Instruct-Q3_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_S.gguf) | Q3_K_S | 3.253 GB | very small, high quality loss |
103
+ | [Qwen2.5-7B-Instruct-Q3_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_M.gguf) | Q3_K_M | 3.547 GB | very small, high quality loss |
104
+ | [Qwen2.5-7B-Instruct-Q3_K_L.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q3_K_L.gguf) | Q3_K_L | 3.808 GB | small, substantial quality loss |
105
+ | [Qwen2.5-7B-Instruct-Q4_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_0.gguf) | Q4_0 | 4.127 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
106
+ | [Qwen2.5-7B-Instruct-Q4_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_K_S.gguf) | Q4_K_S | 4.152 GB | small, greater quality loss |
107
+ | [Qwen2.5-7B-Instruct-Q4_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q4_K_M.gguf) | Q4_K_M | 4.361 GB | medium, balanced quality - recommended |
108
+ | [Qwen2.5-7B-Instruct-Q5_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_0.gguf) | Q5_0 | 4.950 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
109
+ | [Qwen2.5-7B-Instruct-Q5_K_S.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_K_S.gguf) | Q5_K_S | 4.950 GB | large, low quality loss - recommended |
110
+ | [Qwen2.5-7B-Instruct-Q5_K_M.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf) | Q5_K_M | 5.071 GB | large, very low quality loss - recommended |
111
+ | [Qwen2.5-7B-Instruct-Q6_K.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q6_K.gguf) | Q6_K | 5.825 GB | very large, extremely low quality loss |
112
+ | [Qwen2.5-7B-Instruct-Q8_0.gguf](https://huggingface.co/tensorblock/Qwen2.5-7B-Instruct-GGUF/blob/main/Qwen2.5-7B-Instruct-Q8_0.gguf) | Q8_0 | 7.542 GB | very large, extremely low quality loss - not recommended |
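As a rough aid for choosing a file, the sizes in the table above can be compared against an available-memory budget (a sketch; `largest_quant_under` is a hypothetical helper, and actual runtime memory use also depends on context size and KV cache):

```python
# File sizes (GB) copied from the specification table above.
QUANT_SIZES_GB = {
    "Q2_K": 2.809, "Q3_K_S": 3.253, "Q3_K_M": 3.547, "Q3_K_L": 3.808,
    "Q4_0": 4.127, "Q4_K_S": 4.152, "Q4_K_M": 4.361, "Q5_0": 4.950,
    "Q5_K_S": 4.950, "Q5_K_M": 5.071, "Q6_K": 5.825, "Q8_0": 7.542,
}

def largest_quant_under(budget_gb: float) -> str:
    """Return the largest quant whose file fits under the budget.

    Assumes at least one file fits; raises ValueError otherwise.
    """
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        raise ValueError("no quant fits in the given budget")
    return max(fitting, key=fitting.get)

print(largest_quant_under(5.0))
```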
113
+
114
+
115
+ ## Downloading instructions
116
+
117
+ ### Command line
118
+
119
+ First, install the Hugging Face CLI:
120
+
121
+ ```shell
122
+ pip install -U "huggingface_hub[cli]"
123
+ ```
124
+
125
+ Then, download the individual model file to a local directory:
126
+
127
+ ```shell
128
+ huggingface-cli download tensorblock/Qwen2.5-7B-Instruct-GGUF --include "Qwen2.5-7B-Instruct-Q2_K.gguf" --local-dir MY_LOCAL_DIR
129
+ ```
130
+
131
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
132
+
133
+ ```shell
134
+ huggingface-cli download tensorblock/Qwen2.5-7B-Instruct-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
135
+ ```