Improve language tag

#3
by lbourdois - opened
Files changed (1)
  1. README.md +92 -80
README.md CHANGED
@@ -1,80 +1,92 @@
- ---
- license: other
- license_name: tongyi-qianwen
- license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-72B-Instruct
- tags:
- - chat
- quantized_by: bartowski
- base_model_relation: quantized
- ---
-
- ## Exllama v2 Quantizations of Qwen2.5-72B-Instruct
-
- Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.2.2">turboderp's ExLlamaV2 v0.2.2</a> for quantization.
-
- <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>
-
- Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.
-
- Conversion was done using the default calibration dataset.
-
- Default arguments were used except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
-
- Original model: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
-
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/8_0">8.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/6_5">6.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/5_0">5.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/4_25">4.25 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/3_5">3.5 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/3_0">3.0 bits per weight</a>
-
- <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/2_2">2.2 bits per weight</a>
-
-
- ## Download instructions
-
- With git:
-
- ```shell
- git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2
- ```
-
- With huggingface hub (credit to TheBloke for instructions):
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Qwen2.5-72B-Instruct-exl2`:
-
- ```shell
- mkdir Qwen2.5-72B-Instruct-exl2
- huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --local-dir Qwen2.5-72B-Instruct-exl2
- ```
-
- To download from a different branch, add the `--revision` parameter:
-
- Linux:
-
- ```shell
- mkdir Qwen2.5-72B-Instruct-exl2-6_5
- huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --revision 6_5 --local-dir Qwen2.5-72B-Instruct-exl2-6_5
- ```
-
- Windows (which apparently doesn't like _ in folders sometimes?):
-
- ```shell
- mkdir Qwen2.5-72B-Instruct-exl2-6.5
- huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --revision 6_5 --local-dir Qwen2.5-72B-Instruct-exl2-6.5
- ```
+ ---
+ license: other
+ license_name: tongyi-qianwen
+ license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-72B-Instruct
+ tags:
+ - chat
+ quantized_by: bartowski
+ base_model_relation: quantized
+ ---
+
+ ## Exllama v2 Quantizations of Qwen2.5-72B-Instruct
+
+ Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.2.2">turboderp's ExLlamaV2 v0.2.2</a> for quantization.
+
+ <b>The "main" branch only contains the measurement.json; download one of the other branches for the model (see below)</b>
+
+ Each branch contains an individual bits-per-weight quantization, with the main one containing only the measurement.json for further conversions.
+
+ Conversion was done using the default calibration dataset.
+
+ Default arguments were used except when the bits per weight is above 6.0; in that case the lm_head layer is quantized at 8 bits per weight instead of the default 6.
+
+ Original model: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
+
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/8_0">8.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/6_5">6.5 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/5_0">5.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/4_25">4.25 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/3_5">3.5 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/3_0">3.0 bits per weight</a>
+
+ <a href="https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2/tree/2_2">2.2 bits per weight</a>
+
+
+ ## Download instructions
+
+ With git:
+
+ ```shell
+ git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Qwen2.5-72B-Instruct-exl2
+ ```
+
+ With huggingface hub (credit to TheBloke for instructions):
+
+ ```shell
+ pip3 install huggingface-hub
+ ```
+
+ To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Qwen2.5-72B-Instruct-exl2`:
+
+ ```shell
+ mkdir Qwen2.5-72B-Instruct-exl2
+ huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --local-dir Qwen2.5-72B-Instruct-exl2
+ ```
+
+ To download from a different branch, add the `--revision` parameter:
+
+ Linux:
+
+ ```shell
+ mkdir Qwen2.5-72B-Instruct-exl2-6_5
+ huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --revision 6_5 --local-dir Qwen2.5-72B-Instruct-exl2-6_5
+ ```
+
+ Windows (which apparently doesn't like _ in folders sometimes?):
+
+ ```shell
+ mkdir Qwen2.5-72B-Instruct-exl2-6.5
+ huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 --revision 6_5 --local-dir Qwen2.5-72B-Instruct-exl2-6.5
+ ```
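
Since the card above notes that the `main` branch is only useful for its measurement.json, a lighter option than downloading a whole branch is to pull just that one file. A minimal sketch with `huggingface-cli`; the local folder name is only an example:

```shell
# Fetch only the shared measurement.json from the main branch
huggingface-cli download bartowski/Qwen2.5-72B-Instruct-exl2 measurement.json --local-dir Qwen2.5-72B-Instruct-exl2
```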
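
The quantization settings the card describes (default calibration dataset, default arguments, lm_head at 8 bits above 6.0 bpw) correspond to ExLlamaV2's convert script. A rough sketch of what a 6.5 bpw conversion might look like, assuming the v0.2.2 flag names and placeholder paths; verify against the ExLlamaV2 documentation before running:

```shell
# Sketch only: paths are placeholders and flags are assumed from ExLlamaV2 v0.2.2's convert.py
# -m reuses the measurement.json above so the measurement pass isn't repeated
# -b sets the target bits per weight; -hb 8 raises lm_head to 8 bits (applied here since 6.5 > 6.0)
python convert.py \
  -i /path/to/Qwen2.5-72B-Instruct \
  -o /path/to/work-dir \
  -cf /path/to/Qwen2.5-72B-Instruct-exl2-6_5 \
  -m /path/to/measurement.json \
  -b 6.5 \
  -hb 8
```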