Improve language tag

#2
by lbourdois - opened
Files changed (1)
  1. README.md +93 -79
README.md CHANGED
@@ -1,80 +1,94 @@
- ---
- base_model:
- - EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
- - Cran-May/T.E-8.1
- - huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
- - bunnycore/Qwen2.5-7B-HyperMix
- - Qwen/Qwen2.5-7B-Instruct
- - Qwen/Qwen2.5-7B
- - c10x/CoT-2.5
- library_name: transformers
- tags:
- - mergekit
- - merge
- license: apache-2.0
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1)
- * [Cran-May/T.E-8.1](https://huggingface.co/Cran-May/T.E-8.1)
- * [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2)
- * [bunnycore/Qwen2.5-7B-HyperMix](https://huggingface.co/bunnycore/Qwen2.5-7B-HyperMix)
- * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
- * [c10x/CoT-2.5](https://huggingface.co/c10x/CoT-2.5)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
- - model: EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
-   parameters:
-     weight: 1
-     density: 1
-
- - model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
-   parameters:
-     weight: 1
-     density: 1
-
- - model: bunnycore/Qwen2.5-7B-HyperMix
-   parameters:
-     weight: 0.8
-     density: 0.8
-
- - model: c10x/CoT-2.5
-   parameters:
-     weight: 0.5
-     density: 0.5
-
- - model: Cran-May/T.E-8.1
-   parameters:
-     weight: 0.5
-     density: 0.5
-
- - model: Qwen/Qwen2.5-7B-Instruct
-   parameters:
-     weight: 0.3
-     density: 0.3
-
- merge_method: ties
- base_model: Qwen/Qwen2.5-7B
- parameters:
-   density: 1
-   normalize: true
-   int8_mask: true
- dtype: bfloat16
-
+ ---
+ base_model:
+ - EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
+ - Cran-May/T.E-8.1
+ - huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
+ - bunnycore/Qwen2.5-7B-HyperMix
+ - Qwen/Qwen2.5-7B-Instruct
+ - Qwen/Qwen2.5-7B
+ - c10x/CoT-2.5
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: apache-2.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1)
+ * [Cran-May/T.E-8.1](https://huggingface.co/Cran-May/T.E-8.1)
+ * [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2)
+ * [bunnycore/Qwen2.5-7B-HyperMix](https://huggingface.co/bunnycore/Qwen2.5-7B-HyperMix)
+ * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
+ * [c10x/CoT-2.5](https://huggingface.co/c10x/CoT-2.5)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+ - model: EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
+   parameters:
+     weight: 1
+     density: 1
+
+ - model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
+   parameters:
+     weight: 1
+     density: 1
+
+ - model: bunnycore/Qwen2.5-7B-HyperMix
+   parameters:
+     weight: 0.8
+     density: 0.8
+
+ - model: c10x/CoT-2.5
+   parameters:
+     weight: 0.5
+     density: 0.5
+
+ - model: Cran-May/T.E-8.1
+   parameters:
+     weight: 0.5
+     density: 0.5
+
+ - model: Qwen/Qwen2.5-7B-Instruct
+   parameters:
+     weight: 0.3
+     density: 0.3
+
+ merge_method: ties
+ base_model: Qwen/Qwen2.5-7B
+ parameters:
+   density: 1
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+
  ```
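
With `normalize: true`, mergekit rescales the per-model weights so they sum to 1 before combining the task vectors, so what matters is each model's relative share rather than the raw values. A minimal sketch of that rescaling in plain Python, with the weights copied from the config above (the normalization shown is an assumption about mergekit's behavior, not its actual code):

```python
# Per-model TIES weights, taken from the YAML config in this README.
weights = {
    "EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1": 1.0,
    "huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2": 1.0,
    "bunnycore/Qwen2.5-7B-HyperMix": 0.8,
    "c10x/CoT-2.5": 0.5,
    "Cran-May/T.E-8.1": 0.5,
    "Qwen/Qwen2.5-7B-Instruct": 0.3,
}

# Total mass before normalization: 1 + 1 + 0.8 + 0.5 + 0.5 + 0.3 = 4.1
total = sum(weights.values())

# Each model's effective share of the merged task vector.
normalized = {name: w / total for name, w in weights.items()}

for name, share in normalized.items():
    print(f"{share:.3f}  {name}")
```

So the two full-weight models (EVA and the abliterated Instruct) each contribute roughly a quarter of the merged delta, while Qwen2.5-7B-Instruct at 0.3 contributes about 7%.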