Improve language tag

#1 opened by lbourdois
Files changed (1)
  1. README.md +81 -68
README.md CHANGED
@@ -1,68 +1,81 @@
- ---
- base_model:
- - c10x/CoT-2.5
- - Qwen/Qwen2.5-7B
- - EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
- - huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
- - Cran-May/T.E-8.1
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [c10x/CoT-2.5](https://huggingface.co/c10x/CoT-2.5)
- * [EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1)
- * [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2)
- * [Cran-May/T.E-8.1](https://huggingface.co/Cran-May/T.E-8.1)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
-   - model: EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
-     parameters:
-       weight: 1
-       density: 1
-
-   - model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
-     parameters:
-       weight: 1
-       density: 1
-
-   - model: c10x/CoT-2.5
-     parameters:
-       weight: 0.1
-       density: 0.1
-
-   - model: Cran-May/T.E-8.1
-     parameters:
-       weight: 0.1
-       density: 0.1
-
- merge_method: ties
- base_model: Qwen/Qwen2.5-7B
- parameters:
-   density: 1
-   normalize: true
-   int8_mask: true
- dtype: bfloat16
-
- ```
+ ---
+ base_model:
+ - c10x/CoT-2.5
+ - Qwen/Qwen2.5-7B
+ - EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
+ - huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
+ - Cran-May/T.E-8.1
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base model.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [c10x/CoT-2.5](https://huggingface.co/c10x/CoT-2.5)
+ * [EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1)
+ * [huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2](https://huggingface.co/huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2)
+ * [Cran-May/T.E-8.1](https://huggingface.co/Cran-May/T.E-8.1)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+   - model: EVA-UNIT-01/EVA-Qwen2.5-7B-v0.1
+     parameters:
+       weight: 1
+       density: 1
+
+   - model: huihui-ai/Qwen2.5-7B-Instruct-abliterated-v2
+     parameters:
+       weight: 1
+       density: 1
+
+   - model: c10x/CoT-2.5
+     parameters:
+       weight: 0.1
+       density: 0.1
+
+   - model: Cran-May/T.E-8.1
+     parameters:
+       weight: 0.1
+       density: 0.1
+
+ merge_method: ties
+ base_model: Qwen/Qwen2.5-7B
+ parameters:
+   density: 1
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+
+ ```
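
For context, the per-model `weight` and `density` knobs in the config above control the TIES procedure: each model's task vector (its delta from the base) is trimmed to the top `density` fraction of entries by magnitude, a per-parameter sign is elected across the trimmed vectors, and only entries agreeing with that sign are combined. Below is a minimal numpy sketch of that idea on toy vectors; it is an illustration of the concept only, not mergekit's implementation, and the `ties_merge` function and example values are invented for this sketch.

```python
import numpy as np

def ties_merge(base, models, weights, density):
    """Toy TIES-style merge: trim, elect sign, average agreeing entries."""
    # Task vectors: difference between each fine-tuned model and the base.
    deltas = [m - base for m in models]

    # Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(round(density * d.size)))
        thresh = np.sort(np.abs(d))[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))

    # Elect sign: sign of the weighted sum, per parameter.
    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    sign = np.sign(stacked.sum(axis=0))

    # Disjoint merge: average only nonzero entries agreeing with the sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    num = (stacked * agree).sum(axis=0)
    den = np.maximum(agree.sum(axis=0), 1)  # avoid division by zero
    return base + num / den

base = np.zeros(4)
m1 = np.array([1.0, -2.0, 0.5, 0.0])
m2 = np.array([1.0,  2.0, 0.0, 0.1])
merged = ties_merge(base, [m1, m2], weights=[1.0, 1.0], density=1.0)
print(merged)
```

Note that `density: 1`, as used for the two heavily weighted models in the config, makes the trimming step a no-op; the 0.1-weight models contribute only lightly. To reproduce a merge like this, the YAML is typically saved to a file (e.g. `config.yaml`) and passed to mergekit's `mergekit-yaml` command-line tool.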