lbourdois committed
Commit ef8fe45 · verified · 1 Parent(s): 0175942

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the `language` tag to improve the model's discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
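
The practical effect of the `language` metadata is on search: models tagged this way surface in the Hub's language filters. As a rough illustration (not part of this PR), recent versions of `huggingface_hub` expose a `language` argument on `list_models`; whether the filter matches the ISO 639-3 codes used here ("fra") or the two-letter equivalents ("fr") is an assumption worth checking:

```python
# Sketch: find Hub models that declare French in their card metadata.
# Assumes a recent huggingface_hub release where list_models() accepts
# a `language` filter; "fra" mirrors the ISO 639-3 codes added in this PR.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(language="fra", limit=5):
    print(model.id)
```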

Files changed (1)
  1. README.md +72 -59
README.md CHANGED
@@ -1,59 +1,72 @@
- ---
- base_model:
- - fblgit/cybertron-v4-qw7B-MGS
- - bunnycore/QandoraExp-7B-Persona
- - Qwen/Qwen2.5-7B
- - rombodawg/Rombos-LLM-V2.5-Qwen-7b
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the della merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
- * [bunnycore/QandoraExp-7B-Persona](https://huggingface.co/bunnycore/QandoraExp-7B-Persona)
- * [rombodawg/Rombos-LLM-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
-
- models:
-   - model: bunnycore/QandoraExp-7B-Persona
-     parameters:
-       weight: 0.2
-       density: 0.2
-   - model: rombodawg/Rombos-LLM-V2.5-Qwen-7b
-     parameters:
-       weight: 0.4
-       density: 0.4
-       lambda: 0.9
-   - model: fblgit/cybertron-v4-qw7B-MGS
-     parameters:
-       weight: 0.4
-       density: 0.4
-       lambda: 0.9
- merge_method: della
- base_model: Qwen/Qwen2.5-7B
- parameters:
-   weight: 1
-   density: 1
-   lambda: 0.9
- int8_mask: true
- dtype: bfloat16
-
- ```
+ ---
+ base_model:
+ - fblgit/cybertron-v4-qw7B-MGS
+ - bunnycore/QandoraExp-7B-Persona
+ - Qwen/Qwen2.5-7B
+ - rombodawg/Rombos-LLM-V2.5-Qwen-7b
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the della merge method using [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
+ * [bunnycore/QandoraExp-7B-Persona](https://huggingface.co/bunnycore/QandoraExp-7B-Persona)
+ * [rombodawg/Rombos-LLM-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+
+ models:
+   - model: bunnycore/QandoraExp-7B-Persona
+     parameters:
+       weight: 0.2
+       density: 0.2
+   - model: rombodawg/Rombos-LLM-V2.5-Qwen-7b
+     parameters:
+       weight: 0.4
+       density: 0.4
+       lambda: 0.9
+   - model: fblgit/cybertron-v4-qw7B-MGS
+     parameters:
+       weight: 0.4
+       density: 0.4
+       lambda: 0.9
+ merge_method: della
+ base_model: Qwen/Qwen2.5-7B
+ parameters:
+   weight: 1
+   density: 1
+   lambda: 0.9
+ int8_mask: true
+ dtype: bfloat16
+
+ ```