Improve language tags

#1
by lbourdois - opened
Files changed (1)
  1. README.md +55 -42
README.md CHANGED
@@ -1,42 +1,55 @@
- ---
- base_model:
- - prithivMLmods/QwQ-LCoT-7B-Instruct
- - Qwen/Qwen2.5-7B-Instruct
- - prithivMLmods/QwQ-LCoT2-7B-Instruct
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # **QwQ-LCoT1-Merged**
-
- QwQ-LCoT-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B base model and was fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem solving, making it well suited to instruction following, text generation, and complex reasoning.
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ### Merge Method
-
- This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [prithivMLmods/QwQ-LCoT-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT-7B-Instruct)
- * [prithivMLmods/QwQ-LCoT2-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
- - model: prithivMLmods/QwQ-LCoT2-7B-Instruct
- - model: prithivMLmods/QwQ-LCoT-7B-Instruct
- merge_method: model_stock
- base_model: Qwen/Qwen2.5-7B-Instruct
- normalize: true
- int8_mask: true
- dtype: bfloat16
- ```
+ ---
+ base_model:
+ - prithivMLmods/QwQ-LCoT-7B-Instruct
+ - Qwen/Qwen2.5-7B-Instruct
+ - prithivMLmods/QwQ-LCoT2-7B-Instruct
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # **QwQ-LCoT1-Merged**
+
+ QwQ-LCoT-7B-Instruct is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the Qwen2.5-7B base model and was fine-tuned on chain-of-thought (CoT) reasoning datasets. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem solving, making it well suited to instruction following, text generation, and complex reasoning.
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ### Merge Method
+
+ This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [prithivMLmods/QwQ-LCoT-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT-7B-Instruct)
+ * [prithivMLmods/QwQ-LCoT2-7B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT2-7B-Instruct)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+ - model: prithivMLmods/QwQ-LCoT2-7B-Instruct
+ - model: prithivMLmods/QwQ-LCoT-7B-Instruct
+ merge_method: model_stock
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ normalize: true
+ int8_mask: true
+ dtype: bfloat16
+ ```
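As an illustrative aside (not part of the card itself): the configuration above is plain YAML, so it can be inspected programmatically before running a merge, for example to confirm the merge method and the list of source models. This sketch assumes PyYAML is installed; the `config_text` string simply reproduces the configuration shown in the card.

```python
import yaml  # PyYAML (assumed installed)

# The merge configuration exactly as it appears in the model card.
config_text = """
models:
- model: prithivMLmods/QwQ-LCoT2-7B-Instruct
- model: prithivMLmods/QwQ-LCoT-7B-Instruct
merge_method: model_stock
base_model: Qwen/Qwen2.5-7B-Instruct
normalize: true
int8_mask: true
dtype: bfloat16
"""

# Parse the YAML into a plain dict and pull out the fields of interest.
config = yaml.safe_load(config_text)
source_models = [entry["model"] for entry in config["models"]]

print(config["merge_method"])  # model_stock
print(config["base_model"])    # Qwen/Qwen2.5-7B-Instruct
print(source_models)
```

A file with exactly this structure is what mergekit's command-line tooling consumes to perform the merge, with `base_model` anchoring the Model Stock interpolation and `models` listing the checkpoints being merged into it.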