schonsense committed · verified
Commit 90f3557 · 1 parent: 25a8209

Update README.md

Files changed (1): README.md (+45 −43)
README.md CHANGED
@@ -1,43 +1,45 @@
 ---
 base_model:
 - Tarek07/Scripturient-V1.3-LLaMa-70B
 library_name: transformers
 tags:
 - mergekit
 - merge
 
 ---
 # Script_brain
 
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317d4867690c5b55e61ce3d/SEnHjTfquvN3xFtaB4ubX.png)
+
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
 
 ### Models Merged
 
 The following models were included in the merge:
 * D:\mergekit\LORAs\applied\Script_r128
 * [Tarek07/Scripturient-V1.3-LLaMa-70B](https://huggingface.co/Tarek07/Scripturient-V1.3-LLaMa-70B)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
   - model: D:\mergekit\LORAs\applied\Script_r128
   - model: Tarek07/Scripturient-V1.3-LLaMa-70B
 merge_method: slerp
 base_model: Tarek07/Scripturient-V1.3-LLaMa-70B
 dtype: float32
 out_dtype: bfloat16
 parameters:
   t: [
     0.3, 0.8, 0.9, 0.9, 0.7, 0.7, 0.2
   ]
 ```
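
For context on what the config above does: SLERP interpolates between the two models' weight tensors along an arc on the sphere rather than in a straight line, and the `t` list is a curve of interpolation weights that mergekit spreads across layer depth. The per-tensor rule can be sketched as follows (a minimal NumPy illustration; `slerp` here is a hypothetical helper, not mergekit's actual API, and the real implementation handles dtypes and schedules more carefully):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors.

    A sketch of the SLERP rule applied per tensor: t=0 returns v0,
    t=1 returns v1, and intermediate t values follow the arc between
    them instead of the straight chord.
    """
    a = v0.ravel().astype(np.float64)
    b = v1.ravel().astype(np.float64)
    # Cosine of the angle between the flattened tensors, clipped for safety.
    cos_omega = np.clip(
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0
    )
    omega = np.arccos(cos_omega)
    if abs(np.sin(omega)) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```

In the YAML, `t` is given as seven control points rather than a single value; mergekit interpolates such a list across the model's layers, so with `[0.3, 0.8, 0.9, 0.9, 0.7, 0.7, 0.2]` the first and last layers stay closer to one endpoint of the merge while the middle layers blend much more heavily toward the other.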