Tarek07 committed · Commit 6677534 (verified) · 1 parent: b0a02ec

Update README.md

---
base_model:
- TareksLab/Malediction-V2-LLaMa-70B
- TareksLab/Wordsmith-V9-LLaMa-70B
- TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
- TareksLab/Stylizer-V2-LLaMa-70B
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with [TareksLab/Stylizer-V2-LLaMa-70B](https://huggingface.co/TareksLab/Stylizer-V2-LLaMa-70B) as the base.
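For intuition, the DARE half of the method sparsifies each model's "task vector" (its weights minus the base model's weights) by randomly dropping elements and rescaling the survivors so the delta's expected value is preserved. The snippet below is a minimal illustrative sketch of that idea on a flat list of floats, not mergekit's actual tensor implementation; the function and variable names are invented for illustration.

```python
import random

def dare(delta, density, seed=0):
    """Drop-And-REscale sketch: keep each element of the task vector
    with probability `density`, zero the rest, and rescale survivors
    by 1/density so the delta's expected value is unchanged."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# toy "task vector" (finetuned weights minus base weights)
delta = [0.4, -0.2, 0.1, 0.3, -0.5, 0.2]
sparse = dare(delta, density=0.5)
print(sparse)
```

With `density: 0.5` (as in the configuration below), roughly half the elements survive and each surviving element is doubled.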
### Models Merged

The following models were included in the merge:
* [TareksLab/Malediction-V2-LLaMa-70B](https://huggingface.co/TareksLab/Malediction-V2-LLaMa-70B)
* [TareksLab/Wordsmith-V9-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V9-LLaMa-70B)
* [TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B](https://huggingface.co/TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: TareksLab/Wordsmith-V9-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Malediction-V2-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Dungeons-and-Dragons-V1.2-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
  - model: TareksLab/Stylizer-V2-LLaMa-70B
    parameters:
      weight: 0.25
      density: 0.5
merge_method: dare_ties
base_model: TareksLab/Stylizer-V2-LLaMa-70B
parameters:
  normalize: false
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
```
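To see how the pieces of this configuration fit together, here is a hedged, illustrative pure-Python sketch of a `dare_ties`-style merge for a single tensor (represented as a flat list of floats). It is not mergekit's actual implementation, and the helper names are invented: each model's task vector is DARE-sparsified, scaled by its `weight`, passed through TIES-style sign election, and the agreeing contributions are summed onto the base; with `normalize: false`, the weighted sum is not renormalized.

```python
import random

def dare(delta, density, rng):
    """DARE sketch: keep each element with probability `density`,
    zero the rest, rescale survivors by 1/density."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties(base, models, weights, density, seed=0):
    """Illustrative dare_ties-style merge of one tensor:
    task vectors -> DARE -> weighting -> TIES sign election -> sum."""
    rng = random.Random(seed)
    # 1. task vectors: each model's weights minus the base's
    deltas = [[m - b for m, b in zip(model, base)] for model in models]
    # 2. DARE sparsification of each task vector
    sparse = [dare(d, density, rng) for d in deltas]
    # 3. scale each sparse delta by its configured weight
    weighted = [[w * x for x in d] for w, d in zip(weights, sparse)]
    merged = []
    for i, b in enumerate(base):
        total = sum(d[i] for d in weighted)
        sign = 1.0 if total >= 0.0 else -1.0
        # 4. TIES-style sign election: keep only contributions whose
        #    sign matches the elected majority sign for this parameter
        kept = sum(d[i] for d in weighted if d[i] * sign > 0.0)
        # 5. normalize: false -> sum onto the base without renormalizing
        merged.append(b + kept)
    return merged

# deterministic toy check: density=1.0 means DARE drops nothing
base = [0.0, 0.0, 0.0]
models = [[1.0, 1.0, 1.0], [-1.0, 1.0, 1.0]]
merged = dare_ties(base, models, weights=[0.5, 0.5], density=1.0)
print(merged)  # the conflicting first parameter keeps only the elected +0.5
```

In the toy run, the two models disagree on the first parameter; sign election keeps only the contribution matching the elected sign, while the agreeing parameters simply accumulate their weighted deltas.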