mllm-dev committed on
Commit 6905f7b · verified · parent: a801b35

Upload folder using huggingface_hub

Files changed (2):
  1. README.md (+6 −6)
  2. mergekit_config.yml (+1 −1)
README.md CHANGED

```diff
@@ -1,10 +1,10 @@
 ---
 base_model:
-- mllm-dev/gpt2_f_experiment_1
 - mllm-dev/gpt2_f_experiment_0
 - mllm-dev/gpt2_f_experiment_2
-- mllm-dev/gpt2_f_experiment_4
 - mllm-dev/gpt2_f_experiment_3
+- mllm-dev/gpt2_f_experiment_4
+- mllm-dev/gpt2_f_experiment_1
 library_name: transformers
 tags:
 - mergekit
@@ -18,15 +18,15 @@ This is a merge of pre-trained language models created using [mergekit](https://
 ## Merge Details
 ### Merge Method
 
-This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [mllm-dev/gpt2_f_experiment_0](https://huggingface.co/mllm-dev/gpt2_f_experiment_0) as a base.
+This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mllm-dev/gpt2_f_experiment_0](https://huggingface.co/mllm-dev/gpt2_f_experiment_0) as a base.
 
 ### Models Merged
 
 The following models were included in the merge:
-* [mllm-dev/gpt2_f_experiment_1](https://huggingface.co/mllm-dev/gpt2_f_experiment_1)
 * [mllm-dev/gpt2_f_experiment_2](https://huggingface.co/mllm-dev/gpt2_f_experiment_2)
-* [mllm-dev/gpt2_f_experiment_4](https://huggingface.co/mllm-dev/gpt2_f_experiment_4)
 * [mllm-dev/gpt2_f_experiment_3](https://huggingface.co/mllm-dev/gpt2_f_experiment_3)
+* [mllm-dev/gpt2_f_experiment_4](https://huggingface.co/mllm-dev/gpt2_f_experiment_4)
+* [mllm-dev/gpt2_f_experiment_1](https://huggingface.co/mllm-dev/gpt2_f_experiment_1)
 
 ### Configuration
@@ -37,7 +37,7 @@ base_model:
   model:
     path: mllm-dev/gpt2_f_experiment_0
 dtype: float16
-merge_method: dare_ties
+merge_method: ties
 parameters:
   int8_mask: 1.0
   normalize: 1.0
```
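For context, the configuration after this change plausibly looks like the fragment below. Only the `base_model`/`dtype`/`merge_method`/`parameters` section appears in the diff; the `models:` list is an assumption reconstructed from the README's merged-models list and is not shown in this commit.

```yaml
# Hypothetical reconstruction — the models: list is inferred from the README,
# not from the diff above.
base_model:
  model:
    path: mllm-dev/gpt2_f_experiment_0
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
models:
- model:
    path: mllm-dev/gpt2_f_experiment_1
- model:
    path: mllm-dev/gpt2_f_experiment_2
- model:
    path: mllm-dev/gpt2_f_experiment_3
- model:
    path: mllm-dev/gpt2_f_experiment_4
```

A config of this shape is typically executed with mergekit's CLI, e.g. `mergekit-yaml mergekit_config.yml ./merged-model`.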
mergekit_config.yml CHANGED

```diff
@@ -2,7 +2,7 @@ base_model:
   model:
     path: mllm-dev/gpt2_f_experiment_0
 dtype: float16
-merge_method: dare_ties
+merge_method: ties
 parameters:
   int8_mask: 1.0
   normalize: 1.0
```
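The switch from `dare_ties` to `ties` drops DARE's random-drop-and-rescale step and keeps plain TIES-merging: build a task vector for each fine-tuned model against the base, trim each to its largest-magnitude entries, elect a per-parameter sign, and average only the values that agree with it. A minimal NumPy sketch of that procedure follows; it is a simplification for illustration, not mergekit's actual implementation, and the `density` argument is an assumed stand-in for mergekit's per-model density parameter.

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """TIES-style merge (simplified): trim, elect sign, disjoint mean."""
    deltas = []
    for ft in finetuned:
        tv = ft - base                                  # task vector vs. base
        k = int(np.ceil(density * tv.size))             # entries to keep
        cutoff = np.sort(np.abs(tv).ravel())[-k]        # magnitude threshold
        deltas.append(np.where(np.abs(tv) >= cutoff, tv, 0.0))  # trim
    deltas = np.stack(deltas)
    elected = np.sign(deltas.sum(axis=0))               # per-parameter sign vote
    agree = (np.sign(deltas) == elected) & (deltas != 0)
    count = agree.sum(axis=0)
    merged_tv = np.where(count > 0,
                         (deltas * agree).sum(axis=0) / np.maximum(count, 1),
                         0.0)                           # mean of agreeing values
    return base + merged_tv
```

With two toy "fine-tunes" that agree on one coordinate and conflict on the rest, only the agreeing coordinate survives the sign election.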