datatab committed
Commit fcadf80 · verified · 1 parent: d285b2f

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -3,17 +3,17 @@ tags:
 - merge
 - mergekit
 - lazymergekit
-- datatab/YugoGPT-Alpaca-v1-epoch1-good
+- datatab/YugoGPT-Alpaca-v1
 - FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
 base_model:
-- datatab/YugoGPT-Alpaca-v1-epoch1-good
+- datatab/YugoGPT-Alpaca-v1
 - FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
 ---
 
 # YugoGPT-Alpaca-merged-Mistral-alpaca
 
 YugoGPT-Alpaca-merged-Mistral-alpaca is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [datatab/YugoGPT-Alpaca-v1-epoch1-good](https://huggingface.co/datatab/YugoGPT-Alpaca-v1-epoch1-good)
+* [datatab/YugoGPT-Alpaca-v1](https://huggingface.co/datatab/YugoGPT-Alpaca-v1)
 * [FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin](https://huggingface.co/FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin)
 
 ## 🧩 Configuration
@@ -21,12 +21,12 @@ YugoGPT-Alpaca-merged-Mistral-alpaca is a merge of the following models using [L
 ```yaml
 slices:
   - sources:
-      - model: datatab/YugoGPT-Alpaca-v1-epoch1-good
+      - model: datatab/YugoGPT-Alpaca-v1
         layer_range: [0, 32]
       - model: FlexingD/yarn-mistral-7B-64k-instruct-alpaca-cleaned-origin
         layer_range: [0, 32]
 merge_method: slerp
-base_model: datatab/YugoGPT-Alpaca-v1-epoch1-good
+base_model: datatab/YugoGPT-Alpaca-v1
 parameters:
   t:
     - filter: self_attn
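For context on the `merge_method: slerp` line in the config above: spherical linear interpolation blends two weight tensors along the arc between their directions rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. Below is a minimal pure-Python sketch of the idea, not mergekit's actual implementation (which operates on full model tensors and takes its per-layer blend factor `t` from the `parameters` block, truncated in the diff):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions instead of the straight chord.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors, clamped for acos.
    dot = sum(a * b for a, b in zip(v0, v1)) / max(norm0 * norm1, eps)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

With orthogonal unit vectors, `slerp(0.5, [1, 0], [0, 1])` stays on the unit circle (≈ `[0.707, 0.707]`), whereas plain averaging would shrink the result to `[0.5, 0.5]`.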