schonsense committed on
Commit 34b28c7 · verified · 1 Parent(s): 7297b71

Update README.md

Files changed (1): README.md (+70 −59)
README.md CHANGED
---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
- huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
- meta-llama/Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# 70B_unstruct

This is an attempt to take Llama 3.3 Instruct and peel back some of its instruction-following overtraining and positivity. Using 3.3 Instruct as the base model and merging in both the 3.1 base and the 3.3 abliteration, this merge subtracts out ~80% of the largest changes between 3.1 and 3.3 at 0.5 weight. In addition, the abliteration of the refusal pathway is added (or rather, subtracted) back into the model at 0.5 weight.

In theory this creates a model that is ~75% refusal-abliterated, with ~70% of its instruction-following capability intact and somewhat healed, on top of having its instruct overtuning rolled back by ~50%.

Key points:
* The Δw task vector of the abliterated model is exclusively the opposed refusal vectors.
* The Δw task vector of the 3.1 base is the reversal of all of the instruct (and instruct-related) tuning that moves 3.1 to 3.3 Instruct.
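In task-vector terms, the recipe can be sketched with toy tensors. This is a minimal illustration, not the actual merge code (DELLA additionally prunes and rescales each delta, which is omitted here); all tensor values are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (4, 4)

# Toy stand-ins for the checkpoints (hypothetical values).
w_33_inst = rng.normal(size=shape)       # merge base: Llama-3.3-70B-Instruct
instruct_delta = rng.normal(size=shape)  # tuning that turned 3.1 base into 3.3 Instruct
refusal_delta = rng.normal(size=shape)   # directions removed by abliteration

w_31_base = w_33_inst - instruct_delta   # so Δw(3.1 base)     = -instruct_delta
w_ablit = w_33_inst - refusal_delta      # so Δw(abliterated)  = -refusal_delta

# Linear combination of the two task vectors at 0.5 weight each:
merged = w_33_inst + 0.5 * (w_31_base - w_33_inst) + 0.5 * (w_ablit - w_33_inst)

# Merging 3.1 base at weight 0.5 rolls the instruct tuning back by half,
# and half of the refusal directions are subtracted on top of that:
expected = w_33_inst - 0.5 * instruct_delta - 0.5 * refusal_delta
assert np.allclose(merged, expected)
```

The identity on the last line is why a 0.5 weight on the 3.1 base corresponds to the "~50% rollback" of instruct tuning claimed above.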

### Merge Method

This model was merged with the [DELLA](https://arxiv.org/abs/2406.11617) merge method, using [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as the base.
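DELLA's distinguishing step is magnitude-informed probabilistic pruning of each delta, with survivors rescaled so the expected update is unbiased. A rough sketch of that idea (the keep-probability schedule here is illustrative, not DELLA's exact formula; `density` matches the config parameter below):

```python
import numpy as np

rng = np.random.default_rng(0)

def della_prune(delta, density):
    """Drop delta entries with probability inversely related to magnitude,
    then rescale survivors by 1/p to keep the expected update unbiased."""
    mag = np.abs(delta)
    ranks = mag.argsort().argsort()  # 0 = smallest magnitude
    # Keep probabilities centered on `density`, higher for larger magnitudes.
    spread = min(density, 1.0 - density)
    p_keep = density + (ranks / (len(delta) - 1) - 0.5) * 2 * spread
    keep = rng.random(len(delta)) < p_keep
    return np.where(keep, delta / p_keep, 0.0)

delta = rng.normal(size=10_000)
pruned = della_prune(delta, density=0.7)
kept_frac = (pruned != 0).mean()  # ≈ 0.7, matching the density parameter
```

With `density: 0.7`, roughly 70% of the delta entries survive, biased toward the larger-magnitude ones.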

### Models Merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The following models were included in the merge:
* [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned)
* [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
    parameters:
      density: 0.7
      epsilon: 0.2
      weight: 0.5

  - model: meta-llama/Llama-3.1-70B
    parameters:
      density: 0.8
      epsilon: 0.1
      weight: 0.5

  - model: meta-llama/Llama-3.3-70B-Instruct

merge_method: della
base_model: meta-llama/Llama-3.3-70B-Instruct
tokenizer_source: meta-llama/Llama-3.3-70B-Instruct
parameters:
  normalize: false
  int8_mask: false
  lambda: 1.0

dtype: float32
out_dtype: bfloat16
```
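
To reproduce a merge from a configuration like this, mergekit's command-line entry point can be invoked roughly as follows (a sketch: the config filename and output path are illustrative, and `--cuda` assumes a CUDA-capable machine with enough memory for 70B-class weights):

```shell
pip install mergekit
mergekit-yaml config.yaml ./70B_unstruct --cuda
```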