grimjim committed
Commit f9b5a18 · verified · 1 Parent(s): ffc4424

Update README.md

Files changed (1):
  1. README.md +24 -22
README.md CHANGED
@@ -1,23 +1,25 @@
- ---
- license: apache-2.0
- language:
- - en
- - fr
- - de
- - es
- - it
- - pt
- - ru
- - zh
- - ja
- base_model:
- - mistralai/Mistral-Nemo-Instruct-2407
- pipeline_tag: text-generation
- library_name: transformers
- ---
-
- # mistralai-Mistral-Nemo-Instruct-2407-12B-MPOA-v1
-
- MPOA (Magnitude-Preserving Othogonalized Ablation, AKA norm-preserving biprojected abliteration) has been applied the majority of layers in this model, but only to mlp.down_proj.weight layers. Unlike conventional abliteration, self_attn.o_proj.weight layers were left untouched.
-
+ ---
+ license: apache-2.0
+ language:
+ - en
+ - fr
+ - de
+ - es
+ - it
+ - pt
+ - ru
+ - zh
+ - ja
+ base_model:
+ - mistralai/Mistral-Nemo-Instruct-2407
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ # mistralai-Mistral-Nemo-Instruct-2407-12B-MPOA-v1
+
+ MPOA (Magnitude-Preserving Orthogonalized Ablation, AKA norm-preserving biprojected abliteration) has been applied to the majority of layers in this model, but only to the mlp.down_proj.weight layers. Unlike conventional abliteration, the self_attn.o_proj.weight layers were left untouched.
+
+ Compliance was not maximized for this model. The model appears to be near an edge of chaos with regard to some safety refusals, which should be suitable for varied text completion.
+
  More details to follow.
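
The README does not publish the ablation code, but the named technique (orthogonalized ablation with norm preservation) can be sketched roughly as follows. This is a minimal illustration under assumptions: the refusal direction is removed from the output (hidden-state) side of a down_proj weight matrix, and per-column L2 norms are restored afterward to preserve magnitudes. The function name and all details are hypothetical, not the author's actual implementation.

```python
import numpy as np

def mpoa_ablate_down_proj(W: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of magnitude-preserving orthogonalized ablation.

    W          : down_proj weight, shape (hidden_size, intermediate_size)
    refusal_dir: direction in hidden space to ablate from the layer's output

    Projects the refusal direction out of each column of W, then rescales
    each column back to its original L2 norm so weight magnitudes are kept.
    """
    u = refusal_dir / np.linalg.norm(refusal_dir)        # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=0, keepdims=True)
    W_abl = W - np.outer(u, u @ W)                       # (I - u u^T) W: remove u from column space
    new_norms = np.linalg.norm(W_abl, axis=0, keepdims=True)
    # Rescaling a column by a scalar keeps it orthogonal to u, so the
    # ablation survives the norm restoration.
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-12))
```

Because column-wise rescaling preserves orthogonality to the ablated direction, the resulting weights still produce outputs with no component along the refusal direction, while each column keeps its original magnitude.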