Nemo-Instruct-2407-MPOA-v4-12B
MPOA (Magnitude-Preserving Orthogonalized Ablation, AKA norm-preserving biprojected abliteration) has been applied to layers 10-34 of this model, to both the mlp.down_proj.weight and self_attn.o_proj.weight streams.
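As a rough illustration of the idea (not the exact MPOA implementation used here), orthogonalized ablation removes a refusal direction from a weight matrix's output space, and the magnitude-preserving step restores the original column norms afterward. The function name, shapes, and rescaling scheme below are illustrative assumptions; estimating the refusal direction from harmful/harmless activation contrasts is not shown.

```python
import numpy as np

def mpoa_ablate_sketch(W, r):
    """Sketch of magnitude-preserving orthogonalized ablation.

    W: weight matrix of shape (d_out, d_in), e.g. a down_proj or o_proj.
    r: refusal direction in the output space (d_out,), typically estimated
       from the difference of mean activations on contrast prompt sets.
    """
    r = r / np.linalg.norm(r)
    # Record per-column norms before ablation.
    col_norms = np.linalg.norm(W, axis=0, keepdims=True)
    # Orthogonalize: remove the component of each column along r.
    W_abl = W - np.outer(r, r @ W)
    # Restore original column magnitudes (columns nearly parallel to r
    # would blow up here; a real implementation needs to handle that).
    new_norms = np.linalg.norm(W_abl, axis=0, keepdims=True)
    return W_abl * (col_norms / np.maximum(new_norms, 1e-12))
```

Because each column is only rescaled by a scalar after projection, the result stays orthogonal to the refusal direction while keeping the per-column weight magnitudes of the original matrix.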
Compliance was not maximized for this model. With respect to some safety refusals, the model appears to sit near an edge of chaos, which should make it suitable for varied text completion.
Both the harmless/baseline set and the harmful/contrast set contain Chinese, English, and French prompts. English text generation remains coherent.
An experiment with adding German prompts to both sets, for a fourth language in total, resulted in a model with excessive repetition, and will not be released.