---
base_model:
- meta-llama/Llama-3.3-70B-Instruct
- huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
- meta-llama/Llama-3.1-70B
library_name: transformers
tags:
- mergekit
- merge
---
# 70B_unstruct
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317d4867690c5b55e61ce3d/Rve1WXkbTZyGWnCHXDmMJ.png)
This is an attempt to take Llama 3.3 Instruct and peel back some of its instruction-following overtraining and positivity. With 3.3 Instruct as the base model, merging in 3.1 base subtracts out roughly 80% of the largest changes between 3.1 and 3.3 at 0.5 weight; in addition, the abliteration of the refusal pathway is added (or subtracted) back into the model at 0.5 weight.
In theory, this yields a model with roughly 75% of refusals abliterated and roughly 70% of its instruction-following capability intact, partially healed, and with its instruct overtuning rolled back by about 50%.
Key points:
* The Δw task vector of the abliterated model is exclusively the opposed refusal vectors.
* The Δw task vector of 3.1 base is the reversal of all of the instruct-related tuning applied to move from 3.1 base to 3.3 Instruct.
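The weighted task-vector arithmetic described above can be sketched as plain delta arithmetic (ignoring DELLA's pruning step; the toy vectors below are stand-ins for real model weights):

```python
import numpy as np

def task_vector_merge(base, donors, weights):
    """Linear task-vector merge: base + sum_i w_i * (donor_i - base).

    Illustration only -- omits DELLA's magnitude-based dropping
    and rescaling, which is applied to each delta before summing.
    """
    merged = base.copy()
    for donor, w in zip(donors, weights):
        merged += w * (donor - base)
    return merged

# Toy 1-D "weights": base = 3.3 Instruct; donors = 3.1 base and the abliterated model.
base = np.array([1.0, 2.0, 3.0])
donor_31 = np.array([0.0, 2.0, 4.0])   # stands in for Llama-3.1-70B
donor_abl = np.array([1.0, 1.0, 3.0])  # stands in for the abliterated finetune
merged = task_vector_merge(base, [donor_31, donor_abl], [0.5, 0.5])
```

Each donor's delta against 3.3 Instruct is scaled by 0.5 and summed onto the base, which is why the description above frames the result as partial rollbacks rather than full replacements.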
### Merge Method
This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method using [meta-llama/Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) as a base.
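DELLA's core step is magnitude-aware stochastic pruning of each task vector before the weighted sum: larger-magnitude entries get a higher keep probability, and survivors are rescaled to keep the delta unbiased in expectation. A toy sketch, assuming a linear rank-to-probability ramp (`density` and `epsilon` mirror the config fields below, but this is not mergekit's exact implementation):

```python
import numpy as np

def della_drop(delta, density=0.7, epsilon=0.2, rng=None):
    """Toy sketch of DELLA-style magnitude-based stochastic pruning.

    Entries are ranked by |delta|; keep probabilities ramp from
    density - epsilon/2 (smallest) to density + epsilon/2 (largest).
    Kept entries are divided by their keep probability so the
    pruned delta matches the original in expectation.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = delta.size
    ranks = np.argsort(np.argsort(np.abs(delta)))  # 0 = smallest magnitude
    p_keep = density - epsilon / 2 + epsilon * ranks / max(n - 1, 1)
    keep = rng.random(n) < p_keep
    out = np.zeros_like(delta)
    out[keep] = delta[keep] / p_keep[keep]         # rescale survivors
    return out

delta = np.linspace(-1.0, 1.0, 8)                  # toy task vector
pruned = della_drop(delta, density=0.7, epsilon=0.2)
```

The `density`/`epsilon` pairs in the config below control this step per donor model: a higher density keeps more of the delta, while epsilon widens the gap between high- and low-magnitude entries.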
### Models Merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The following models were included in the merge:
* [huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned](https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned)
* [meta-llama/Llama-3.1-70B](https://huggingface.co/meta-llama/Llama-3.1-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
    parameters:
      density: 0.7
      epsilon: 0.2
      weight: 0.5
  - model: meta-llama/Llama-3.1-70B
    parameters:
      density: 0.8
      epsilon: 0.1
      weight: 0.5
  - model: meta-llama/Llama-3.3-70B-Instruct
merge_method: della
base_model: meta-llama/Llama-3.3-70B-Instruct
tokenizer_source: meta-llama/Llama-3.3-70B-Instruct
parameters:
  normalize: false
  int8_mask: false
  lambda: 1.0
dtype: float32
out_dtype: bfloat16
```
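Assuming the YAML above is saved as `merge_config.yml` (a hypothetical filename), a merge like this is typically produced with mergekit's `mergekit-yaml` CLI; adjust the output path and hardware flags to your environment:

```shell
pip install mergekit
# Writes the merged model (weights, config, tokenizer) to ./70B_unstruct
mergekit-yaml merge_config.yml ./70B_unstruct --cuda
```

Note that merging 70B models at `dtype: float32` requires substantial disk and memory headroom.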