---
# 70B_unstruct
This is an attempt to take Llama 3.3 Instruct and peel back some of its instruction-following overtraining and positivity. Using 3.3 Instruct as the base model and merging in both the 3.1 base and the 3.3 abliteration, this merge subtracts out ~80% of the largest changes between 3.1 and 3.3 at 0.5 weight. In addition, the abliteration of the refusal pathway is added (or subtracted) back into the model at 0.5 weight.
In theory this creates a ~75% refusal-abliterated model with ~70% of its instruction-following capability intact and somewhat healed, on top of having its instruct overtuning rolled back ~50%.
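
For concreteness, the recipe above can be read as per-tensor task arithmetic. The snippet below is a minimal PyTorch sketch of that arithmetic, not the actual merge script: the function name `merge_tensor`, the top-magnitude masking rule, and the default density/weight values are illustrative assumptions (in practice a tool such as mergekit would perform the merge across all tensors).

```python
import torch

def merge_tensor(w_33_instruct: torch.Tensor,
                 w_31_base: torch.Tensor,
                 w_33_ablit: torch.Tensor,
                 density: float = 0.8,
                 weight: float = 0.5) -> torch.Tensor:
    """Hypothetical per-tensor version of the merge described above."""
    # Delta introduced by the 3.3 instruct tuning, relative to the 3.1 base.
    instruct_delta = w_33_instruct - w_31_base

    # Keep only the largest `density` fraction of that delta by magnitude.
    flat = instruct_delta.abs().flatten()
    k = max(1, int(density * flat.numel()))
    threshold = flat.kthvalue(flat.numel() - k + 1).values
    mask = instruct_delta.abs() >= threshold

    # Delta introduced by abliterating the refusal pathway out of 3.3.
    ablation_delta = w_33_ablit - w_33_instruct

    return (w_33_instruct
            - weight * mask * instruct_delta  # roll back instruct overtuning
            + weight * ablation_delta)        # re-apply the refusal ablation
```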