Update README.md
README.md CHANGED

@@ -16,7 +16,7 @@ The following mergekit config was used:
 ```
 slices:
   - sources:
-      - model: ./
+      - model: ./Poppy_Porpoise-DADA-8B
        layer_range: [0, 32]
   - sources:
      - model: ./Llama-3-8B-Instruct-DADA
@@ -50,7 +50,7 @@ Unlike in the case of Libra-19B this models moral alignment seems very much inta
 
 In order to get the best results from this model you should uncheck "skip special tokens" on your front-end and add "<|eot_id|>" to your custom stopping strings.
 
-It has been tested with a
+It has been tested with a number of different Llama-3 prompt templates and seems to work well.
 
 It regained its base assistant personality during the retraining process, however, using assistant style prompt templates and assistant cards in SillyTavern gives it fairly interesting replies.
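For reference, the mergekit slice fragment as it reads after this commit — reconstructed from the new side of the first hunk only; the rest of the config (merge method, dtype, and the second slice's layer range) lies outside the diff and is not shown:

```yaml
slices:
  - sources:
      - model: ./Poppy_Porpoise-DADA-8B   # was "./" before this commit
        layer_range: [0, 32]
  - sources:
      - model: ./Llama-3-8B-Instruct-DADA
```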