Update README.md

README.md CHANGED

@@ -1,3 +1,17 @@
An experiment with gradient merges using [the following script](https://github.com/TehVenomm/LM_Transformers_BlockMerge), with [Chronos](https://huggingface.co/elinas/chronos-13b) as its primary model, augmented by [Hermes](https://huggingface.co/NousResearch/Nous-Hermes-13b) and [Wizard-Vicuna Uncensored](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-HF).

Chronos is a wonderful model, though it doesn't feel very smart. Hermes and Wizard-Vicuna have been merged in gradually, primarily in the higher layers (10+), in an attempt to rectify some of this behaviour without affecting Chronos' lengthy replies.

I'd say the end product is about 60% Chronos, with 20% Hermes and 20% Wizard added in gradually increasing amounts.
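
For intuition, a gradient merge of this shape can be expressed as per-layer linear interpolation of the weights. The sketch below is only illustrative, under assumptions: a linear ramp starting at layer 10, an even Hermes/Wizard split, and 40 layers (as in 13B LLaMA); the actual BlockMerge script's interface and the exact ratios used for this model may differ.

```python
import re

def blend_ratio(layer: int, num_layers: int = 40, start: int = 10,
                max_secondary: float = 0.4) -> float:
    """Secondary-model share for a layer: zero below `start`, then a
    linear ramp up to `max_secondary` at the top layer. Illustrative."""
    if layer < start:
        return 0.0
    return max_secondary * (layer - start) / (num_layers - 1 - start)

def gradient_merge(chronos: dict, hermes: dict, wizard: dict,
                   num_layers: int = 40) -> dict:
    """Blend three state dicts with matching keys, layer by layer."""
    merged = {}
    for name, tensor in chronos.items():
        match = re.search(r"layers\.(\d+)\.", name)
        if match is None:
            # Embeddings, final norm, LM head: keep the primary model.
            merged[name] = tensor
            continue
        share = blend_ratio(int(match.group(1)), num_layers)
        # Split the secondary share evenly between Hermes and Wizard.
        merged[name] = ((1.0 - share) * tensor
                        + (share / 2) * hermes[name]
                        + (share / 2) * wizard[name])
    return merged
```

A ramp like this keeps the lower layers pure Chronos while letting the other two models weigh in more heavily near the output.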

A Q4_K_M quant has been included for convenience's sake. Happy experimenting!
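
If you run the quant with llama-cpp-python, usage looks roughly like this; the file name below is a placeholder rather than the actual name of the file in this repo:

```python
from llama_cpp import Llama

# Placeholder path; point this at the quant shipped in this repo.
llm = Llama(model_path="./chronos-hermes-13b.q4_K_M.bin", n_ctx=2048)

output = llm(
    "### Instruction:\nWrite a short scene set in a lighthouse.\n### Response:\n",
    max_tokens=256,
    temperature=0.7,
    stop=["### Instruction:"],
)
print(output["choices"][0]["text"])
```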

This model primarily uses Alpaca formatting, so for optimal performance, use the following prompt template:

```
### Instruction:
Your instruction or question here.
### Response:
```
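
In code, wrapping user input in this template could look like the following; the helper function is ours for illustration, not part of the model or any library:

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca template shown above."""
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(alpaca_prompt("Summarise the plot of Hamlet in two sentences."))
```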

---
license: other
---