Behold, one of the first fine-tunes of Mistral's 7B 0.2 base model. SatoshiN was trained for 4 epochs at a 2e-4 learning rate (cosine schedule) on a diverse custom dataset, followed by a polishing round over the same dataset at a 1e-4 linear learning rate.

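For reference, a minimal sketch of how those two passes could be expressed with Hugging Face `TrainingArguments`; only the epoch count, learning rates, and scheduler types come from the description above, while the paths, batch size, and polishing epoch count are placeholders, not the actual training recipe.

```python
from transformers import TrainingArguments

# First pass: 4 epochs, 2e-4 peak learning rate with a cosine schedule.
first_pass = TrainingArguments(
    output_dir="satoshin-pass1",       # hypothetical path
    num_train_epochs=4,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,     # assumed, not stated in the card
)

# Polishing round: same data, 1e-4 learning rate with a linear schedule.
polish_pass = TrainingArguments(
    output_dir="satoshin-polish",      # hypothetical path
    num_train_epochs=1,                # assumed single polishing epoch
    learning_rate=1e-4,
    lr_scheduler_type="linear",
)
```
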
It's a nice assistant that isn't afraid to ask questions and gather additional information before responding to user prompts.

I have found varying success using instruction formats such as Alpaca, ChatML, and Mistral. The custom training was performed on raw text with the idea that the model might acquire better generalization skills.

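As an illustration, here is one way to try the Mistral-style instruction format with `transformers`; the repo id and generation settings below are placeholders, and Alpaca or ChatML would simply swap the prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SatoshiN"  # placeholder: substitute the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Mistral-style instruction wrapping; Alpaca or ChatML would use a different template.
prompt = "[INST] What should I consider before buying a used GPU? [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
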
Total model size increased from 7.24B to 7.35B parameters after merging a 0.5GB LoRA via PEFT.

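A minimal sketch of that kind of LoRA merge with `peft`, assuming hypothetical paths for the base checkpoint, the adapter, and the output directory:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Paths are placeholders for the 7B 0.2 base checkpoint and the trained adapter.
base = AutoModelForCausalLM.from_pretrained("mistral-7b-v0.2-base")
merged = PeftModel.from_pretrained(base, "satoshin-lora").merge_and_unload()
merged.save_pretrained("SatoshiN")
```
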
|                     | SatoshiN | Base-Model |
|---------------------|----------|------------|
| Wikitext Perplexity | 6.27     | 5.4        |

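For context, perplexity figures like these are typically computed with a sliding-window pass over the WikiText-2 test split; the sketch below assumes a 2048-token window and 512-token stride, so it may not reproduce the exact numbers above.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "SatoshiN"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto").eval()

# Concatenate the WikiText-2 test split and score it with a sliding window.
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tokenizer(text, return_tensors="pt").input_ids
window, stride = 2048, 512
nlls = []
for start in range(0, ids.size(1) - window, stride):
    chunk = ids[:, start : start + window].to(model.device)
    labels = chunk.clone()
    labels[:, :-stride] = -100  # only score the last `stride` tokens of each window
    with torch.no_grad():
        nlls.append(model(chunk, labels=labels).loss)
print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```
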