---
license: cc-by-2.0
datasets:
- teknium/OpenHermes-2.5
language:
- en
tags:
- finance
- legal
- biology
- art
---

Behold, one of the first fine-tunes of Mistral's 7B v0.2 base model. SatoshiN was trained for 4 epochs on a diverse custom dataset, combined with a single sanitization pass of teknium/OpenHermes-2.5.
It's a nice assistant that isn't afraid to ask questions and gather additional information before responding to user prompts.

I have found success with a variety of instruction formats such as Alpaca, ChatML, and Mistral. The custom portion of the training was performed on raw text, with the idea that the model might acquire better generalization skills.
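
The sketch below shows one common rendering of each of those three formats; the placeholder fields and exact spacing are illustrative assumptions, not templates prescribed by this model.

```python
# Common renderings of the three instruction formats named above.
# The {placeholder} fields are illustrative, not required prompt text.

alpaca = """### Instruction:
{instruction}

### Input:
{input}

### Response:
"""

chatml = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
"""

mistral = "<s>[INST] {user} [/INST]"

# Example usage: fill a template before sending it to the model.
prompt = mistral.format(user="Summarize the plot of Moby-Dick in one sentence.")
```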

Total model size has increased from 7.24B to 7.35B parameters after merging a 0.5 GB LoRA adapter via PEFT.
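
For anyone reproducing the merge, a minimal PEFT sketch follows; both paths are placeholders standing in for the actual base model and adapter, not real repository IDs.

```python
# Minimal sketch: fold a LoRA adapter into the base weights with PEFT.
# Both paths are placeholders; point them at the real base model and adapter.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/mistral-7b-v0.2-base")
model = PeftModel.from_pretrained(base, "path/to/satoshin-lora")

# merge_and_unload() bakes the adapter into the base weights and removes the
# PEFT wrappers, leaving a standalone checkpoint that can be saved and shared.
merged = model.merge_and_unload()
merged.save_pretrained("satoshin-merged")
```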

| Metric | SatoshiN | Base model |
| --- | --- | --- |
| WikiText perplexity | 6.27 | 5.4 |
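
For context, a WikiText perplexity figure like the one above is typically computed with a strided sliding window over the test split. The sketch below assumes the WikiText-2 raw config, a 1024-token window, and a 512-token stride; none of those choices are confirmed by this card.

```python
# Sketch of a strided sliding-window perplexity evaluation on WikiText-2.
# The model path, dataset config, and window/stride sizes are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/satoshin"  # placeholder
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
model.eval()

test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
enc = tok("\n\n".join(test["text"]), return_tensors="pt")
seq_len = enc.input_ids.size(1)

max_length, stride = 1024, 512
nlls, prev_end = [], 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end      # tokens not yet scored by an earlier window
    input_ids = enc.input_ids[:, begin:end].to(model.device)
    targets = input_ids.clone()
    targets[:, :-trg_len] = -100  # mask the overlapping context tokens
    with torch.no_grad():
        nlls.append(model(input_ids, labels=targets).loss)
    prev_end = end
    if end == seq_len:
        break

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```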