---
license: cc-by-nc-4.0
pipeline_tag: text-generation
---

### Current State

Well, I've reduced per-step loss by 0.36 (from 1.57 to 1.21) in just a third of an epoch. For comparison, the (meh) 13b Mistral glue LoRA reduced per-step loss by 0.37 (2.16 to 1.79) over a full 4 epochs!
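
To put those two curves on the same footing, here's the comparison normalized to a per-epoch rate. This is just arithmetic on the figures quoted above, not a claim about final model quality:

```python
# Per-epoch rate of per-step loss reduction, using only the numbers quoted above.
this_run = (1.57 - 1.21) / (1 / 3)  # 0.36 in a third of an epoch -> ~1.08 per epoch
glue_13b = (2.16 - 1.79) / 4        # 0.37 over 4 epochs          -> ~0.09 per epoch
print(f"{this_run / glue_13b:.1f}x faster per-epoch loss reduction")  # ~11.7x
```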

EDIT: The first 2 evals both came in at < 1, MUCH better than the 13b attempt. Verdict: Mistral (or 7Bs in general, I'd guess) can't meaningfully survive more than one cut. Per-step loss reduced by 0.47 @ 1 epoch.

EDIT 2: Halfway there. Eval loss < 0.85 for the last 2 evals, which is promising. Per-step loss is down to ~1.07, a reduction of ~32% from the starting 1.57!

EDIT 3: 80% done. The curve has flattened considerably, so 3 epochs seems like it was the right call. Eval loss is down to 0.81 and per-step loss is down to 0.93. Can't wait to test!

EDIT 4: Done! Testing time.

### Dataset

The 11b glue consists of:
- The entirety of HF No Robots.
- The entirety of TinyPixel/orca-mini.
- Enough of the GPT-4 generated Alpaca dataset (randomly chosen) to make it a roughly even three-way split.
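
For reference, here's a minimal sketch of how a roughly even three-way mix like this could be assembled with the Hugging Face `datasets` library. The repo IDs for No Robots and the GPT-4 Alpaca data, the split names, and the column-flattening step are assumptions for illustration, not the exact recipe used for this run.

```python
from datasets import load_dataset, concatenate_datasets

# Repo IDs and split names are assumptions; only TinyPixel/orca-mini is named in the card.
no_robots = load_dataset("HuggingFaceH4/no_robots", split="train")
orca_mini = load_dataset("TinyPixel/orca-mini", split="train")
alpaca    = load_dataset("vicgalle/alpaca-gpt4", split="train")

# Use all of No Robots and orca-mini; randomly subsample the much larger Alpaca
# set so that each source contributes roughly a third of the final mix.
target = (len(no_robots) + len(orca_mini)) // 2
alpaca = alpaca.shuffle(seed=42).select(range(min(target, len(alpaca))))

def to_text(example):
    """Flatten the differing schemas into a single 'text' field (illustrative only)."""
    if "messages" in example:  # No Robots-style chat transcripts
        text = "\n".join(f"{m['role']}: {m['content']}" for m in example["messages"])
    else:                      # Alpaca / orca-style instruction records
        instruction = example.get("instruction", "")
        inp = example.get("input", "") or ""
        output = example.get("output", example.get("response", ""))
        text = f"{instruction}\n{inp}\n{output}".strip()
    return {"text": text}

parts = [
    ds.map(to_text, remove_columns=ds.column_names)
    for ds in (no_robots, orca_mini, alpaca)
]
glue = concatenate_datasets(parts).shuffle(seed=42)
print(len(glue), glue[0]["text"][:200])
```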
|