Update README.md
README.md CHANGED
```diff
@@ -8,14 +8,34 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- chimbiwide/RolePlay-NPC
 ---
 
-#
+# Gemma3NPC-it-beta
 
-
-- **License:** apache-2.0
-- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
+#### A test model with less conservative training parameters
 
-
+As mentioned in our [original article](https://huggingface.co/blog/chimbiwide/gemma3npc), we employed very conservative training parameters for Gemma3NPC.
 
-
+Ever since then, we have wanted to test how the model performs when the training parameters are made less conservative.
+
+So we present ***Gemma3NPC-it-beta***.
+
+Check out our training notebook [here](https://github.com/chimbiwide/Gemma3NPC/blob/main/Training/Gemma3NPC_Instruct_Beta.ipynb).
+
+---
+
+#### Training parameters compared to `Gemma3NPC-it`
+
+| Parameter | Gemma3NPC-it | Gemma3NPC-it-beta |
+| --- | --- | --- |
+| Learning Rate | 2e-5 | 2.5e-5 (+25%) |
+| Warmup Steps | 800 | 100 |
+| Gradient Clipping | 0.4 | 1.0 |
+
+---
+
+Here is a graph of the Step Training Loss, logged every 10 steps:
+
+
```
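To make the comparison concrete, here is a minimal sketch of how the three changed values in the table above could be expressed with TRL's `SFTConfig`, the configuration class that `SFTTrainer` consumes in Unsloth-style fine-tuning notebooks. This is not the exact notebook configuration: only `learning_rate`, `warmup_steps`, and `max_grad_norm` come from the table, and everything else (the output path, the remaining defaults) is an illustrative assumption.

```python
# Minimal sketch: the Gemma3NPC-it-beta hyperparameters from the table,
# expressed as a TRL SFTConfig. Only the three changed values are taken
# from the model card; all other settings here are assumptions.
from trl import SFTConfig

beta_config = SFTConfig(
    output_dir="outputs",  # hypothetical output path
    learning_rate=2.5e-5,  # +25% over Gemma3NPC-it's 2e-5
    warmup_steps=100,      # down from 800 in Gemma3NPC-it
    max_grad_norm=1.0,     # gradient clipping, relaxed from 0.4
)
```

Note that `max_grad_norm` is the standard `transformers`/TRL name for the gradient-clipping threshold listed in the table.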