Update README.md
README.md
CHANGED
```diff
@@ -8,14 +8,20 @@ tags:
 license: apache-2.0
 language:
 - en
+datasets:
+- chimbiwide/pippa_filtered
 ---
 
-#
-
-
-- **License:** apache-2.0
-- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
-
-
-
-[
+# Gemma3NPC-filtered-float16
+
+#### The "filtered" model that delivers censored general role-playing at great speed.
+
+We trained this model as a rank-12 LoRA adapter for one epoch over `pippa_filtered` on a 40 GB A100 in Google Colab. For this run we used a learning rate of `2e-5`, a per-device batch size of 1 with 16 gradient accumulation steps (an effective batch size of 16), a cosine learning rate schedule with a 150-step warmup, and gradient clipping at 0.5.
+
+Check out our training notebook [here](https://github.com/chimbiwide/Gemma3NPC/blob/main/Training/Gemma3NPC-Filtered.ipynb).
+
+---
+
+Here is a graph of the Step Training Loss, saved every 5 steps:
+
+![Training Loss Graph](https://huggingface.co/Gemma3NPC/Gemma3NPC-filtered-float16/resolve/main/assets/loss_graph.png)
```
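For reference, the hyperparameters in the new README map onto TRL's `SFTConfig` roughly as in the sketch below. This is a minimal, hedged reconstruction assuming the Unsloth + TRL stack from the linked notebook; only the rank (12), epochs (1), learning rate (`2e-5`), batch size (1), gradient accumulation (16), cosine schedule, 150 warmup steps, and 0.5 gradient clipping come from the README, while `max_seq_length`, `lora_alpha`, dropout, and `dataset_text_field` are assumptions. The notebook remains the authoritative recipe.

```python
# Minimal sketch of the described run (assumptions flagged inline).
from unsloth import FastModel
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Base model named in the old README's "Finetuned from model" field.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit",
    max_seq_length=2048,   # assumption: not stated in the README
    load_in_4bit=True,
)

# Attach the rank-12 LoRA adapter; alpha and dropout are assumed defaults.
model = FastModel.get_peft_model(model, r=12, lora_alpha=12, lora_dropout=0.0)

dataset = load_dataset("chimbiwide/pippa_filtered", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",        # assumption: depends on dataset schema
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,   # effective batch size of 16
        num_train_epochs=1,
        learning_rate=2e-5,
        lr_scheduler_type="cosine",
        warmup_steps=150,
        max_grad_norm=0.5,                # gradient clipping at 0.5
        logging_steps=5,                  # matches the loss curve saved every 5 steps
        output_dir="outputs",
    ),
)
trainer.train()
```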