---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
datasets:
- chimbiwide/ReLe_NPC
---
# GemmaReLe
### A test model for ReLe
![image](https://cdn-uploads.huggingface.co/production/uploads/67d5b5a056a9d31aa0b49687/f15iqICkVSxDfvmbNbD38.png)
(btw, we sort of did)
---
The **Gemma3NPC** series was meant to be a general-purpose video-game RP model.
This time, using the [chimbiwide/ReLe_Synthetic_v1_json](https://huggingface.co/datasets/chimbiwide/ReLe_Synthetic_v1_json) dataset, we trained a model to act specifically as ReLe.
For more information on *Who is ReLe?*, see the dataset README.
***Warning***: This is **NOT** a general-purpose model, and its performance still requires further testing.
Check out our training notebook [here](https://github.com/chimbiwide/Gemma3NPC/blob/main/Training/GemmaReLe.ipynb). A minimal loading sketch follows below.
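---
#### Loading the model
A minimal, untested sketch of chat-style inference with the Transformers `text-generation` pipeline. The repo id `chimbiwide/GemmaReLe-float16`, the prompt, and the generation settings are assumptions rather than excerpts from the training notebook; if the full multimodal Gemma 3n stack is needed, use the `image-text-to-text` pipeline instead.
```python
# Hedged example: assumes the merged float16 checkpoint loads through the
# standard text-generation pipeline and the Gemma chat template.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="chimbiwide/GemmaReLe-float16",  # assumed repo id
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Hi ReLe, what are you up to today?"},
]

out = pipe(messages, max_new_tokens=256)
# The pipeline returns the whole chat history; the last turn is the reply.
print(out[0]["generated_text"][-1]["content"])
```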
---
#### Training parameters compared to `Gemma3NPC-it`
| Parameter | Gemma3NPC-it | GemmaReLe |
| --- | --- | --- |
| Learning Rate | 2e-5 | 2.5e-5 (+25%) |
| Warmup Steps | 800 | 100 |
| Gradient Clipping | 0.4 | 1.0 |
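---
#### Example trainer config
A hedged sketch (not the actual notebook) of how the table's values would plug into a TRL `SFTConfig`; only the learning rate, warmup steps, and gradient clipping come from the table, the remaining fields are illustrative placeholders.
```python
from trl import SFTConfig

training_args = SFTConfig(
    output_dir="gemma-rele",        # placeholder
    learning_rate=2.5e-5,           # +25% over Gemma3NPC-it's 2e-5
    warmup_steps=100,               # down from 800
    max_grad_norm=1.0,              # gradient clipping, up from 0.4
    logging_steps=10,               # loss logged every 10 steps (see chart below)
    per_device_train_batch_size=2,  # placeholder, not from the table
    gradient_accumulation_steps=4,  # placeholder, not from the table
    num_train_epochs=1,             # placeholder, not from the table
)
```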
---
#### Graph of the step training loss, logged every 10 steps:
![chart](https://cdn-uploads.huggingface.co/production/uploads/67d5b5a056a9d31aa0b49687/tpMFpzRY6NGF9txpTAl8k.png)
---
#### Fun Discovery
For the first time, we encountered a whole-number training loss.
![image](https://cdn-uploads.huggingface.co/production/uploads/67d5b5a056a9d31aa0b49687/AOtzf66FgUmfaYJZy1TQ1.png)