---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gemma3n
license: apache-2.0
language:
  - en
datasets:
  - chimbiwide/pippa_filtered
---

# Gemma3NPC-filtered-float16

The "filtered" model that delivers censored general role-playing at great speed.

We trained this model as a rank-12 LoRA adapter for one epoch over pippa_filtered on a 40 GB A100 in Google Colab. The run used a learning rate of 2e-5, a batch size of 1 with 16 gradient accumulation steps (effective batch size 16), a cosine learning rate scheduler with a 150-step warmup, and gradient clipping at 0.5.
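
For reference, here is a minimal sketch of what such a setup might look like with Unsloth and TRL. The hyperparameters mirror the ones listed above; the sequence length, LoRA alpha, and dataset text column are assumptions rather than values taken from the actual run:

```python
# Hedged sketch of the training setup described above (not the exact script).
from unsloth import FastModel
from datasets import load_dataset
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit",
    max_seq_length=2048,  # assumed; not stated in this card
    load_in_4bit=True,
)

# Attach a rank-12 LoRA adapter, as described above.
model = FastModel.get_peft_model(
    model,
    r=12,
    lora_alpha=12,  # assumed; not stated in this card
)

dataset = load_dataset("chimbiwide/pippa_filtered", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # newer TRL versions use processing_class= instead
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",       # assumed column name
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # effective batch size 16
        num_train_epochs=1,
        learning_rate=2e-5,
        lr_scheduler_type="cosine",
        warmup_steps=150,
        max_grad_norm=0.5,               # gradient clipping at 0.5
        logging_steps=5,                 # loss logged every 5 steps, as graphed below
        output_dir="outputs",
    ),
)
trainer.train()
```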

Check out our training notebook here.


Here is a graph of the step training loss, logged every 5 steps:

*(step training loss graph)*
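
A curve like this one can be reproduced from the trainer's log history after `trainer.train()` finishes, assuming the `trainer` object from the sketch above and matplotlib:

```python
# Hedged sketch: plot the per-step training loss that the Trainer logs
# (one entry every 5 steps, per logging_steps=5 above).
import matplotlib.pyplot as plt

history = trainer.state.log_history
steps = [e["step"] for e in history if "loss" in e]
losses = [e["loss"] for e in history if "loss" in e]

plt.plot(steps, losses)
plt.xlabel("Step")
plt.ylabel("Training loss")
plt.title("Step training loss (logged every 5 steps)")
plt.savefig("training_loss.png")
```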