---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: mpl-2.0
language:
- en
datasets:
- acon96/Home-Assistant-Requests
---

# HomeGem4Bn

- **Developed by:** TitleOS
- **License:** mpl-2.0
- **Finetuned from model:** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

### HomeGem is a first-generation finetune of Gemma 3n E4B IT, intended for use as a conversational LLM controlling your Home Assistant environment. It was finetuned on a single A100 SXM4 using the acon96/Home-Assistant-Requests dataset.
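
As a rough sketch of how a Home Assistant request might be laid out for a Gemma-family model: the `<start_of_turn>`/`<end_of_turn>` markers below are Gemma's standard chat-turn tokens, but the device-state context line is a hypothetical illustration, not the exact schema of the acon96/Home-Assistant-Requests dataset. In practice, `tokenizer.apply_chat_template` handles this formatting for you.

```python
# Sketch: building a Gemma-style single-turn prompt for a smart-home request.
# The turn markers are Gemma's standard chat tokens; the device-context format
# here is illustrative only, not the dataset's actual schema.

def build_prompt(user_request: str, device_context: str = "") -> str:
    """Format a single-turn prompt in Gemma's chat layout."""
    context = f"{device_context}\n" if device_context else ""
    return (
        f"<start_of_turn>user\n{context}{user_request}<end_of_turn>\n"
        f"<start_of_turn>model\n"
    )

prompt = build_prompt(
    "Turn off the living room lights.",
    device_context="light.living_room = on",
)
print(prompt)
```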

## Eval Results

```
Step 5: Displaying post-training statistics and evaluation loss...
2797.18 seconds used for training.
Peak reserved memory: 17.799 GB.
Peak memory for training: 8.42 GB.

--- Evaluation Loss Progression ---
Step 20: Eval Loss = 2.0582
Step 40: Eval Loss = 1.6810
Step 60: Eval Loss = 1.5477
Step 80: Eval Loss = 1.4737
Step 100: Eval Loss = 1.4113
Step 120: Eval Loss = 1.3982
Step 140: Eval Loss = 1.4340
Step 160: Eval Loss = 1.4325
Step 180: Eval Loss = 1.4424
Step 200: Eval Loss = 1.4442

--- Summary ---
Initial Eval Loss: 2.0582
Final Eval Loss: 1.4442
Improvement: 0.6140
---------------------------------
```

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)