---
library_name: transformers
base_model: google/gemma-3-270m-it
tags: [solo, fine-tuned, lora, unsloth]
datasets: [GetSoloTech/Code-Reasoning]
pipeline_tag: text-generation
---
<a href="https://hub.getsolo.tech"><img src="https://raw.githubusercontent.com/GetSoloTech/solo-cli/main/media/solo-banner.png" alt="Solo" width="200"></a>
## Model Details
| | |
|---|---|
| **Base Model** | [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it) |
| **Method** | LoRA (PEFT) |
| **Parameters** | 0.27B |
## Training Hyperparameters
| | |
|---|---|
| **Epochs** | 1 |
| **Max Steps** | 100 |
| **Batch Size** | 4 |
| **Gradient Accumulation** | 4 |
| **Learning Rate** | 0.0002 |
| **LoRA r** | 4 |
| **LoRA Alpha** | 4 |
| **Max Sequence Length** | 2048 |
| **Training Duration** | 41m 11s |
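The hyperparameters above can be sketched as a `peft`/`trl` configuration. This is a minimal illustration, not the actual training script: `target_modules` and `output_dir` are assumptions, and any setting not listed in the table is left at its library default.

```python
from peft import LoraConfig
from trl import SFTConfig

# LoRA settings from the table above; target_modules is an assumption,
# not a recorded value from this training run.
lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Trainer settings mirroring the table: 1 epoch capped at 100 steps,
# effective batch size 4 x 4 = 16, LR 2e-4, sequences up to 2048 tokens.
training_args = SFTConfig(
    num_train_epochs=1,
    max_steps=100,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_seq_length=2048,
    output_dir="outputs",
)
```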
## Dataset
[GetSoloTech/Code-Reasoning](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning)
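## Usage

A minimal inference sketch with `transformers`. The model id below is a hypothetical placeholder for this repository, and Gemma-based weights may require accepting the license on the Hub before download:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id -- replace with this repository's id on the Hub.
model_id = "GetSoloTech/gemma-3-270m-code-reasoning"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Gemma instruction-tuned models expect the chat template.
messages = [{"role": "user", "content": "Explain binary search in Python."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```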
---
<sub>Trained with <a href="https://hub.getsolo.tech">Solo</a></sub>