---
library_name: transformers
base_model: Qwen/Qwen3-0.6B
tags: [solo, fine-tuned, lora, unsloth]
datasets: [GetSoloTech/Code-Reasoning]
pipeline_tag: text-generation
---
A LoRA fine-tune of [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) for code reasoning, trained on the [GetSoloTech/Code-Reasoning](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning) dataset.

## Model Details
| Attribute | Value |
|---|---|
| **Base Model** | [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) |
| **Method** | LoRA (PEFT) |
| **Parameters** | 0.6B |
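LoRA keeps the base weights frozen and learns a low-rank update, `W' = W + (alpha / r) * (B @ A)`. With `r = 4` and `alpha = 4` as in this card, the scaling factor is 1.0. A toy sketch with small, made-up matrices (values are illustrative, not taken from the actual model):

```python
# Toy sketch of a LoRA update (illustrative numbers, not the real weights):
# the adapted weight is W' = W + (alpha / r) * (B @ A).
# With r = 4 and alpha = 4 as in this card, the scaling factor is 1.0.
r, alpha = 4, 4
scaling = alpha / r  # 1.0

d_out, d_in = 3, 3
W = [[1.0 if i == j else 0.0 for j in range(d_in)] for i in range(d_out)]  # frozen base weight
A = [[0.1] * d_in for _ in range(r)]   # (r x d_in) low-rank factor
B = [[0.2] * r for _ in range(d_out)]  # (d_out x r) low-rank factor

# delta = scaling * (B @ A); each entry is sum_k B[i][k] * A[k][j]
delta = [[scaling * sum(B[i][k] * A[k][j] for k in range(r)) for j in range(d_in)]
         for i in range(d_out)]
W_adapted = [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]
print(round(W_adapted[0][0], 4))  # 1.08
```

Only `A` and `B` are trained, which is why a 0.6B-parameter model can be adapted in under nine minutes.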
## Training Hyperparameters
| Hyperparameter | Value |
|---|---|
| **Epochs** | 1 |
| **Max Steps** | 100 |
| **Batch Size** | 4 |
| **Gradient Accumulation** | 4 |
| **Learning Rate** | 0.0002 |
| **LoRA r** | 4 |
| **LoRA Alpha** | 4 |
| **Max Sequence Length** | 2048 |
| **Training Duration** | 8m 56s |
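A quick check of the effective batch size implied by the table above (assuming **Batch Size** is per device and gradients accumulate across steps before each optimizer update):

```python
# Effective batch size implied by the table: per-device batch size
# times gradient-accumulation steps.
batch_size = 4
grad_accum_steps = 4
effective_batch_size = batch_size * grad_accum_steps
print(effective_batch_size)  # 16
```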
## Dataset
[GetSoloTech/Code-Reasoning](https://huggingface.co/datasets/GetSoloTech/Code-Reasoning)
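For inference, Qwen chat models expect ChatML-style prompts. A minimal sketch of building one by hand (the exact template is an assumption here; in real code, prefer the base model tokenizer's own `apply_chat_template`):

```python
# Hand-built ChatML-style prompt in the format Qwen chat models use
# (template shown is an assumption; prefer the tokenizer's chat template
# in real code).
def build_prompt(user_message: str) -> str:
    return (
        "<|im_start|>user\n"
        f"{user_message}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_prompt("Reverse a linked list in Python.")
print(prompt)
```

In practice, load the base model's tokenizer and call `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` instead of hand-building the string.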
---
Trained with Solo