---
viewer: false
tags:
- uv-script
- unsloth
- training
- hf-jobs
- vlm
- fine-tuning
---
# 🦥 Unsloth Training Scripts for HF Jobs
UV scripts for fine-tuning LLMs and VLMs using [Unsloth](https://github.com/unslothai/unsloth) on [HF Jobs](https://huggingface.co/docs/hub/jobs) (on-demand cloud GPUs). UV handles dependency installation automatically, so you can run these scripts directly without any local setup.
These scripts can also be used or adapted by agents to train models for you.
## Prerequisites
- A Hugging Face account
- The [HF CLI](https://huggingface.co/docs/huggingface_hub/main/en/guides/cli) installed and authenticated (`hf auth login`)
- A dataset on the Hub in the appropriate format (see format requirements below). A strong LLM agent can often convert your data into the right format if needed.
## Data Formats
### LLM Fine-tuning (SFT)
Requires conversation data in ShareGPT or similar format:
```python
{
    "messages": [
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."}
    ]
}
```
The script auto-converts common formats (ShareGPT, Alpaca, etc.) via `standardize_data_formats`. See [mlabonne/FineTome-100k](https://huggingface.co/datasets/mlabonne/FineTome-100k) for a working dataset example.
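Conceptually, this standardization maps ShareGPT-style `from`/`value` keys onto the `role`/`content` schema that chat templates expect. A minimal pure-Python sketch of that mapping (the `ROLE_MAP` and `to_role_content` names are illustrative, not Unsloth's API):

```python
# Illustrative sketch of the ShareGPT -> role/content conversion that
# standardize_data_formats performs. Names here are hypothetical.
ROLE_MAP = {"human": "user", "gpt": "assistant", "system": "system"}

def to_role_content(example):
    """Convert one ShareGPT-style record to role/content messages."""
    return {
        "messages": [
            {"role": ROLE_MAP[m["from"]], "content": m["value"]}
            for m in example["messages"]
        ]
    }

record = {
    "messages": [
        {"from": "human", "value": "What is the capital of France?"},
        {"from": "gpt", "value": "The capital of France is Paris."},
    ]
}
print(to_role_content(record)["messages"][0])
# {'role': 'user', 'content': 'What is the capital of France?'}
```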
### VLM Fine-tuning
Requires `images` and `messages` columns:
```python
{
    "images": [<PIL.Image>],  # List of images
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"}
            ]
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "A golden retriever playing fetch in a park."}
            ]
        }
    ]
}
```
See [davanstrien/iconclass-vlm-sft](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) for a working dataset example, and [davanstrien/iconclass-vlm-qwen3-best](https://huggingface.co/davanstrien/iconclass-vlm-qwen3-best) for a model trained with these scripts.
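Before launching a multi-hour job, it can be worth sanity-checking that each record matches this schema. A standalone sketch (the `check_vlm_record` helper is hypothetical, not part of these scripts):

```python
def check_vlm_record(record):
    """Lightweight schema check for one VLM SFT record.
    Illustrative helper only, not part of the training scripts."""
    assert isinstance(record.get("images"), list) and record["images"], \
        "need a non-empty 'images' list"
    roles = [m["role"] for m in record["messages"]]
    assert "user" in roles and "assistant" in roles, \
        "need both a user and an assistant turn"
    # Image placeholders in the messages should match the images list.
    n_placeholders = sum(
        1
        for m in record["messages"]
        for part in m["content"]
        if part.get("type") == "image"
    )
    assert n_placeholders == len(record["images"]), \
        "image placeholder count does not match number of images"
    return True

record = {
    "images": ["<PIL.Image placeholder>"],
    "messages": [
        {"role": "user", "content": [
            {"type": "image"},
            {"type": "text", "text": "What's in this image?"},
        ]},
        {"role": "assistant", "content": [
            {"type": "text", "text": "A golden retriever playing fetch."},
        ]},
    ],
}
print(check_vlm_record(record))  # True
```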
### Continued Pretraining
Any dataset with a text column:
```python
{"text": "Your domain-specific text here..."}
```
Use `--text-column` if your column has a different name.
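`--text-column` simply tells the script which field to read; conceptually it is equivalent to renaming the column to `text` before training. A toy sketch of that normalization (pure Python, not the script's actual loader):

```python
def normalize_text_column(rows, text_column="text"):
    """Map an arbitrary text column onto the 'text' key the trainer reads.
    Illustrative only; the real scripts handle this via --text-column."""
    return [{"text": row[text_column]} for row in rows]

rows = [{"content": "Your domain-specific text here..."}]
print(normalize_text_column(rows, text_column="content"))
# [{'text': 'Your domain-specific text here...'}]
```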
## Usage
View available options for any script:
```bash
uv run https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py --help
```
### LLM fine-tuning
Fine-tune [LFM2.5-1.2B-Instruct](https://huggingface.co/LiquidAI/LFM2.5-1.2B-Instruct), a compact and efficient text model from Liquid AI:
```bash
hf jobs uv run \
https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
--flavor a10g-small --secrets HF_TOKEN --timeout 4h \
-- --dataset mlabonne/FineTome-100k \
--num-epochs 1 \
--eval-split 0.2 \
--output-repo your-username/lfm-finetuned
```
### VLM fine-tuning
```bash
hf jobs uv run \
https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-qwen3-vl.py \
--flavor a100-large --secrets HF_TOKEN \
-- --dataset your-username/dataset \
--trackio-space your-username/trackio \
--output-repo your-username/my-model
```
### Continued pretraining
```bash
hf jobs uv run \
https://huggingface.co/datasets/unsloth/jobs/raw/main/continued-pretraining.py \
--flavor a100-large --secrets HF_TOKEN \
-- --dataset your-username/domain-corpus \
--text-column content \
--max-steps 1000 \
--output-repo your-username/domain-llm
```
### With Trackio monitoring
```bash
hf jobs uv run \
https://huggingface.co/datasets/unsloth/jobs/raw/main/sft-lfm2.5.py \
--flavor a10g-small --secrets HF_TOKEN \
-- --dataset mlabonne/FineTome-100k \
--trackio-space your-username/trackio \
--output-repo your-username/lfm-finetuned
```
## Scripts
| Script | Base Model | Task |
| ------------------------------------------------------ | -------------------- | ----------------------------- |
| [`sft-lfm2.5.py`](sft-lfm2.5.py) | LFM2.5-1.2B-Instruct | LLM fine-tuning (recommended) |
| [`sft-qwen3-vl.py`](sft-qwen3-vl.py) | Qwen3-VL-8B | VLM fine-tuning |
| [`sft-gemma3-vlm.py`](sft-gemma3-vlm.py) | Gemma 3 4B | VLM fine-tuning (smaller) |
| [`continued-pretraining.py`](continued-pretraining.py) | Qwen3-0.6B | Domain adaptation |
## Common Options
| Option | Description | Default |
| ------------------------- | -------------------------------------- | ------------ |
| `--dataset` | HF dataset ID | _required_ |
| `--output-repo` | Where to save trained model | _required_ |
| `--max-steps` | Number of training steps | 500 |
| `--num-epochs` | Train for N epochs instead of steps | - |
| `--eval-split` | Fraction for evaluation (e.g., 0.2) | 0 (disabled) |
| `--batch-size` | Per-device batch size | 2 |
| `--gradient-accumulation` | Gradient accumulation steps | 4 |
| `--lora-r` | LoRA rank | 16 |
| `--learning-rate` | Learning rate | 2e-4 |
| `--merge-model` | Upload merged model (not just adapter) | false |
| `--trackio-space` | HF Space for live monitoring | - |
| `--run-name` | Custom name for Trackio run | auto |
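With the defaults above, the effective batch size per device is `--batch-size` times `--gradient-accumulation`. A quick check (the 100k dataset size below is hypothetical):

```python
batch_size = 2              # --batch-size default
gradient_accumulation = 4   # --gradient-accumulation default
effective_batch = batch_size * gradient_accumulation
print(effective_batch)  # 8

# Optimizer steps per epoch for a hypothetical 100k-example dataset:
dataset_size = 100_000
steps_per_epoch = dataset_size // effective_batch
print(steps_per_epoch)  # 12500
```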
## Tips
- Use `--max-steps 10` to verify everything works before a full run
- `--eval-split 0.1` helps detect overfitting
- Run `hf jobs hardware` to see GPU pricing (A100-large ~$2.50/hr, L40S ~$1.80/hr)
- Add `--streaming` for very large datasets
- First training step may take a few minutes (CUDA kernel compilation)
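Using the approximate rates above, a rough upper bound on cost is the flavor's hourly rate times the `--timeout` you set (a job that finishes early costs less):

```python
# Rough worst-case cost estimate: hourly rate x timeout.
# Rates are the approximate figures from the tips above.
rates = {"a100-large": 2.50, "l40s": 1.80}  # USD/hour, approximate
timeout_hours = 4  # e.g. --timeout 4h

worst_case = rates["a100-large"] * timeout_hours
print(f"${worst_case:.2f}")  # $10.00
```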
## Links
- [HF Jobs Quickstart](https://huggingface.co/docs/hub/jobs-quickstart)
- [Unsloth Documentation](https://docs.unsloth.ai/)
- [UV Scripts Guide](https://docs.astral.sh/uv/guides/scripts/)