---
viewer: false
tags:
- uv-script
- training
- unsloth
- streaming
- fine-tuning
- llm
---
# Streaming LLM Training with Unsloth
Train on massive datasets without downloading anything - data streams directly from the Hub.
## 🦥 Latin LLM Example
Teaches Qwen Latin using 1.47M texts from FineWeb-2, streamed directly from the Hub.
**Blog post:** [Train on Massive Datasets Without Downloading](https://danielvanstrien.xyz/posts/2026/hf-streaming-unsloth/train-massive-datasets-without-downloading.html)
### Quick Start
```bash
# Run on HF Jobs (recommended - 2x faster streaming)
hf jobs uv run latin-llm-streaming.py \
--flavor a100-large \
--timeout 2h \
--secrets HF_TOKEN \
-- \
--max-steps 500 \
--output-repo your-username/qwen-latin
# Run locally
uv run latin-llm-streaming.py \
--max-steps 100 \
--output-repo your-username/qwen-latin-test
```
### Why Streaming?
- **No disk space needed** - train on TB-scale datasets without downloading
- **Works everywhere** - Colab, Kaggle, HF Jobs
- **Any language** - FineWeb-2 has 90+ languages available
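Under the hood this is plain `datasets` streaming mode. A minimal sketch of the loading step (the `lat_Latn` config name is an assumption; check the FineWeb-2 dataset card for your target language's exact code):
```python
# Minimal sketch: stream FineWeb-2 instead of downloading it.
# The config name "lat_Latn" is an assumption -- see the FineWeb-2
# dataset card for the exact code of your target language.
from datasets import load_dataset

ds = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="lat_Latn",
    split="train",
    streaming=True,  # returns an IterableDataset; nothing is written to disk
)

# Samples arrive lazily over the network as the trainer consumes them.
print(next(iter(ds))["text"][:200])
```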
### Options
| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | `unsloth/Qwen3-0.6B-Base-unsloth-bnb-4bit` | Base model |
| `--max-steps` | 500 | Training steps |
| `--batch-size` | 4 | Per-device batch size |
| `--gradient-accumulation` | 4 | Gradient accumulation steps |
| `--learning-rate` | 2e-4 | Learning rate |
| `--output-repo` | Required | Where to push model |
| `--wandb-project` | None | Wandb project for logging |
### Performance
| Environment | Speed | Why |
|-------------|-------|-----|
| Colab A100 | ~0.36 it/s | Network latency |
| HF Jobs A100 | ~0.74 it/s | Co-located compute |
Streaming is ~2x faster on HF Jobs because compute is co-located with the data.
---
## 🎨 VLM Streaming Fine-tuning (Qwen3-VL)
Fine-tune Vision Language Models on streamed data - ideal for large image-text datasets.
**Script:** `vlm-streaming-sft-unsloth-qwen.py`
**Default model:** `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit`
**Example dataset:** [`davanstrien/iconclass-vlm-sft`](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft)
> **Note:** This script uses pinned dependencies (`transformers==4.57.1`, `trl==0.22.2`) matching the [official Unsloth Qwen3-VL notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_VL_(7B)-Vision.ipynb) for maximum compatibility.
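With uv scripts, those pins live in the script's inline metadata header (PEP 723), which `uv run` reads automatically. A rough sketch - the actual script's header pins more packages than shown here:
```python
# /// script
# dependencies = [
#     "transformers==4.57.1",
#     "trl==0.22.2",
#     "unsloth",  # unpinned here for illustration; see the script for the real list
# ]
# ///
```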
### Quick Start
```bash
# Run on HF Jobs (recommended)
hf jobs uv run \
--flavor a100-large \
--secrets HF_TOKEN \
-- \
https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
--max-steps 500 \
--output-repo your-username/vlm-finetuned
# With Trackio monitoring dashboard
hf jobs uv run \
--flavor a100-large \
--secrets HF_TOKEN \
-- \
https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
--max-steps 500 \
--output-repo your-username/vlm-finetuned \
--trackio-space your-username/trackio
```
### Why Streaming for VLMs?
- **No disk space needed** - images stream directly from Hub
- **Works with massive datasets** - train on datasets larger than your storage
- **Memory efficient** - Unsloth uses ~60% less VRAM
- **2x faster** - Unsloth optimizations for Qwen3-VL
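Images are decoded lazily, one sample at a time, so even image-heavy datasets stream fine. A quick way to peek at the stream (assuming the `images` column decodes to PIL images, per the dataset format section below):
```python
# Sketch: inspect one streamed sample from the example dataset.
from datasets import load_dataset

ds = load_dataset(
    "davanstrien/iconclass-vlm-sft",
    split="train",
    streaming=True,  # images are decoded per sample, never all at once
)

sample = next(iter(ds))
print(sample["messages"])        # conversation turns (user/assistant)
print(sample["images"][0].size)  # PIL image, decoded on access
```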
### Verified Performance
Tested on HF Jobs with A100-80GB:
| Setting | Value |
|---------|-------|
| Model | Qwen3-VL-8B (4-bit) |
| Dataset | iconclass-vlm-sft |
| Speed | ~3s/step |
| 50 steps | ~3 minutes |
| Starting loss | 4.3 |
| Final loss | ~0.85 |
### Options
| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit` | Base VLM model |
| `--dataset` | `davanstrien/iconclass-vlm-sft` | Dataset with images + messages |
| `--max-steps` | 500 | Training steps (required for streaming) |
| `--batch-size` | 2 | Per-device batch size |
| `--gradient-accumulation` | 4 | Gradient accumulation steps |
| `--learning-rate` | 2e-4 | Learning rate |
| `--lora-r` | 16 | LoRA rank |
| `--lora-alpha` | 16 | LoRA alpha (same as r per Unsloth notebook) |
| `--output-repo` | Required | Where to push model |
| `--trackio-space` | None | HF Space for Trackio dashboard |
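The LoRA flags map onto Unsloth's vision API roughly as follows. This is a sketch following the Unsloth vision notebooks, not the script's actual source; treat the exact flag set as an assumption:
```python
# Sketch: how --lora-r / --lora-alpha translate to Unsloth's vision API.
from unsloth import FastVisionModel

model, processor = FastVisionModel.from_pretrained(
    "unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit",
    load_in_4bit=True,
)

model = FastVisionModel.get_peft_model(
    model,
    r=16,           # --lora-r
    lora_alpha=16,  # --lora-alpha (kept equal to r per the notebook)
    finetune_vision_layers=True,
    finetune_language_layers=True,
)
```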
### Dataset Format
The script works with **any dataset** that has `images` and `messages` columns in the standard VLM conversation format:
```python
{
"images": [<PIL.Image>], # Single image or list of images
"messages": [
{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image"}]},
{"role": "assistant", "content": [{"type": "text", "text": "The image shows..."}]}
]
}
```
**Compatible datasets:**
- [`davanstrien/iconclass-vlm-sft`](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) - Art iconography classification
- Any dataset following the [Unsloth VLM format](https://docs.unsloth.ai/basics/vision-finetuning)
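If your source dataset stores raw image/caption columns instead, a small `map` can reshape it on the fly while still streaming. A hypothetical sketch - the `image` and `caption` column names and the prompt text are assumptions about your data:
```python
# Hypothetical sketch: reshape an image/caption dataset into the
# images + messages format above. The "image" and "caption" column
# names are assumptions about your source data.
from datasets import load_dataset

def to_conversation(example):
    return {
        "images": [example["image"]],
        "messages": [
            {"role": "user", "content": [
                {"type": "image"},
                {"type": "text", "text": "Describe this image"},
            ]},
            {"role": "assistant", "content": [
                {"type": "text", "text": example["caption"]},
            ]},
        ],
    }

ds = load_dataset("your-username/your-captions", split="train", streaming=True)
ds = ds.map(to_conversation, remove_columns=["caption"])
```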
### Calculating Steps from Dataset Size
Since streaming datasets don't expose their length, use this formula:
```
steps = dataset_size / (batch_size * gradient_accumulation)
```
For example, with 10,000 samples, batch_size=2, gradient_accumulation=4:
```
steps = 10000 / (2 * 4) = 1250 steps for 1 epoch
```
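The same arithmetic as a small helper, if you'd rather compute it in Python (illustrative, not part of the script):
```python
# Illustrative helper: derive --max-steps for a target number of epochs.
def max_steps(dataset_size: int, batch_size: int, grad_accum: int,
              epochs: float = 1.0) -> int:
    effective_batch = batch_size * grad_accum  # samples consumed per step
    return int(dataset_size * epochs // effective_batch)

print(max_steps(10_000, batch_size=2, grad_accum=4))  # -> 1250
```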
---
## 🚀 Running on HF Jobs
```bash
# Basic usage
hf jobs uv run latin-llm-streaming.py --flavor a100-large --secrets HF_TOKEN
# With timeout for long training
hf jobs uv run latin-llm-streaming.py --flavor a100-large --timeout 2h --secrets HF_TOKEN
# Pass script arguments after --
hf jobs uv run latin-llm-streaming.py --flavor a100-large -- --max-steps 1000 --batch-size 8
```
### Available Flavors
- `a100-large` - 80GB VRAM (recommended)
- `a10g-large` - 24GB VRAM
- `t4-small` - 16GB VRAM
---
## 🔗 Resources
- [Unsloth](https://github.com/unslothai/unsloth) - 2x faster training
- [HF Jobs Docs](https://huggingface.co/docs/huggingface_hub/guides/jobs)
- [Datasets Streaming](https://huggingface.co/docs/datasets/stream)
- [Streaming Datasets Blog](https://huggingface.co/blog/streaming-datasets)
---
Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth)