davanstrien (HF Staff) committed
Commit f362feb · verified · Parent: fb20afa

Update VLM docs with verified Qwen3-VL config and performance

Files changed (1):
  README.md (+100 −0)
README.md CHANGED
@@ -66,6 +66,106 @@ Streaming is ~2x faster on HF Jobs because compute is co-located with the data.

---

## 🎨 VLM Streaming Fine-tuning (Qwen3-VL)

Fine-tune Vision Language Models with streaming datasets - ideal for large image-text datasets.

**Script:** `vlm-streaming-sft-unsloth-qwen.py`
**Default model:** `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit`
**Example dataset:** [`davanstrien/iconclass-vlm-sft`](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft)

> **Note:** This script uses pinned dependencies (`transformers==4.57.1`, `trl==0.22.2`) matching the [official Unsloth Qwen3-VL notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen3_VL_(7B)-Vision.ipynb) for maximum compatibility.

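For reference, uv-run scripts declare pinned dependencies in a PEP 723 inline-metadata block at the top of the file. A sketch of how the pins above would be declared (the actual script header may list additional entries):

```python
# /// script
# dependencies = [
#     "transformers==4.57.1",  # pinned to match the Unsloth Qwen3-VL notebook
#     "trl==0.22.2",
#     # ...plus the script's other dependencies (unsloth, datasets, etc.)
# ]
# ///
```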
### Quick Start

```bash
# Run on HF Jobs (recommended)
hf jobs uv run \
  --flavor a100-large \
  --secrets HF_TOKEN \
  -- \
  https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
  --max-steps 500 \
  --output-repo your-username/vlm-finetuned

# With Trackio monitoring dashboard
hf jobs uv run \
  --flavor a100-large \
  --secrets HF_TOKEN \
  -- \
  https://huggingface.co/datasets/uv-scripts/training/raw/main/vlm-streaming-sft-unsloth-qwen.py \
  --max-steps 500 \
  --output-repo your-username/vlm-finetuned \
  --trackio-space your-username/trackio
```

### Why Streaming for VLMs?

- **No disk space needed** - images stream directly from the Hub
- **Works with massive datasets** - train on datasets larger than your storage
- **Memory efficient** - Unsloth uses ~60% less VRAM
- **2x faster** - Unsloth optimizations for Qwen3-VL

### Verified Performance

Tested on HF Jobs with an A100-80GB:

| Setting | Value |
|---------|-------|
| Model | Qwen3-VL-8B (4-bit) |
| Dataset | iconclass-vlm-sft |
| Speed | ~3s/step |
| 50 steps | ~3 minutes |
| Starting loss | 4.3 |
| Final loss | ~0.85 |

### Options

| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | `unsloth/Qwen3-VL-8B-Instruct-unsloth-bnb-4bit` | Base VLM model |
| `--dataset` | `davanstrien/iconclass-vlm-sft` | Dataset with images + messages |
| `--max-steps` | 500 | Training steps (required for streaming) |
| `--batch-size` | 2 | Per-device batch size |
| `--gradient-accumulation` | 4 | Gradient accumulation steps |
| `--learning-rate` | 2e-4 | Learning rate |
| `--lora-r` | 16 | LoRA rank |
| `--lora-alpha` | 16 | LoRA alpha (same as r, per the Unsloth notebook) |
| `--output-repo` | Required | Where to push the model |
| `--trackio-space` | None | HF Space for the Trackio dashboard |

### Dataset Format

The script works with **any dataset** that has `images` and `messages` columns in the standard VLM conversation format:

```python
{
    "images": [<PIL.Image>],  # one or more PIL images
    "messages": [
        {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image"}]},
        {"role": "assistant", "content": [{"type": "text", "text": "The image shows..."}]}
    ]
}
```

**Compatible datasets:**
- [`davanstrien/iconclass-vlm-sft`](https://huggingface.co/datasets/davanstrien/iconclass-vlm-sft) - art iconography classification
- Any dataset following the [Unsloth VLM format](https://docs.unsloth.ai/basics/vision-finetuning)

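Before launching a long job, it can help to sanity-check a few records against this shape locally. A minimal sketch (`check_example` is an illustrative helper, not part of the script; a string stands in for the PIL image here):

```python
def check_example(example):
    """Check one record against the images + messages conversation shape."""
    assert isinstance(example["images"], list) and example["images"], "need at least one image"
    roles = {m["role"] for m in example["messages"]}
    assert roles <= {"system", "user", "assistant"}, f"unexpected roles: {roles}"
    for message in example["messages"]:
        # Each content entry is a typed part: an image placeholder or a text chunk
        for part in message["content"]:
            assert part["type"] in {"image", "text"}, f"unexpected part type: {part['type']}"
    return True

# Works on a plain dict; with a streaming dataset, check next(iter(ds)) the same way
record = {
    "images": ["<placeholder for a PIL.Image>"],
    "messages": [
        {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image"}]},
        {"role": "assistant", "content": [{"type": "text", "text": "The image shows..."}]},
    ],
}
print(check_example(record))  # True
```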
### Calculating Steps from Dataset Size

Since streaming datasets don't expose their length, use this formula:

```
steps = dataset_size / (batch_size * gradient_accumulation)
```

For example, with 10,000 samples, batch_size=2, and gradient_accumulation=4:

```
steps = 10000 / (2 * 4) = 1250 steps for 1 epoch
```

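This arithmetic is easy to wrap in a small helper (illustrative, not part of the script), using the defaults from the Options table; ceiling division counts a partial final batch as a full step:

```python
import math

def steps_per_epoch(dataset_size: int, batch_size: int = 2, gradient_accumulation: int = 4) -> int:
    """Optimizer steps needed to see every sample once."""
    effective_batch = batch_size * gradient_accumulation  # samples consumed per optimizer step
    return math.ceil(dataset_size / effective_batch)

print(steps_per_epoch(10_000))      # 1250
print(steps_per_epoch(10_000) * 3)  # 3750 -> pass as --max-steps for ~3 epochs
```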
---

## 🚀 Running on HF Jobs

```bash