---
viewer: false
tags:
- uv-script
- training
- vlm
- unsloth
- iconclass
- fine-tuning
---
# VLM Training with Unsloth

Fine-tune Vision-Language Models efficiently using Unsloth - get 2x faster training with lower memory usage!
## Example: Iconclass VLM
This directory contains scripts for fine-tuning VLMs to generate Iconclass metadata codes from artwork images. Iconclass is a hierarchical classification system used in art history and cultural heritage.
### What You'll Train

Given an artwork image, the model outputs structured JSON:

```json
{
  "iconclass-codes": ["25H213", "25H216", "25I"]
}
```
Where the codes represent:

- `25H213`: river
- `25H216`: waterfall
- `25I`: city-view with man-made constructions
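Because Iconclass is hierarchical, each code also implies its broader parents by notation prefix (`25H213` sits under `25H21`, `25H2`, `25H`, `25`, `2`), which is handy for coarse-grained evaluation. A minimal sketch - the bracket-qualifier handling here is a simplification of the full notation rules:

```python
def iconclass_ancestors(code: str) -> list[str]:
    """Expand an Iconclass code into its broader parent codes.

    Iconclass notation is (mostly) hierarchical by prefix.
    Bracketed qualifiers like "(+1)" are split off as a single unit,
    which simplifies the full notation rules.
    """
    # Split off a trailing bracketed qualifier, if any
    base, _, _qualifier = code.partition("(")
    return [base[:i] for i in range(1, len(base))]

print(iconclass_ancestors("25H213"))
# ['2', '25', '25H', '25H2', '25H21']
```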
## Quick Start

### Option 1: Run on HF Jobs (Recommended)

```bash
# Set your HF token
export HF_TOKEN=your_token_here

# Submit training job
python submit_training_job.py
```

That's it! Your model will train on cloud GPUs and automatically push to the Hub.
### Option 2: Run Locally (Requires GPU)

```bash
# Install UV (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Run training
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm
```
### Option 3: Quick Test (100 steps)

```bash
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm-test \
  --max-steps 100
```
## Requirements

### For HF Jobs

- Hugging Face account with Jobs access
- HF token with write permissions

### For Local Training

- CUDA-capable GPU (A100 recommended, A10G works)
- 40GB+ VRAM for 8B models (with 4-bit quantization)
- Python 3.11+
- UV installed
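A quick sanity check before a local run can save a failed launch. This helper is a sketch (not part of the training script) that only checks what the standard library can see; GPU and VRAM verification needs `nvidia-smi` or `torch`:

```python
import shutil
import sys


def check_prereqs(min_python: tuple[int, int] = (3, 11)) -> list[str]:
    """Return human-readable problems with the local setup (empty = OK)."""
    problems = []
    if sys.version_info < min_python:
        problems.append(
            f"Python {min_python[0]}.{min_python[1]}+ required, "
            f"found {sys.version_info.major}.{sys.version_info.minor}"
        )
    if shutil.which("uv") is None:
        problems.append("uv not found on PATH (see https://docs.astral.sh/uv/)")
    if shutil.which("nvidia-smi") is None:
        problems.append("nvidia-smi not found: cannot verify CUDA GPU / VRAM")
    return problems


for problem in check_prereqs():
    print("!", problem)
```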
## Configuration

### Quick Config via Python Script

Edit `submit_training_job.py`:

```python
# Model and dataset
BASE_MODEL = "Qwen/Qwen3-VL-8B-Instruct"
DATASET = "davanstrien/iconclass-vlm-sft"
OUTPUT_MODEL = "your-username/iconclass-vlm"

# Training settings
BATCH_SIZE = 2
GRADIENT_ACCUMULATION = 8
LEARNING_RATE = 2e-5
MAX_STEPS = None  # Auto-calculate for 1 epoch

# LoRA settings
LORA_R = 16
LORA_ALPHA = 32

# GPU
GPU_FLAVOR = "a100-large"  # or "a100", "a10g-large"
```
### Full CLI Options

```bash
uv run iconclass-vlm-sft.py --help
```

Key arguments:

| Argument | Default | Description |
|---|---|---|
| `--base-model` | Required | Base VLM (e.g., `Qwen/Qwen3-VL-8B-Instruct`) |
| `--dataset` | Required | Training dataset on HF Hub |
| `--output-model` | Required | Where to push your model |
| `--lora-r` | 16 | LoRA rank (higher = more capacity) |
| `--lora-alpha` | 32 | LoRA alpha (usually 2×r) |
| `--learning-rate` | 2e-5 | Learning rate |
| `--batch-size` | 2 | Per-device batch size |
| `--gradient-accumulation` | 8 | Gradient accumulation steps |
| `--max-steps` | Auto | Total training steps |
| `--num-epochs` | 1.0 | Epochs (if max-steps not set) |
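With the defaults above, each optimizer step consumes `batch-size × gradient-accumulation = 2 × 8 = 16` samples, and when `--max-steps` is unset the script trains for one epoch. A sketch of that arithmetic (the script's exact auto-calculation may differ slightly):

```python
import math


def effective_batch(batch_size: int, grad_accum: int, num_gpus: int = 1) -> int:
    """Samples consumed per optimizer step."""
    return batch_size * grad_accum * num_gpus


def steps_for_epochs(dataset_size: int, batch_size: int, grad_accum: int,
                     epochs: float = 1.0, num_gpus: int = 1) -> int:
    """Approximate --max-steps equivalent for a given number of epochs."""
    per_step = effective_batch(batch_size, grad_accum, num_gpus)
    return math.ceil(dataset_size * epochs / per_step)


print(steps_for_epochs(44_000, batch_size=2, grad_accum=8))  # 2750
```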
## Architecture

### What Makes This Fast?

**Unsloth optimizations** - 2x faster training through:

- Optimized CUDA kernels
- Better memory management
- Efficient gradient checkpointing

**4-bit quantization**:

- Loads the model in 4-bit precision
- Dramatically reduces VRAM usage
- Minimal impact on quality with LoRA

**LoRA (Low-Rank Adaptation)**:

- Only trains 0.1-1% of parameters
- Much faster than full fine-tuning
- Easy to merge back or share
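The "0.1-1% of parameters" figure follows directly from LoRA's factorization: a rank-`r` adapter on a `d_out × d_in` weight trains `r × (d_in + d_out)` parameters instead of updating all `d_in × d_out`. A back-of-the-envelope sketch (the 4096 layer size is illustrative, not Qwen3-VL's actual shape):

```python
def lora_params(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one rank-r LoRA adapter: A (r x d_in) + B (d_out x r)."""
    return r * (d_in + d_out)


d_in = d_out = 4096  # illustrative transformer hidden size
r = 16               # matches the LORA_R default above

full = d_in * d_out
adapter = lora_params(d_in, d_out, r)
print(f"{adapter:,} vs {full:,} -> {adapter / full:.2%} of the layer")
# 131,072 vs 16,777,216 -> 0.78% of the layer
```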
### Training Flow

```
Dataset (HF Hub)
      ↓
FastVisionModel.from_pretrained (4-bit)
      ↓
Apply LoRA adapters
      ↓
SFTTrainer (Unsloth-optimized)
      ↓
Push to Hub with model card
```
## Expected Performance

### Training Time (Qwen3-VL-8B on A100)

| Dataset Size | Batch Config | Time | Cost (est.) |
|---|---|---|---|
| 44K samples | BS=2, GA=8 | ~4h | $16 |
| 10K samples | BS=2, GA=8 | ~1h | $4 |
| 1K samples | BS=2, GA=8 | ~10min | $0.70 |

*BS = batch size, GA = gradient accumulation steps.*
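The estimates above work out to roughly 3 samples/s at an A100 rate of about $4/hour. A sketch for projecting your own run - both the throughput and the price are assumptions, so check current HF Jobs pricing and your actual logs:

```python
def estimate_run(dataset_size: int, samples_per_sec: float = 3.0,
                 usd_per_hour: float = 4.0) -> tuple[float, float]:
    """Rough (hours, dollars) for one epoch at an assumed throughput and GPU rate."""
    hours = dataset_size / samples_per_sec / 3600
    return hours, hours * usd_per_hour


hours, cost = estimate_run(44_000)
print(f"~{hours:.1f}h, ~${cost:.0f}")  # ~4.1h, ~$16
```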
### GPU Requirements

| Model Size | Min GPU | Recommended | VRAM Usage |
|---|---|---|---|
| 3B-4B | A10G | A100 | ~20GB |
| 7B-8B | A100 | A100 | ~35GB |
| 13B+ | A100 (80GB) | A100 (80GB) | ~60GB |
## Monitoring Your Job

### Via CLI

```bash
# Check status
hfjobs status your-job-id

# Stream logs
hfjobs logs your-job-id --follow

# List all jobs
hfjobs list
```
### Via Python

```python
from huggingface_hub import HfApi

api = HfApi()
job = api.get_job("your-job-id")
print(job.status)
print(job.logs())
```
### Via Web

Your job URL: `https://huggingface.co/jobs/your-username/your-job-id`
## Using Your Fine-Tuned Model

Note that the image must be passed to the tokenizer (processor) alongside the templated text - applying the chat template alone only produces the text tokens:

```python
from unsloth import FastVisionModel
from PIL import Image

# Load your model
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="your-username/iconclass-vlm",
    load_in_4bit=True,
    max_seq_length=2048,
)
FastVisionModel.for_inference(model)

# Prepare input
image = Image.open("artwork.jpg")
prompt = "Extract ICONCLASS labels for this image."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": prompt},
        ],
    }
]

# Apply the chat template, then tokenize the text and image together
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to("cuda")

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# {"iconclass-codes": ["31A235", "31A24(+1)", "61B(+54)"]}
```
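The decoded response includes the whole chat transcript, and generation is not guaranteed to emit valid JSON, so parse the output defensively. A minimal sketch (the helper name is illustrative, not part of the training script):

```python
import json
import re


def extract_codes(response: str) -> list[str]:
    """Pull the iconclass-codes list out of (possibly noisy) model output.

    Scans for {...} objects, tries the last one first, and falls back to
    an empty list if nothing is valid JSON with the expected key.
    """
    for candidate in reversed(re.findall(r"\{[^{}]*\}", response)):
        try:
            data = json.loads(candidate)
        except json.JSONDecodeError:
            continue
        codes = data.get("iconclass-codes")
        if isinstance(codes, list):
            return codes
    return []


print(extract_codes('assistant: {"iconclass-codes": ["31A235", "31A24(+1)"]}'))
# ['31A235', '31A24(+1)']
```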
## Files in This Directory

| File | Purpose |
|---|---|
| `iconclass-vlm-sft.py` | Main training script (UV script) |
| `submit_training_job.py` | Helper to submit HF Jobs |
| `README.md` | This file |
## Troubleshooting

### Out of Memory?

Reduce the per-device batch size, raising gradient accumulation to keep the effective batch size at 16:

```bash
--batch-size 1 --gradient-accumulation 16
```

### Training Too Slow?

Increase the batch size if you have spare VRAM:

```bash
--batch-size 4 --gradient-accumulation 4
```

### Model Not Learning?

Try adjusting the learning rate:

```bash
--learning-rate 5e-5  # Higher
--learning-rate 1e-5  # Lower
```

Or increase the LoRA rank:

```bash
--lora-r 32 --lora-alpha 64
```

### Jobs Failing?

Check the logs:

```bash
hfjobs logs your-job-id
```

Common issues:

- `HF_TOKEN` not set correctly
- Output model repo doesn't exist (create it first)
- GPU out of memory (reduce batch size)
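The first two issues can be caught offline before submitting. A preflight sketch (this helper is hypothetical, not part of `submit_training_job.py`; it cannot confirm the token is actually valid or that the repo exists on the Hub):

```python
import os
import re


def preflight(output_model: str, env=os.environ) -> list[str]:
    """Return a list of problems likely to fail a job submission (empty = OK)."""
    problems = []
    token = env.get("HF_TOKEN", "")
    if not token:
        problems.append("HF_TOKEN is not set")
    elif not token.startswith("hf_"):
        problems.append("HF_TOKEN does not look like a Hub token (expected 'hf_...')")
    if not re.fullmatch(r"[\w.-]+/[\w.-]+", output_model):
        problems.append(f"output model '{output_model}' is not in 'namespace/name' form")
    return problems


problems = preflight("your-username/iconclass-vlm")
print(problems or "looks good, submit away")
```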
## Related Resources

- Unsloth: https://github.com/unslothai/unsloth
- Unsloth Docs: https://docs.unsloth.ai/
- TRL: https://github.com/huggingface/trl
- HF Jobs: https://huggingface.co/docs/hub/spaces-sdks-jobs
- UV: https://docs.astral.sh/uv/
- Iconclass: https://iconclass.org
- Blog Post: https://danielvanstrien.xyz/posts/2025/iconclass-vlm-sft/
## Tips

- **Start small**: Test with `--max-steps 100` before full training
- **Use Wandb**: Add `--report-to wandb` for better monitoring
- **Save often**: Use `--save-steps 50` for checkpoints
- **Multiple GPUs**: The script automatically uses all available GPUs
- **Resume training**: Load from a checkpoint with `--resume-from-checkpoint`
## Citation

If you use this training setup, please cite:

```bibtex
@misc{iconclass-vlm-training,
  author = {Daniel van Strien},
  title = {Efficient VLM Fine-tuning with Unsloth for Art History},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/davanstrien/uv-scripts}}
}
```
Made with 🦥 Unsloth • Powered by 🤗 UV Scripts