---
viewer: false
tags:
- uv-script
- training
- vlm
- unsloth
- iconclass
- fine-tuning
---

# VLM Training with Unsloth

Fine-tune Vision-Language Models (VLMs) efficiently using [Unsloth](https://github.com/unslothai/unsloth) for up to 2x faster training with lower memory usage.

## 🎨 Example: Iconclass VLM

This directory contains scripts for fine-tuning VLMs to generate [Iconclass](https://iconclass.org) metadata codes from artwork images. Iconclass is a hierarchical classification system used in art history and cultural heritage.

### What You'll Train

Given an artwork image, the model outputs structured JSON:

```json
{
  "iconclass-codes": ["25H213", "25H216", "25I"]
}
```

Where the codes represent:

- `25H213`: river
- `25H216`: waterfall
- `25I`: city-view with man-made constructions
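A label in this format can be sanity-checked with a few lines of Python. This is a minimal sketch; the `validate_label` helper is illustrative and not part of the training script:

```python
import json

# Illustrative helper: check that a label matches the expected schema,
# i.e. a JSON object with a list of non-empty code strings.
def validate_label(raw: str) -> list[str]:
    data = json.loads(raw)
    codes = data["iconclass-codes"]
    assert isinstance(codes, list)
    assert all(isinstance(c, str) and c for c in codes)
    return codes

print(validate_label('{"iconclass-codes": ["25H213", "25H216", "25I"]}'))
# -> ['25H213', '25H216', '25I']
```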

## 🚀 Quick Start

### Option 1: Run on HF Jobs (Recommended)

```bash
# Set your HF token
export HF_TOKEN=your_token_here

# Submit training job
python submit_training_job.py
```

That's it! Your model will train on cloud GPUs and automatically push to the Hub.

### Option 2: Run Locally (Requires GPU)

```bash
# Install UV (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Run training
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm
```

### Option 3: Quick Test (100 steps)

```bash
uv run iconclass-vlm-sft.py \
  --base-model Qwen/Qwen3-VL-8B-Instruct \
  --dataset davanstrien/iconclass-vlm-sft \
  --output-model your-username/iconclass-vlm-test \
  --max-steps 100
```

## 📋 Requirements

### For HF Jobs

- Hugging Face account with Jobs access
- HF token with write permissions

### For Local Training

- CUDA-capable GPU (A100 recommended; A10G works)
- 40GB+ VRAM for 8B models (with 4-bit quantization)
- Python 3.11+
- [UV](https://docs.astral.sh/uv/) installed
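The VRAM figure is dominated by activations, gradients, and optimizer state rather than the weights themselves. As back-of-the-envelope arithmetic (illustrative, not a measurement), the 4-bit weights of an 8B model occupy only a few gigabytes:

```python
# Back-of-the-envelope weight memory for an 8B-parameter model in 4-bit.
# Illustrative only; real usage adds activations, gradients, LoRA adapter
# state, and optimizer state on top of this.
params = 8e9
bytes_per_param = 0.5  # 4 bits = half a byte
weight_gib = params * bytes_per_param / 2**30
print(f"{weight_gib:.1f} GiB of weights")  # ~3.7 GiB
```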

## 🎛️ Configuration

### Quick Config via Python Script

Edit `submit_training_job.py`:

```python
# Model and dataset
BASE_MODEL = "Qwen/Qwen3-VL-8B-Instruct"
DATASET = "davanstrien/iconclass-vlm-sft"
OUTPUT_MODEL = "your-username/iconclass-vlm"

# Training settings
BATCH_SIZE = 2
GRADIENT_ACCUMULATION = 8
LEARNING_RATE = 2e-5
MAX_STEPS = None  # Auto-calculate for 1 epoch

# LoRA settings
LORA_R = 16
LORA_ALPHA = 32

# GPU
GPU_FLAVOR = "a100-large"  # or "a100", "a10g-large"
```

### Full CLI Options

```bash
uv run iconclass-vlm-sft.py --help
```

Key arguments:

| Argument | Default | Description |
|----------|---------|-------------|
| `--base-model` | Required | Base VLM (e.g., Qwen/Qwen3-VL-8B-Instruct) |
| `--dataset` | Required | Training dataset on HF Hub |
| `--output-model` | Required | Where to push your model |
| `--lora-r` | 16 | LoRA rank (higher = more capacity) |
| `--lora-alpha` | 32 | LoRA alpha (usually 2×r) |
| `--learning-rate` | 2e-5 | Learning rate |
| `--batch-size` | 2 | Per-device batch size |
| `--gradient-accumulation` | 8 | Gradient accumulation steps |
| `--max-steps` | Auto | Total training steps |
| `--num-epochs` | 1.0 | Epochs (if max-steps not set) |

## 🏗️ Architecture

### What Makes This Fast?

1. **Unsloth optimizations**: 2x faster training through:
   - Optimized CUDA kernels
   - Better memory management
   - Efficient gradient checkpointing

2. **4-bit quantization**:
   - Loads the model in 4-bit precision
   - Dramatically reduces VRAM usage
   - Minimal impact on quality with LoRA

3. **LoRA (Low-Rank Adaptation)**:
   - Only trains 0.1-1% of parameters
   - Much faster than full fine-tuning
   - Easy to merge back or share
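The 0.1-1% figure follows directly from the adapter shapes: for each frozen weight matrix, LoRA trains only two small matrices A (r × d_in) and B (d_out × r). A quick sketch for a single linear layer (the layer sizes below are illustrative, not the actual Qwen3-VL dimensions):

```python
# Trainable-parameter fraction for LoRA on one linear layer.
# Layer sizes are illustrative, not the actual model dimensions.
d_in, d_out, r = 4096, 4096, 16

frozen = d_in * d_out            # base weight matrix, never updated
trainable = r * (d_in + d_out)   # A (r x d_in) + B (d_out x r)

print(f"{trainable:,} trainable vs {frozen:,} frozen "
      f"({trainable / frozen:.2%})")  # well under 1% of the layer
```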

### Training Flow

```
Dataset (HF Hub)
      ↓
FastVisionModel.from_pretrained (4-bit)
      ↓
Apply LoRA adapters
      ↓
SFTTrainer (Unsloth-optimized)
      ↓
Push to Hub with model card
```

## 📊 Expected Performance

### Training Time (Qwen3-VL-8B on A100)

| Dataset Size | Batch Config | Time | Cost (est.) |
|--------------|--------------|------|-------------|
| 44K samples | BS=2, GA=8 | ~4h | $16 |
| 10K samples | BS=2, GA=8 | ~1h | $4 |
| 1K samples | BS=2, GA=8 | ~10min | $0.70 |

*BS = Batch Size, GA = Gradient Accumulation*
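These timings scale with the number of optimizer steps, which you can estimate from the dataset size and the effective batch size (BS × GA). A quick sketch, assuming a single GPU:

```python
# Estimate optimizer steps for one epoch (single GPU assumed).
import math

def steps_per_epoch(num_samples: int, batch_size: int, grad_accum: int) -> int:
    effective_batch = batch_size * grad_accum  # samples per optimizer step
    return math.ceil(num_samples / effective_batch)

print(steps_per_epoch(44_000, batch_size=2, grad_accum=8))  # -> 2750
```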

### GPU Requirements

| Model Size | Min GPU | Recommended | VRAM Usage |
|------------|---------|-------------|------------|
| 3B-4B | A10G | A100 | ~20GB |
| 7B-8B | A100 | A100 | ~35GB |
| 13B+ | A100 (80GB) | A100 (80GB) | ~60GB |

## 🔍 Monitoring Your Job

### Via CLI

```bash
# Check status
hfjobs status your-job-id

# Stream logs
hfjobs logs your-job-id --follow

# List all jobs
hfjobs list
```

### Via Python

```python
from huggingface_hub import HfApi

api = HfApi()

# Check status
job = api.inspect_job(job_id="your-job-id")
print(job.status)

# Stream logs
for line in api.fetch_job_logs(job_id="your-job-id"):
    print(line)
```

### Via Web

Your job URL: `https://huggingface.co/jobs/your-username/your-job-id`

## 🎯 Using Your Fine-Tuned Model

```python
from unsloth import FastVisionModel
from PIL import Image

# Load your model
model, tokenizer = FastVisionModel.from_pretrained(
    model_name="your-username/iconclass-vlm",
    load_in_4bit=True,
    max_seq_length=2048,
)
FastVisionModel.for_inference(model)

# Prepare input
image = Image.open("artwork.jpg")
prompt = "Extract ICONCLASS labels for this image."

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": prompt},
        ],
    }
]

# Apply the chat template, then tokenize the text and image together
input_text = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
inputs = tokenizer(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt",
).to("cuda")

# Generate
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    top_p=0.9,
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
# {"iconclass-codes": ["31A235", "31A24(+1)", "61B(+54)"]}
```
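The decoded response can include the prompt and chat-template text around the JSON object. A hedged helper for pulling the codes out (the `extract_codes` name and regex are illustrative, not part of the script):

```python
import json
import re

# Illustrative helper: pull the iconclass-codes list out of a decoded
# response that may contain surrounding prompt or template text.
def extract_codes(response: str) -> list[str]:
    match = re.search(r'\{[^{}]*"iconclass-codes"[^{}]*\}', response)
    if match is None:
        return []
    return json.loads(match.group(0)).get("iconclass-codes", [])

print(extract_codes('assistant: {"iconclass-codes": ["31A235", "61B(+54)"]}'))
# -> ['31A235', '61B(+54)']
```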

## 📦 Files in This Directory

| File | Purpose |
|------|---------|
| `iconclass-vlm-sft.py` | Main training script (UV script) |
| `submit_training_job.py` | Helper to submit HF Jobs |
| `README.md` | This file |

## 🛠️ Troubleshooting

### Out of Memory?

Reduce batch size or increase gradient accumulation:

```bash
--batch-size 1 --gradient-accumulation 16
```

### Training Too Slow?

Increase batch size if you have the VRAM:

```bash
--batch-size 4 --gradient-accumulation 4
```
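Both adjustments keep the effective batch size, and therefore the optimization behaviour, the same while trading per-step memory for wall-clock speed. A quick check:

```python
# Each (batch_size, gradient_accumulation) pair yields the same effective
# batch size, so only memory use and speed change between them.
for bs, ga in [(1, 16), (2, 8), (4, 4)]:
    print(f"BS={bs}, GA={ga} -> effective batch {bs * ga}")
```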

### Model Not Learning?

Try adjusting the learning rate:

```bash
--learning-rate 5e-5  # Higher
--learning-rate 1e-5  # Lower
```

Or increase the LoRA rank:

```bash
--lora-r 32 --lora-alpha 64
```

### Jobs Failing?

Check the logs:

```bash
hfjobs logs your-job-id
```

Common issues:

- `HF_TOKEN` not set correctly
- Output model repo doesn't exist (create it first)
- GPU out of memory (reduce batch size)
## 📚 Related Resources

- **Unsloth**: https://github.com/unslothai/unsloth
- **Unsloth Docs**: https://docs.unsloth.ai/
- **TRL**: https://github.com/huggingface/trl
- **HF Jobs**: https://huggingface.co/docs/hub/spaces-sdks-jobs
- **UV**: https://docs.astral.sh/uv/
- **Iconclass**: https://iconclass.org
- **Blog Post**: https://danielvanstrien.xyz/posts/2025/iconclass-vlm-sft/

## 💡 Tips

1. **Start small**: Test with `--max-steps 100` before full training
2. **Use Weights & Biases**: Add `--report-to wandb` for better monitoring
3. **Save often**: Use `--save-steps 50` for checkpoints
4. **Multiple GPUs**: The script automatically uses all available GPUs
5. **Resume training**: Load from a checkpoint with `--resume-from-checkpoint`

## 📖 Citation

If you use this training setup, please cite:

```bibtex
@misc{iconclass-vlm-training,
  author = {Daniel van Strien},
  title = {Efficient VLM Fine-tuning with Unsloth for Art History},
  year = {2025},
  publisher = {GitHub},
  howpublished = {\url{https://github.com/davanstrien/uv-scripts}}
}
```

---

Made with 🦥 [Unsloth](https://github.com/unslothai/unsloth) •
Powered by 🤗 [UV Scripts](https://huggingface.co/uv-scripts)