# Unsloth + HF Jobs: Zero-Setup Efficient Training
Fine-tune LLMs and VLMs on cloud GPUs without any environment setup. Unsloth provides 2x faster training with 60% less VRAM, and HF Jobs gives you on-demand A100s that stream data directly from the Hub.
No Docker. No pip install. No CUDA setup. Just run.
## What You Need
- A Hugging Face account with a token
- The HF CLI, installed with:

  ```bash
  curl -LsSf https://hf.co/cli/install.sh | bash
  ```

- A dataset on the Hub (see format requirements below)
## Prepare Your Data

### For VLM Fine-tuning (images + text)

Your dataset needs two columns: `images` and `messages`.

```python
{
    "images": [<PIL.Image>],  # List of images
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": "What's in this image?"}
            ]
        },
        {
            "role": "assistant",
            "content": [
                {"type": "text", "text": "A golden retriever playing fetch in a park."}
            ]
        }
    ]
}
```
See davanstrien/iconclass-vlm-sft for a working example.
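Before paying for GPU time, it is worth checking a few rows against the format above. A minimal structural check can be sketched in plain Python (the `validate_vlm_example` helper is illustrative, not part of the scripts):

```python
def validate_vlm_example(example: dict) -> list[str]:
    """Return a list of problems with one dataset row (empty list = OK)."""
    problems = []
    if not isinstance(example.get("images"), list):
        problems.append("missing 'images' list")
    messages = example.get("messages")
    if not isinstance(messages, list) or not messages:
        problems.append("missing 'messages' list")
        return problems
    for i, msg in enumerate(messages):
        if msg.get("role") not in ("system", "user", "assistant"):
            problems.append(f"messages[{i}]: unexpected role {msg.get('role')!r}")
        for part in msg.get("content", []):
            if part.get("type") not in ("image", "text"):
                problems.append(f"messages[{i}]: unexpected content type {part.get('type')!r}")
    # Each {"type": "image"} placeholder should have a matching entry in "images"
    n_placeholders = sum(
        1 for m in messages for p in m.get("content", []) if p.get("type") == "image"
    )
    if n_placeholders != len(example.get("images") or []):
        problems.append("number of image placeholders != number of images")
    return problems
```

Run it over the first few rows of your dataset (e.g. via `load_dataset(..., streaming=True)`) and fix anything it flags before submitting a job.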
### For Continued Pretraining (text only)

Any dataset with a `text` column works:

```json
{"text": "Your domain-specific text here..."}
```

Use `--text-column` if your column has a different name.
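If your corpus is still in raw files, converting it to this one-record-per-line format needs only the standard library (the function name and file paths below are placeholders):

```python
import json

def docs_to_jsonl(docs: list[str], out_path: str) -> None:
    """Write each document as one {"text": ...} JSON line, the format shown above."""
    with open(out_path, "w", encoding="utf-8") as f:
        for doc in docs:
            f.write(json.dumps({"text": doc}, ensure_ascii=False) + "\n")
```

The resulting file can be loaded with `load_dataset("json", data_files=out_path)` and pushed to the Hub with `push_to_hub` from the `datasets` library.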
## Step 1: Test Locally (Optional)

Make sure your dataset format is correct by running a quick local test:

```bash
# Check the script works (shows help)
uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py --help
```
## Step 2: Run on HF Jobs

### Fine-tune a Vision-Language Model

```bash
hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
  --flavor a100-large --secrets HF_TOKEN --timeout 4h \
  -- --dataset your-username/your-vlm-dataset \
  --num-epochs 1 \
  --eval-split 0.2 \
  --output-repo your-username/my-vlm
```
What this does:
- Spins up an A100 GPU on HF Jobs
- Downloads and installs all dependencies automatically
- Loads Qwen3-VL-8B with Unsloth optimizations
- Trains for 1 epoch, holding out 20% for evaluation
- Uploads your fine-tuned model to the Hub
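To sanity-check how long an epoch-based run will take, the number of optimizer steps per epoch follows from the dataset size, eval split, and batch size. A quick estimator (the gradient-accumulation value here is an assumed illustration; check the script's `--help` for its actual default):

```python
import math

def steps_per_epoch(n_examples: int, eval_split: float,
                    batch_size: int, grad_accum: int = 4) -> int:
    """Optimizer steps for one epoch on the training portion of the data."""
    train_examples = int(n_examples * (1 - eval_split))
    effective_batch = batch_size * grad_accum  # examples consumed per optimizer step
    return math.ceil(train_examples / effective_batch)

# e.g. 10,000 examples, 20% held out, per-device batch size 2:
print(steps_per_epoch(10_000, 0.2, 2))  # 8000 train examples / 8 per step = 1000
```

Multiply by your measured seconds per step to decide whether the `--timeout` value is generous enough.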
### Continued Pretraining on Domain Text

```bash
hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/continued-pretraining.py \
  --flavor a100-large --secrets HF_TOKEN \
  -- --dataset your-username/domain-corpus \
  --text-column content \
  --max-steps 1000 \
  --output-repo your-username/domain-llm
```
## Step 3: Monitor Progress (Optional)

Add Trackio for real-time training metrics:

```bash
hf jobs uv run \
  https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py \
  --flavor a100-large --secrets HF_TOKEN \
  -- --dataset your-username/dataset \
  --trackio-space your-username/trackio \
  --output-repo your-username/my-model
```
## Available Scripts

| Script | Base Model | Best For |
|---|---|---|
| `sft-qwen3-vl.py` | Qwen3-VL-8B | High-quality VLM fine-tuning |
| `sft-gemma3-vlm.py` | Gemma 3 4B | Lightweight/faster VLM tasks |
| `continued-pretraining.py` | Qwen3-0.6B | Domain adaptation, new languages |
## Common Options

| Option | Description | Default |
|---|---|---|
| `--dataset` | HF dataset ID | required |
| `--output-repo` | Where to save trained model | required |
| `--max-steps` | Number of training steps | 500 |
| `--num-epochs` | Train for N epochs instead of steps | - |
| `--eval-split` | Fraction for evaluation (e.g., 0.2) | 0 (disabled) |
| `--batch-size` | Per-device batch size | 2 |
| `--learning-rate` | Learning rate | 2e-4 |
| `--trackio-space` | HF Space for live monitoring | - |
Run any script with `--help` to see all options:

```bash
uv run https://huggingface.co/datasets/uv-scripts/unsloth-jobs/raw/main/sft-qwen3-vl.py --help
```
## Tips

- **Start small:** Use `--max-steps 10` to verify everything works before a full run
- **Use eval splits:** `--eval-split 0.2` helps detect overfitting
- **Check costs:** A100-large is ~$4/hr; estimate your training time first
- **Streaming for large datasets:** Add `--streaming` if your dataset is very large
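The cost check is simple arithmetic. A rough estimator, using the approximate ~$4/hr figure from the tip above (not an official price; confirm current rates before a long run):

```python
def estimated_cost(total_steps: int, seconds_per_step: float,
                   usd_per_hour: float = 4.0) -> float:
    """Rough job cost in USD from step count and measured per-step time."""
    hours = total_steps * seconds_per_step / 3600
    return round(hours * usd_per_hour, 2)

# 500 steps at ~3 s/step on an A100-large:
print(estimated_cost(500, 3.0))  # ~0.42 hours of GPU time -> about $1.67
```

Measure `seconds_per_step` from a short `--max-steps 10` run before committing to the full job.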
## How It Works

- `hf jobs uv run` spins up an A100 GPU
- UV reads dependencies from the script header and installs them
- Unsloth loads the model with 4-bit quantization and LoRA
- Training streams data directly from the Hub (fast!)
- Your fine-tuned adapter uploads to the Hub automatically
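The "script header" that UV reads is PEP 723 inline metadata: a structured comment block at the top of the file. A representative header looks like this (the package list is illustrative, not the scripts' exact dependencies):

```python
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "unsloth",
#     "trl",
#     "datasets",
# ]
# ///
```

Because the dependencies travel with the script itself, `uv run <url>` can build the environment on the job machine with no Dockerfile or requirements.txt.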
## Learn More
- HF Jobs Quickstart - Getting started with HF Jobs
- Unsloth Documentation - Training optimizations
- UV Scripts Guide - How inline dependencies work