# qwen3-4b-sft-v5

A LoRA adapter for Qwen/Qwen3-4B-Instruct-2507, fine-tuned for structured output tasks (JSON, CSV, XML, YAML, TOML).
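
A quick way to try the adapter is to load it on top of the base model with PEFT. This is a minimal sketch, assuming the adapter weights are published on the Hub under this repo id; the prompt is illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in bf16, matching the training precision.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Instruct-2507")

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "schroneko/qwen3-4b-sft-v5")

messages = [{"role": "user", "content": "Convert to JSON: name=Alice, age=30"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```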

## Training Details

- Base model: Qwen/Qwen3-4B-Instruct-2507
- Dataset: u-10bei/structured_data_with_cot_dataset_512_v5
- LoRA rank: 64, alpha: 128, dropout: 0.0 (see the configuration sketch after this list)
- Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- Epochs: 3
- Learning rate: 2e-5 (cosine scheduler, warmup 10%)
- Batch size: 4, gradient accumulation: 4 (effective batch size: 16)
- Max sequence length: 2048
- Training precision: bf16
- Loss masking: assistant-only, with the chain-of-thought additionally masked up to the `Output:` marker (see the masking sketch after this list)
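
The hyperparameters above map directly onto a PEFT/Transformers configuration. This is a sketch that reproduces the listed values, not necessarily the exact training script; the output path is hypothetical.

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="qwen3-4b-sft-v5",      # hypothetical output path
    num_train_epochs=3,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,     # effective batch size 16
    bf16=True,
)
# The max sequence length (2048) is enforced at the tokenization/packing
# step, not via TrainingArguments.
```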
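
The CoT masking means the loss is computed only on tokens after the `Output:` marker inside the assistant turn, so the chain-of-thought itself is not directly supervised. The following is an illustrative sketch of how such labels might be built; `mask_cot_labels` is a hypothetical helper, not the actual training code.

```python
IGNORE_INDEX = -100  # ignored by PyTorch's cross-entropy loss

def mask_cot_labels(input_ids, tokenizer, marker="Output:"):
    """Return labels where everything up to and including the marker is
    masked, so loss is computed only on the final structured output."""
    labels = list(input_ids)
    text = tokenizer.decode(input_ids)
    pos = text.find(marker)
    if pos == -1:
        return labels  # no marker found: leave labels unmasked
    # Count tokens in the prefix (prompt + CoT + marker). Re-encoding the
    # decoded text is an approximation; a real script would track token
    # offsets directly.
    prefix_ids = tokenizer(
        text[: pos + len(marker)], add_special_tokens=False
    )["input_ids"]
    for i in range(min(len(prefix_ids), len(labels))):
        labels[i] = IGNORE_INDEX
    return labels
```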

## Training Results

- Total steps: 810
- Final training loss: 0.37 (down from an initial 2.49)
- Training time: ~113 minutes on NVIDIA GB10 (DGX Spark)
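
(For reference, 810 steps over 3 epochs at an effective batch size of 16 is consistent with a dataset of roughly 4,300 examples: 810 / 3 × 16 ≈ 4,320.)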

## Hardware

- NVIDIA DGX Spark (ARM64)
- GPU: NVIDIA GB10 (sm_121, Blackwell)
- VRAM: 119.7 GB unified memory

## Framework Versions

- PyTorch nightly (2.11.0.dev20260105+cu130)
- PEFT 0.18.1
- Transformers (latest at the time of training)
- Accelerate (latest at the time of training)