# Qwen3-4B Structured Output LoRA (v5)
LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 with QLoRA (4-bit quantization, trained with Unsloth). This repository contains the LoRA adapter weights only; the base model must be loaded separately.
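A minimal loading sketch using `transformers` and `peft` (the adapter repo id is taken from this model card; 4-bit loading via bitsandbytes is optional and omitted here for clarity):

```python
# Sketch: load the base model, then attach this LoRA adapter with peft.
# Assumes `transformers` and `peft` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "astom-M/lora-sft-v5"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)
# Attach the adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
```

To fold the adapter into the base weights for faster inference, `model.merge_and_unload()` can be called afterwards.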
## Training Objective
Improve structured-output accuracy across JSON, YAML, XML, TOML, and CSV. The loss is applied only to the final assistant output; chain-of-thought (CoT) tokens are masked.
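The masking scheme above can be sketched as follows: labels for every token before the final assistant output are set to -100, the index that cross-entropy loss ignores in PyTorch. The token ids and the position of the output boundary here are illustrative, not produced by a real tokenizer.

```python
IGNORE_INDEX = -100  # PyTorch cross-entropy skips targets with this value

def mask_labels(input_ids, final_output_start):
    """Keep labels only from `final_output_start` onward; mask the rest."""
    return [
        tok if i >= final_output_start else IGNORE_INDEX
        for i, tok in enumerate(input_ids)
    ]

# Example: prompt + CoT occupy positions 0-5; the structured output starts at 6.
input_ids = [101, 7, 8, 9, 10, 11, 42, 43, 44]
labels = mask_labels(input_ids, final_output_start=6)
print(labels)  # [-100, -100, -100, -100, -100, -100, 42, 43, 44]
```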
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit) + rsLoRA
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 2e-05
- LoRA: r=64, alpha=128
- Effective batch size: 16
## Sources & Terms
- Training data: u-10bei/structured_data_with_cot_dataset_512_v2
- Dataset license: MIT License
Adapter repository: astom-M/lora-sft-v5 (base model: Qwen/Qwen3-4B-Instruct-2507).