qwen3-4b-structured-output-lora-v7
This repository provides a LoRA adapter (v7) fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit, Unsloth).
It contains the LoRA adapter weights only; the base model must be loaded separately.
Version: v7 - Data Quality Improvement
This is v7 of the SFT training, focused on data quality: 3x TOML upsampling, XML escaping fixes, and deeply nested ("Deep Structure") YAML/JSON samples. v5.2 scored 0.74648; v7 aims to improve on that by addressing data balance and quality issues.
Changes from v5.2
| Parameter | v5 | v7 | Rationale |
|---|---|---|---|
| Dataset | 3,869 samples | 5,135 samples | TOML 3x + XML escape + Deep Structure |
| MAX_SEQ_LEN | 1024 | 1024 | Same |
| Epochs | 2 | 1 | Reduced to prevent overfitting |
| Learning Rate | 5e-6 | 1e-06 | Lower for more stable training |
| Warmup Ratio | 10% | 10% | Same |
v7 Key Improvements
- TOML 3x Upsampling: 611 → 1,833 samples (addresses TOML underrepresentation)
- XML Escaping: Proper escaping of &, <, >, ", ' characters (see the sketch after this list)
- Deep Structure: Added complex nested YAML/JSON data
- Same hyperparameters: Maintained v5.2's proven settings (LR=1e-06, Epoch=1)
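As an illustration of the escaping rule above, the Python standard library covers all five characters. This is a sketch of the idea only; the actual preprocessing code is not part of this repository, and escape_xml_text is a hypothetical helper name.

```python
# Illustrative sketch of XML escaping (assumed approach, not the actual preprocessing script).
from xml.sax.saxutils import escape

def escape_xml_text(value: str) -> str:
    # escape() handles &, <, > by default; the entities dict adds " and '.
    return escape(value, {'"': "&quot;", "'": "&apos;"})

print(escape_xml_text('5 < 6 & "quotes"'))  # 5 &lt; 6 &amp; &quot;quotes&quot;
```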
Score History
| Version | Data | Score | Notes |
|---|---|---|---|
| v2 | 3,933 | 0.75074 | Best score baseline |
| v5 | 3,869 | 0.73981 | Epoch=2 overfitting |
| v5.2 | 3,869 | 0.74648 | Hyperparam tuning |
| v7 | 5,135 | (pending) | Data quality improvement |
Training Objective
This adapter is trained to improve structured output accuracy (JSON / YAML / XML / TOML / CSV) for the StructEval-T benchmark.
Loss is applied only to the final assistant output, while intermediate reasoning (Chain-of-Thought) is masked.
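In practice this masking amounts to replacing the label ids of every token before the final answer with -100, the index ignored by PyTorch's cross-entropy loss. The snippet below is a minimal sketch of that idea; the actual training script is not included here, and answer_start is a hypothetical argument marking where the final answer begins.

```python
# Minimal sketch of CoT masking (assumed implementation): only tokens of the
# final assistant output contribute to the loss; everything else is ignored.
def build_labels(input_ids: list[int], answer_start: int) -> list[int]:
    """answer_start: index of the first token of the final assistant answer."""
    IGNORE_INDEX = -100  # ignored by PyTorch cross-entropy loss
    labels = list(input_ids)
    for i in range(answer_start):
        labels[i] = IGNORE_INDEX  # mask prompt + chain-of-thought tokens
    return labels
```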
Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit, Unsloth)
- Max sequence length: 1024
- Epochs: 1
- Learning rate: 1e-06
- Warmup ratio: 10%
- Batch size: 2 (effective: 16)
- Gradient accumulation: 8
- LoRA: r=64, alpha=128
- CoT masking: enabled (loss on final output only)
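For reference, the sketch below shows how these settings could be wired up with Unsloth and Transformers. It is an assumed reconstruction, not the actual training script: the LoRA target modules are an assumption, and the resulting arguments would typically be passed to a trainer such as TRL's SFTTrainer together with the CoT-masked dataset.

```python
# Assumed reconstruction of the training setup above (not the actual script).
from unsloth import FastLanguageModel
from transformers import TrainingArguments

# Load the 4-bit base model (QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters; target_modules is an assumption (typical attention/MLP projections).
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Optimizer/schedule settings from the list above.
training_args = TrainingArguments(
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size 16
    num_train_epochs=1,
    learning_rate=1e-6,
    warmup_ratio=0.1,
    output_dir="outputs",
)
```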
Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "kmd2525/qwen3-4b-structured-output-lora-v7"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
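With the adapter attached, generation follows the usual chat-template flow. The prompt below is only an illustrative structured-output request, not taken from the benchmark.

```python
# Example inference (illustrative prompt; adjust to your task).
messages = [
    {"role": "user", "content": "Return a JSON object with fields 'name' and 'age' for a user named Alice, age 30."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```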
Sources & Terms (IMPORTANT)
Training data: u-10bei/structured_data_with_cot_dataset_512_v2
Dataset License: MIT. The dataset is used and distributed under the terms of the MIT License; users must comply with it (including the copyright notice requirement) and with the base model's original terms of use.