---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
  - u-10bei/structured_data_with_cot_dataset_512_v2
language:
  - en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
  - qlora
  - lora
  - structured-output
---

# Qwen3-4B Structured Output LoRA (v6)

A LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 with QLoRA (4-bit quantization, trained with Unsloth). This repository contains the LoRA adapter weights only; the base model must be loaded separately.
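A minimal loading sketch, assuming `transformers`, `peft`, and `bitsandbytes` are installed. The 4-bit NF4 settings mirror a typical QLoRA setup (the exact quantization config used in training is not stated here), and the adapter id is a placeholder for wherever this adapter is hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/Qwen3-4B-Instruct-2507"
adapter_id = "<this-adapter-repo-id>"  # placeholder: replace with this repo's id

# 4-bit NF4 quantization, in the spirit of the QLoRA training setup (assumed)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Return the record for Alice, age 30, as JSON."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```

`PeftModel.from_pretrained` keeps the base weights quantized and applies the adapter on top; for faster inference the adapter can also be merged into an unquantized copy of the base model.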

## Training Objective

Improves structured-output accuracy across JSON, YAML, XML, TOML, and CSV. The training loss is applied only to the final assistant output; chain-of-thought (CoT) tokens are masked out.
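The masking scheme can be sketched with a hypothetical label-building helper: every token before the final assistant output gets the label `-100`, the index that PyTorch's cross-entropy loss ignores, so the prompt and CoT contribute nothing to the gradient. Token ids and the split position below are illustrative only:

```python
IGNORE_INDEX = -100  # ignored by PyTorch's cross-entropy loss

def build_labels(input_ids, final_output_start):
    """Copy input_ids, masking prompt and CoT tokens so the loss
    covers only the final assistant output (tokens from
    final_output_start onward)."""
    return [
        IGNORE_INDEX if i < final_output_start else tok
        for i, tok in enumerate(input_ids)
    ]

# Toy example: 6-token sequence whose final structured output starts at index 4
labels = build_labels([101, 7, 8, 9, 55, 56], final_output_start=4)
print(labels)  # [-100, -100, -100, -100, 55, 56]
```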

## Training Configuration

- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit) + rsLoRA
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 2e-06
- LoRA: r=64, alpha=128
- Effective batch size: 16
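With r=64 and alpha=128, the rsLoRA choice matters for the adapter's scaling factor: standard LoRA scales the low-rank update by alpha/r, while rank-stabilized LoRA (rsLoRA) uses alpha/sqrt(r), which keeps the update from shrinking as rank grows. A quick check of the numbers for this configuration:

```python
import math

r, alpha = 64, 128

standard_scale = alpha / r           # classic LoRA scaling
rslora_scale = alpha / math.sqrt(r)  # rank-stabilized LoRA scaling

print(standard_scale)  # 2.0
print(rslora_scale)    # 16.0
```

In `peft` terms, this corresponds to passing `use_rslora=True` to `LoraConfig` alongside `r=64` and `lora_alpha=128`.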

## Sources & Terms

- Training data: u-10bei/structured_data_with_cot_dataset_512_v2
- Dataset license: MIT License