---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- u-10bei/structured_data_with_cot_dataset_512_v2
- u-10bei/structured_data_with_cot_dataset_512_v4
- u-10bei/structured_data_with_cot_dataset_512_v5
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---

# qwen3-4b-structured-output-lora

This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.

This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.

## Training Objective

This adapter is trained to improve **structured output accuracy**
across JSON, YAML, XML, TOML, and CSV.

Chain-of-Thought reasoning was removed from the training data,
and the loss is applied directly to the final structured output.
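The CoT-removal step can be sketched as follows. The actual reasoning markers used in the u-10bei datasets are not documented here, so the `<think>...</think>` delimiter below is an illustrative assumption:

```python
import re

# Hypothetical CoT delimiter; the real datasets may mark reasoning differently.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_cot(assistant_text: str) -> str:
    """Remove Chain-of-Thought spans so the loss falls only on the final structured output."""
    return THINK_BLOCK.sub("", assistant_text).strip()

example = '<think>The user wants name and age.</think>\n{"name": "Alice", "age": 30}'
print(strip_cot(example))  # → {"name": "Alice", "age": 30}
```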

## Training Configuration

- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 1e-06
- LoRA: r=96, alpha=192
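The LoRA hyperparameters above map onto a `peft` `LoraConfig` roughly as shown below. The `target_modules` list and the dropout value are assumptions (the card does not state which modules were adapted), chosen as the typical attention/MLP projections for Qwen-style models:

```python
from peft import LoraConfig

# r and lora_alpha come from the card; target_modules and lora_dropout are
# assumed, typical choices and may differ from the actual training run.
lora_config = LoraConfig(
    r=96,
    lora_alpha=192,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.0,  # not stated on the card; assumed
    task_type="CAUSAL_LM",
)
```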

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "mt628754/qwen3-struct-sft"

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model first; the LoRA adapter weights are applied on top.
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# Example: generate a structured response using the chat template.
messages = [{"role": "user", "content": "Extract name and age as JSON: Alice, 30."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

## Sources & Terms (IMPORTANT)

Training data:

- u-10bei/structured_data_with_cot_dataset_512_v2
- u-10bei/structured_data_with_cot_dataset_512_v4
- u-10bei/structured_data_with_cot_dataset_512_v5

Data preprocessing: the three dataset versions above were combined; unparseable outputs were removed, exact duplicates were dropped, and Chain-of-Thought reasoning was stripped from assistant responses.
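As an illustration, the removal of unparseable outputs and the deduplication step could look like this for the JSON subset. This is a sketch under assumed field names (`prompt`, `output`); the actual preprocessing pipeline is not published:

```python
import json

def clean_json_examples(examples):
    """Keep only examples whose output parses as JSON, dropping exact duplicates."""
    seen = set()
    kept = []
    for ex in examples:
        try:
            json.loads(ex["output"])  # drop unparseable outputs
        except json.JSONDecodeError:
            continue
        key = (ex["prompt"], ex["output"])
        if key in seen:  # deduplicate on (prompt, output) pairs
            continue
        seen.add(key)
        kept.append(ex)
    return kept

raw = [
    {"prompt": "p1", "output": '{"a": 1}'},
    {"prompt": "p1", "output": '{"a": 1}'},  # exact duplicate
    {"prompt": "p2", "output": '{"a": 1'},   # unparseable JSON
]
print(len(clean_json_examples(raw)))  # → 1
```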

Dataset license: MIT. The datasets are used and distributed under the terms of the MIT License.
Compliance: users must comply with the MIT License (including its copyright-notice requirement) and with the base model's original terms of use.