tropico0313 committed
Commit 7bb07f0 · verified · 1 Parent(s): e8f873c

Upload LoRA adapter (README written by author)

Files changed (3):
  1. README.md +32 -16
  2. adapter_config.json +5 -5
  3. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -10,33 +10,51 @@ pipeline_tag: text-generation
tags:
- qlora
- lora
+ - unsloth
- structured-output
+ - structeval
---

- qwen3-4b-structured-output-lora
+ # qwen3-4b-structured-output-lora

This repository provides a **LoRA adapter** fine-tuned from
- **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
+ **Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit) with Unsloth**.

- This repository contains **LoRA adapter weights only**.
+ ⚠️ This repository contains **LoRA adapter weights only**.
The base model must be loaded separately.

## Training Objective

- This adapter is trained to improve **structured output accuracy**
+ This adapter is trained to improve structured output accuracy
(JSON / YAML / XML / TOML / CSV).

- Loss is applied only to the final assistant output,
- while intermediate reasoning (Chain-of-Thought) is masked.
+ Loss is applied only to the final assistant output (**assistant-only loss**).
+
+ Chain-of-Thought masking: Enabled
+ Learning mode: after_marker
+
+ ## Data Preprocessing
+
+ Rule-based normalization was applied before training:
+
+ - Extracting content after output markers
+ - Removing code fences (```json / ```yaml / ```xml / ```toml)
+ - Removing leading boilerplate and trailing notes
+ - Recursive JSON exact-match deduplication
+
+ Dedupe enabled: Yes

## Training Configuration

- Base model: Qwen/Qwen3-4B-Instruct-2507
- - Method: QLoRA (4-bit)
+ - Method: QLoRA (4-bit) + Unsloth
- Max sequence length: 1024
- Epochs: 1
- Learning rate: 3e-05
- - LoRA: r=48, alpha=96
+ - Warmup ratio: 0.06
+ - Weight decay: 0.02
+ - LoRA: r=48, alpha=96, dropout=0.06
+ - Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj

## Usage

@@ -46,7 +64,7 @@ from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
- adapter = "your_id/your-repo"
+ adapter = "tropico0313/my-lora-test"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
@@ -55,11 +73,9 @@ model = AutoModelForCausalLM.from_pretrained(
device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
- ```
-
- ## Sources & Terms (IMPORTANT)
-
- Training data: u-10bei/structured_data_with_cot_dataset_512_v2

- Dataset License: MIT License. This dataset is used and distributed under the terms of the MIT License.
- Compliance: Users must comply with the MIT license (including copyright notice) and the base model's original terms of use.
+ Sources & Terms (IMPORTANT)
+ Training dataset: u-10bei/structured_data_with_cot_dataset_512_v2
+ Dataset License: MIT License.
+ Users must comply with the MIT license (including copyright notice)
+ and the base model's original terms of use.
 
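The "assistant-only loss" with Chain-of-Thought masking described in the new Training Objective section is typically implemented by setting the labels of all prompt and reasoning tokens to -100 so that cross-entropy ignores them. A minimal sketch of the `after_marker` mode, assuming the final answer follows a tokenized output marker; the marker and helper name are illustrative, not the author's actual training code:

```python
import torch

IGNORE_INDEX = -100  # labels set to -100 are skipped by the cross-entropy loss

def mask_labels_after_marker(input_ids: torch.Tensor,
                             marker_ids: list) -> torch.Tensor:
    """Mask everything up to and including the last occurrence of the
    output marker, so loss falls only on the final assistant answer."""
    labels = input_ids.clone()
    ids = input_ids.tolist()
    m = len(marker_ids)
    last = -1
    for i in range(len(ids) - m + 1):
        if ids[i:i + m] == marker_ids:
            last = i
    if last >= 0:
        labels[:last + m] = IGNORE_INDEX  # prompt + CoT + marker: no loss
    return labels
```

If no marker is found, this sketch leaves the sequence fully supervised; a real pipeline would more likely filter such samples out.
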
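The rule-based normalization listed under Data Preprocessing maps naturally onto a small cleanup function. A sketch assuming targets carry an explicit output marker and at most one surrounding code fence; the marker string and regex are illustrative, not taken from the dataset:

```python
import re

# optional language tag (json/yaml/xml/toml) after the opening fence
FENCE_RE = re.compile(r"```(?:json|yaml|xml|toml)?\s*\n(.*?)\n?```", re.DOTALL)

def normalize_target(text: str, marker: str = "### Output") -> str:
    # 1. keep only the content after the last output marker, if present
    if marker in text:
        text = text.rsplit(marker, 1)[1]
    # 2. unwrap a fenced code block, if the answer sits inside one
    m = FENCE_RE.search(text)
    if m:
        text = m.group(1)
    # 3. "leading boilerplate / trailing notes" reduced here to a trim
    return text.strip()
```
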
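For the "recursive JSON exact-match deduplication" step, serializing parsed JSON with sorted keys yields a canonical form under which structurally identical nested objects collapse. A sketch over string targets, again an assumption rather than the author's code:

```python
import json

def dedupe_json_exact(targets: list) -> list:
    """Keep the first occurrence of each target; nested (recursive) JSON
    is canonicalized by key-sorting before comparison."""
    seen, kept = set(), []
    for t in targets:
        try:
            key = json.dumps(json.loads(t), sort_keys=True)
        except ValueError:  # non-JSON targets (YAML/XML/TOML/CSV): verbatim
            key = t
        if key not in seen:
            seen.add(key)
            kept.append(t)
    return kept
```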
 
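The values under Training Configuration, together with the target_modules list in adapter_config.json below, correspond to a PEFT configuration along these lines (a reconstruction from the listed hyperparameters; the actual run used Unsloth):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=48,                # LoRA rank
    lora_alpha=96,       # scaling; alpha = 2 * r here
    lora_dropout=0.06,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
    task_type="CAUSAL_LM",
)
```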
 
 
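The two Usage hunks show only fragments of the snippet (the opening fence, the remaining imports, and the dtype argument fall outside the diff context). A complete, runnable version against the standard transformers + peft APIs; the dtype and the example prompt are assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "tropico0313/my-lora-test"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.bfloat16,  # assumed; this line is elided in the diff
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)

# hypothetical prompt exercising the structured-output task
messages = [{"role": "user",
             "content": "Convert to JSON with keys name and age: Alice, 30."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```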
 
adapter_config.json CHANGED
@@ -33,13 +33,13 @@
"rank_pattern": {},
"revision": null,
"target_modules": [
- "gate_proj",
- "v_proj",
"k_proj",
- "q_proj",
- "up_proj",
"o_proj",
- "down_proj"
+ "gate_proj",
+ "down_proj",
+ "q_proj",
+ "v_proj",
+ "up_proj"
],
"target_parameters": null,
"task_type": "CAUSAL_LM",
 
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
- oid sha256:577a4ee44834ef2178475819f5e1648eae5c9eecaf92799b7d82c1e473ab22f9
+ oid sha256:8508d63599150669c60e6d2767b972bf0657a0a474494331767cdcee3f52734d
size 396429608
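The LFS pointer swaps the sha256 oid while the byte size stays identical, which is consistent with retrained weights of the same tensor shapes. A downloaded file can be checked against the recorded oid with plain hashlib (local path assumed):

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the sha256 digest that a Git LFS pointer stores as `oid`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            h.update(block)
    return h.hexdigest()

# expected: 8508d63599150669c60e6d2767b972bf0657a0a474494331767cdcee3f52734d
print(lfs_sha256("adapter_model.safetensors"))
```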