---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- u-10bei/structured_data_with_cot_dataset_512_v2
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---

# final_asignment

This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, via Unsloth)**.

It contains **LoRA adapter weights only**; the base model must be
loaded separately.

## Training Objective

This adapter is trained to improve **structured output accuracy**
(JSON / YAML / XML / TOML / CSV).

Loss is applied only to the final assistant output,
while intermediate reasoning (Chain-of-Thought) is masked.
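
As a rough illustration of this masking scheme (a minimal sketch; the exact token boundaries, tag names, and helper below are assumptions, since the training script is not published with this adapter), tokens outside the final answer span are assigned the label `-100` so they are excluded from the cross-entropy loss:

```python
# Sketch of completion-only loss masking (illustrative only; the
# actual training code for this adapter is not included here).
import torch

IGNORE_INDEX = -100  # labels with this value are ignored by the loss


def mask_labels(input_ids: torch.Tensor, answer_start: int) -> torch.Tensor:
    """Copy input_ids into labels, but ignore everything before the
    final assistant answer (i.e. the prompt and the CoT reasoning)."""
    labels = input_ids.clone()
    labels[:answer_start] = IGNORE_INDEX
    return labels


# Example: a 10-token sequence where the final answer starts at token 7.
ids = torch.arange(10)
print(mask_labels(ids, answer_start=7))
# tensor([-100, -100, -100, -100, -100, -100, -100,    7,    8,    9])
```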

## Training Configuration

- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 1024
- Epochs: 2
- Learning rate: 1e-05
- LoRA: r=128, alpha=128
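
These hyperparameters map roughly onto a `peft`/`bitsandbytes` setup as follows. This is a sketch only: the `target_modules` list and the 4-bit quantization details are assumptions, not taken from the actual training code.

```python
# Illustrative QLoRA configuration mirroring the values listed above.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",             # assumed quantization type
    bnb_4bit_compute_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=128,                                 # LoRA rank
    lora_alpha=128,                        # LoRA scaling factor
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

# Trainer-level settings from the list above: 2 epochs, learning rate
# 1e-05, and a maximum sequence length of 1024.
```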

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "marukame332/final_asignment"

# Load the tokenizer and base model first; this repository contains
# only the LoRA adapter weights, not the base model.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter on top of the base model.
model = PeftModel.from_pretrained(model, adapter)
```
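
Continuing from the snippet above, a minimal generation example using the standard chat template (the prompt here is purely illustrative):

```python
# Illustrative inference call; the prompt is just an example.
messages = [
    {"role": "user",
     "content": 'Return a JSON object with keys "name" and "age" for: Alice, 30.'}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```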

## Sources & Terms (IMPORTANT)

- Training data: u-10bei/structured_data_with_cot_dataset_512_v2
- Dataset license: MIT. The dataset is used and redistributed under the terms of the MIT License.
- Compliance: users must comply with the MIT License (including retention of the copyright notice) and with the base model's original terms of use.