---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- u-10bei/dpo-dataset-qwen-cot
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
tags:
- dpo
- unsloth
- qwen
- alignment
---

# lora-structeval-sft-0205-merged-v2

This model is a fine-tuned version of **Qwen/Qwen3-4B-Instruct-2507** using **Direct Preference Optimization (DPO)** via the **Unsloth** library.

This repository contains the DPO LoRA adapter **merged into the base model**, so it can be loaded directly with `transformers`; no separate PEFT adapter step is required.

## Training Objective
This model has been optimized using DPO to align its responses with preferred outputs, focusing on improving reasoning (Chain-of-Thought) and structured response quality based on the provided preference dataset.
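To make the objective concrete: for each preference pair, DPO penalizes the policy when its log-probability margin between the chosen and rejected response does not exceed the reference model's margin, scaled by beta (0.05 here). The following is a minimal plain-Python illustration of the per-example loss, not the training code used for this model; the log-probability values are made up for demonstration:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.05):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Positive margin (policy prefers the chosen response more strongly than
# the reference does) drives the loss below log(2) ~ 0.693.
print(dpo_loss(-10.0, -30.0, -12.0, -25.0))  # ~ 0.533
```

With a margin of exactly zero the loss equals log(2); a small beta such as 0.05 keeps the policy's implicit reward close to the reference model.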

## Training Configuration
- **Base model**: Qwen/Qwen3-4B-Instruct-2507
- **Method**: DPO (Direct Preference Optimization)
- **Epochs**: 1
- **Learning rate**: 5e-08
- **Beta**: 0.05
- **Max sequence length**: 1024
- **LoRA Config**: r=8, alpha=16 (merged into base)
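The hyperparameters above roughly correspond to a TRL `DPOConfig`. The sketch below is a hypothetical reconstruction of that configuration (the actual Unsloth training script is not included in this repository); `output_dir` and the commented-out variables are placeholders:

```python
# Hypothetical reconstruction of the training configuration listed above,
# using TRL's DPOConfig / DPOTrainer. Not the script that produced this model.
from trl import DPOConfig, DPOTrainer

config = DPOConfig(
    num_train_epochs=1,
    learning_rate=5e-8,
    beta=0.05,
    max_length=1024,
    output_dir="dpo-output",  # placeholder path
)

# trainer = DPOTrainer(model=model, args=config,
#                      train_dataset=dataset, processing_class=tokenizer)
# trainer.train()
```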

## Usage
Since this is a merged model, you can use it directly with `transformers`.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "lora-structeval-sft-0205-merged-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Test inference
prompt = "Your question here"

messages = [
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(
    text,
    return_tensors="pt",
).to(model.device)  # follow device_map placement instead of hardcoding "cuda"

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Sources & License (IMPORTANT)

* **Training Data**: [u-10bei/dpo-dataset-qwen-cot](https://huggingface.co/datasets/u-10bei/dpo-dataset-qwen-cot)
* **Dataset License**: MIT License (per the dataset's own terms).
* **Model License**: Apache-2.0; users must also comply with the license terms of the base model, Qwen/Qwen3-4B-Instruct-2507.