qwen3-4b-sft_dpo-qwen-cot-merged
This model is a fine-tuned version of duong942001/SFTmodel using Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains the full merged model (the LoRA adapter has been merged into the base weights), not a standalone adapter.
Training Objective
This model has been optimized using DPO to align its responses with preferred outputs, focusing on improving reasoning (Chain-of-Thought) and structured response quality based on the provided preference dataset.
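To make the objective concrete, below is a minimal, illustrative sketch of the per-example DPO loss, not the training code used for this model. The log-probabilities and `beta` value are hypothetical inputs.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(x)) == log(1 + exp(-x)), computed stably with log1p
    return math.log1p(math.exp(-logits))

# When the policy and reference agree, the loss sits at log 2 (~0.693);
# it drops as the policy favors the chosen response more than the reference does.
baseline = dpo_loss(-1.0, -1.0, -1.0, -1.0)
improved = dpo_loss(-0.5, -2.0, -1.0, -1.0)
```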
Training Configuration
- Base model: duong942001/SFTmodel
- Method: DPO (Direct Preference Optimization)
- Epochs:
- Learning rate:
- Beta:
- Max sequence length:
- LoRA config: r=8, alpha=16 (merged into base)
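For reference, here is a hedged sketch of what a training run with this configuration may look like, assuming Unsloth's `FastLanguageModel` API together with TRL's `DPOTrainer`. The hyperparameter values shown for the fields left blank above (epochs, learning rate, beta, max sequence length) are illustrative placeholders, not the values actually used.

```python
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

# Load the SFT checkpoint and attach a LoRA adapter (r=8, alpha=16 per the card)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="duong942001/SFTmodel",
    max_seq_length=2048,  # illustrative; the card leaves this blank
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(model, r=8, lora_alpha=16)

dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,            # illustrative placeholder
        learning_rate=5e-6,  # illustrative placeholder
        num_train_epochs=1,  # illustrative placeholder
        output_dir="outputs",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()

# Merge the LoRA adapter into the base weights before uploading
model.save_pretrained_merged("merged", tokenizer, save_method="merged_16bit")
```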
Usage
Since this is a merged model, you can use it directly with transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your_id/your-repo-name"  # replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Test inference
prompt = "Your question here"
# apply_chat_template with return_tensors="pt" returns a tensor of token ids,
# so pass it to generate positionally rather than unpacking it with **
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
Sources & License (IMPORTANT)
- Training Data: [u-10bei/dpo-dataset-qwen-cot]
- License: MIT (per the dataset's terms).
- Compliance: Users must also comply with the original base model's license terms.
Model tree for duong942001/Qwen_DPO_SFTmodel
- Base model: duong942001/SFTmodel