# Qwen3-4B-Instruct-DPO-Merged-200step
This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507, trained with Direct Preference Optimization (DPO) via the Unsloth library.
The model has been merged to 16-bit weights, so you can use it directly with the transformers library without loading any additional adapters.
## Training Details
- Method: DPO (Direct Preference Optimization)
- Base Model: Qwen/Qwen3-4B-Instruct-2507
- Dataset: u-10bei/dpo-dataset-qwen-cot (Chain-of-Thought)
- Framework: Unsloth (see the training sketch below)
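The exact training script is not published with this card. As a rough guide, a 200-step DPO run like this one can be set up with Unsloth and a recent version of TRL roughly as sketched below. The hyperparameters (LoRA rank, beta, learning rate, batch sizes) are illustrative assumptions, not the values used for this checkpoint, and the sketch assumes the dataset provides the `prompt`/`chosen`/`rejected` columns that TRL's `DPOTrainer` expects.

```python
# Minimal sketch of the training setup, NOT the exact script used for this
# checkpoint. Hyperparameters below are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import DPOConfig, DPOTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    "Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit base weights for memory-efficient LoRA training
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # LoRA rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Assumes the dataset exposes the "prompt", "chosen", and "rejected"
# columns that DPOTrainer expects.
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,       # DPO regularization strength (assumed)
        max_steps=200,  # matches the "200step" in the model name
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=5e-6,
        output_dir="outputs",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()

# Merge the LoRA adapters into 16-bit weights, as published in this repo.
model.save_pretrained_merged("Qwen3-4B-DPO-Merged", tokenizer,
                             save_method="merged_16bit")
```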
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "takenoko888/Qwen3-4B-DPO-Merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "Use chain-of-thought reasoning to solve the following problem.\nThere are 3 apples..."
messages = [{"role": "user", "content": prompt}]

# return_dict=True is needed so that apply_chat_template returns a dict of
# tensors that can be unpacked into model.generate() via **inputs.
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
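Greedy decoding can produce repetitive chain-of-thought outputs. The sampling settings recommended for the Qwen3-2507 instruct models are a reasonable starting point; whether they carry over to this fine-tune is an assumption, as the card does not state recommended values:

```python
# Sampling settings recommended for the Qwen3-Instruct-2507 base model
# (assumed, not confirmed by this card, to suit this fine-tune as well).
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
```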