Preference dataset: u-10bei/dpo-dataset-qwen-cot
How to use taba0207/qwen3-4b-sft-dpo-structeval with PEFT:
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the 4-bit Unsloth base model, then attach this repo as a PEFT adapter.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit")
model = PeftModel.from_pretrained(base_model, "taba0207/qwen3-4b-sft-dpo-structeval")
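Note that the model card below states this repository ships fully merged 16-bit weights, so the PEFT path above only applies if you are working with a separate adapter checkpoint. In that case, PEFT's standard merge_and_unload() can fold the adapter into the base model; a minimal sketch (the save path is an illustrative placeholder):

# Optional: merge a loaded LoRA adapter into the base weights so the result
# can be saved and served as a standalone model.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("qwen3-4b-sft-dpo-structeval-merged")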
How to use taba0207/qwen3-4b-sft-dpo-structeval with Unsloth Studio:

Linux / macOS:

curl -fsSL https://unsloth.ai/install.sh | sh
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for taba0207/qwen3-4b-sft-dpo-structeval to start chatting
Windows (PowerShell):

irm https://unsloth.ai/install.ps1 | iex
# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for taba0207/qwen3-4b-sft-dpo-structeval to start chatting
In the browser (no setup required):

# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for taba0207/qwen3-4b-sft-dpo-structeval to start chatting
How to use taba0207/qwen3-4b-sft-dpo-structeval with Unsloth:

pip install unsloth
from unsloth import FastModel
model, tokenizer = FastModel.from_pretrained(
model_name="taba0207/qwen3-4b-sft-dpo-structeval",
max_seq_length=2048,
)

This model is a fine-tuned version of taba0207/qwen3-4b-structeval-u10bei512v5-v2 using Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains the fully merged 16-bit weights; no adapter loading is required.
This model has been optimized with DPO to align its responses with preferred outputs, focusing on improved reasoning (Chain-of-Thought) and structured response quality, based on the provided preference dataset.
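For reference, the preference-tuning step can be sketched with trl's DPOTrainer. This is a hedged illustration only: the card says training was done via Unsloth, and the dataset columns, beta, and output directory below are assumptions rather than the actual recipe.

# Minimal DPO training sketch (assumption: trl's DPOTrainer stands in for the
# Unsloth setup; the preference dataset is assumed to provide the standard
# "prompt"/"chosen"/"rejected" columns).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "taba0207/qwen3-4b-structeval-u10bei512v5-v2"  # SFT base listed below
model = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

train_dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,                                          # policy to optimize
    args=DPOConfig(output_dir="qwen3-4b-dpo", beta=0.1),  # beta is assumed
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()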
Since this is a merged model, you can use it directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "taba0207/qwen3-4b-sft-dpo-structeval"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto"
)
# Test inference
prompt = "Your question here"
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,  # return a dict so **inputs works with model.generate
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
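To print only the model's reply rather than the echoed prompt plus completion, slice off the prompt tokens before decoding; a small sketch:

# Decode only the newly generated tokens, skipping the echoed prompt.
reply_ids = outputs[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))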
Base model: taba0207/qwen3-4b-structeval-u10bei512v5-v2