qwen3-4b-sft-dpo-structeval_v2

This model is a fine-tuned version of taba0207/qwen3-4b-structeval-u10bei512v5-v2 using Direct Preference Optimization (DPO) via the Unsloth library.

This repository contains the fully merged 16-bit weights; no adapter loading is required.
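
For reference, a merge like this is typically produced by folding a LoRA adapter back into the base weights with peft's merge_and_unload. The sketch below is illustrative only: the adapter path is a placeholder, and this repo already ships the merged result.

from peft import PeftModel
from transformers import AutoModelForCausalLM
import torch

# Load the base model in 16-bit
base = AutoModelForCausalLM.from_pretrained(
    "taba0207/qwen3-4b-structeval-u10bei512v5-v2",
    torch_dtype=torch.float16,
)
# "path/to/dpo-lora-adapter" is a placeholder adapter path, not a real repo
model = PeftModel.from_pretrained(base, "path/to/dpo-lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("merged-16bit")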

Training Objective

This model was optimized with DPO to align its responses with preferred outputs, focusing on improving chain-of-thought reasoning and the quality of structured responses, based on the preference dataset listed under Sources & License.
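
For context, DPO minimizes the standard preference loss of Rafailov et al. (2023), where β is the value listed under Training Configuration, π_θ is the policy being trained, π_ref is the frozen base model, and (y_w, y_l) are the chosen and rejected responses:

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$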

Training Configuration

  • Base model: taba0207/qwen3-4b-structeval-u10bei512v5-v2
  • Method: DPO (Direct Preference Optimization)
  • Epochs: 2
  • Learning rate: 5e-07
  • Beta: 0.05
  • Max sequence length: 1024
  • LoRA Config: r=8, alpha=16 (merged into base)
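
As a rough reference, the sketch below shows how these hyperparameters map onto a standard trl DPOTrainer run. It is an assumption-laden reconstruction, not the exact training script: the card states training used Unsloth (which wraps a similar API), argument names vary across trl versions, and the dataset column layout is assumed.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig
from trl import DPOConfig, DPOTrainer

model_id = "taba0207/qwen3-4b-structeval-u10bei512v5-v2"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Preference dataset named under "Sources & License"; assumed to use the
# prompt/chosen/rejected format that DPOTrainer expects.
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

peft_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
args = DPOConfig(
    output_dir="dpo-out",
    num_train_epochs=2,
    learning_rate=5e-7,
    beta=0.05,        # preference-loss temperature from the card
    max_length=1024,  # max sequence length from the card
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl versions take tokenizer= instead
    peft_config=peft_config,
)
trainer.train()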

Usage

Since this is a merged model, you can use it directly with transformers.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your_id/your-repo-name"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Test inference
prompt = "Your question here"
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
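
Note that outputs[0] contains the prompt tokens followed by the generated continuation. To print only the model's reply, slice off the prompt:

# Decode only the newly generated tokens
reply = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(reply)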

Sources & License (IMPORTANT)

  • Training Data: u-10bei/dpo-dataset-qwen-cot
  • License: MIT (per the dataset's terms)
  • Compliance: users must also comply with the original base model's license terms

Framework versions

  • PEFT 0.13.2