# adv_sft_dpo_final_2_merged
This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507, trained with Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains the fully merged 16-bit weights; no adapter loading is required.
## Training Objective
This model was optimized with DPO to align its responses with preferred outputs, focusing on improving chain-of-thought reasoning and the quality of structured responses, based on the preference dataset listed under Sources & License below.
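For reference, DPO minimizes the standard preference loss from Rafailov et al. (2023), where $\pi_\theta$ is the policy being trained, $\pi_{\text{ref}}$ is the frozen reference model, $(x, y_w, y_l)$ is a prompt paired with a chosen and a rejected response, and $\beta = 0.5$ as configured below:

$$
\mathcal{L}_{\text{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$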
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: DPO (Direct Preference Optimization)
- Epochs: 2
- Learning rate: 5e-07
- Beta: 0.5
- Max sequence length: 4096
- LoRA config: r=8, alpha=16 (adapters merged into the base model; see the sketch below)
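
The run can be reproduced roughly as follows. This is a minimal sketch of the Unsloth + TRL DPO workflow under the hyperparameters listed above; the dataset column layout (`prompt`/`chosen`/`rejected`) is assumed rather than confirmed by this card, and exact `DPOTrainer` argument names vary between trl versions.

```python
# Minimal sketch of the Unsloth + TRL DPO setup under the hyperparameters
# above. The "prompt"/"chosen"/"rejected" column layout is the standard
# DPOTrainer format and is assumed here, not confirmed by this card.
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

# Load the base model with Unsloth (4-bit loading keeps LoRA training cheap).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters matching the configuration above (r=8, alpha=16).
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Hi-Satoh/test_dpo_dataset", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.5,
        learning_rate=5e-7,
        num_train_epochs=2,
        max_length=4096,
        output_dir="outputs",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl releases use tokenizer= instead
)
trainer.train()

# Merge the adapters and save 16-bit weights, as published in this repository.
model.save_pretrained_merged("adv_sft_dpo_final_2_merged", tokenizer,
                             save_method="merged_16bit")
```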
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Hi-Satoh/adv_sft_dpo_final_2_merged"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repository ships merged 16-bit weights
    device_map="auto",
)
```
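
A short generation example using the tokenizer's chat template (the prompt is illustrative):

```python
# Hypothetical prompt; apply_chat_template is the standard transformers API.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```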
## Sources & License (IMPORTANT)
- Training data: [Hi-Satoh/test_dpo_dataset](https://huggingface.co/datasets/Hi-Satoh/test_dpo_dataset)
- License: MIT License (per the dataset's terms).
- Compliance: users must also follow the license terms of the original base model.