qwen3-4b-dpo-qwen-cot-merged
This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains the fully merged 16-bit weights; no adapter loading is required.
Training Objective
The model was optimized with DPO to align its responses with preferred outputs, with a focus on improving Chain-of-Thought reasoning and structured response quality, using the preference dataset listed under Sources below.
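For reference, DPO minimizes the following pairwise preference loss (Rafailov et al., 2023), where beta is the trade-off parameter listed in the configuration below, pi_theta is the policy being trained, pi_ref is the frozen reference model, and y_w / y_l are the chosen and rejected responses:

\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right]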
Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: DPO (Direct Preference Optimization)
- Epochs: 2
- Learning rate: 1e-06
- Beta: 0.05
- Max sequence length: 4096
- LoRA Config: r=8, alpha=16 (merged into the base model; see the training sketch below)
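The exact training script is not published; the following is a minimal sketch of how the hyperparameters above map onto an Unsloth + TRL DPO run. The dataset split and column format (prompt/chosen/rejected), and the target modules left at Unsloth's defaults, are assumptions; exact argument names vary across library versions.

from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer
from datasets import load_dataset

# Load the base model with Unsloth (4096 matches the max sequence length above).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=4096,
)

# Attach LoRA adapters with the r/alpha values from the configuration above.
model = FastLanguageModel.get_peft_model(model, r=8, lora_alpha=16)

# Preference data; split name and prompt/chosen/rejected columns are assumed.
dataset = load_dataset("Hi-Satoh/test_dpo_dataset", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.05,            # DPO temperature from the configuration above
        learning_rate=1e-6,
        num_train_epochs=2,
        output_dir="outputs",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL versions
)
trainer.train()

# Merge the LoRA weights into the base model and save 16-bit weights.
model.save_pretrained_merged("merged", tokenizer, save_method="merged_16bit")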
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Hi-Satoh/adv_MoE_ALF_sft3_merged"

# Load the tokenizer and the fully merged 16-bit model (no adapter needed).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
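A minimal generation example using the tokenizer's chat template; the prompt here is only illustrative:

messages = [{"role": "user", "content": "Explain the Pythagorean theorem step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))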
Sources & License (IMPORTANT)
- Training Data: Hi-Satoh/test_dpo_dataset
- License: MIT (per the dataset's terms).
- Compliance: Users must follow the original base model's license terms.