[Assignment] sft-base2-dpo-qwen-cot-merged
This model is a fine-tuned version of Qwen/Qwen3-4B-Instruct-2507 using Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains the fully merged 16-bit weights; no adapter loading is required.
Training Objective
The model was fine-tuned through a multi-stage process: SFT (Supervised Fine-Tuning) on Qwen/Qwen3-4B-Instruct-2507, followed by DPO (Direct Preference Optimization) to enhance its reasoning and alignment capabilities. The resulting LoRA (Low-Rank Adaptation) adapter has been merged into the base weights.
Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- SFT Phase: First, the base model was trained on the [Dataset Name/Description] to acquire domain-specific knowledge. (See: Hi-Satoh/sft-base4-qwen-adapter)
- DPO Phase (a reproduction sketch follows this list)
  - Method: DPO (Direct Preference Optimization)
  - Epochs: 1
  - Learning rate: 1e-7
  - Beta: 0.1
  - Max sequence length: 2048
  - LoRA config: r=8, alpha=16 (merged into the base model)
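For reference, below is a minimal sketch of how this DPO phase could be reproduced with Unsloth and TRL's DPOTrainer using the hyperparameters listed above. The target modules, the 4-bit loading, and the dataset's prompt/chosen/rejected column layout are assumptions for illustration, not confirmed details of the original run.

from datasets import load_dataset
from trl import DPOConfig, DPOTrainer
from unsloth import FastLanguageModel

# Load the base model (the original run continued from the SFT adapter
# Hi-Satoh/sft-base4-qwen-adapter applied on top of this base).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B-Instruct-2507",
    max_seq_length=2048,
    load_in_4bit=True,  # assumption; reduces memory during training
)

# LoRA configuration matching the card: r=8, alpha=16.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumed
)

# Preference data; DPOTrainer expects prompt/chosen/rejected columns
# (assumed layout for this dataset).
dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        beta=0.1,
        learning_rate=1e-7,
        num_train_epochs=1,
        max_length=2048,
        output_dir="dpo-output",
    ),
    train_dataset=dataset,
    processing_class=tokenizer,  # recent TRL versions; older ones use tokenizer=
)
trainer.train()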
Usage
Since this is a merged model, you can use it directly with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "your_id/your-repo-name"  # replace with this repository's ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Test inference
prompt = "Your question here"
# return_dict=True makes apply_chat_template return a mapping that can be
# unpacked into generate(); model.device follows the device_map placement.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
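For completeness, here is a sketch of how merged 16-bit weights like these are typically exported after an Unsloth LoRA run. It continues from the training sketch in the Training Configuration section (where model and tokenizer are the Unsloth-patched objects); the output directory name is a placeholder, and whether the original workflow used exactly this helper is an assumption.

# Merge the trained LoRA adapter into the base weights and save them as
# 16-bit safetensors, producing a repo that loads without adapter code.
model.save_pretrained_merged(
    "sft-dpo-merged",  # local output directory (placeholder)
    tokenizer,
    save_method="merged_16bit",
)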
Sources & License (IMPORTANT)
- Training data: u-10bei/dpo-dataset-qwen-cot
- License: MIT License (per the dataset's terms).
- Compliance: Users must also comply with the original base model's license terms.