LLM2026_DPO_finalv5
This model is a fine-tuned version of unsloth/Qwen2.5-7B-Instruct-bnb-4bit, trained with Direct Preference Optimization (DPO) via the Unsloth library.
This repository contains LoRA adapter weights only. The base model must be loaded separately.
Training Objective
The adapter was trained with Direct Preference Optimization (DPO), which fine-tunes the policy directly on pairs of preferred (chosen) and dispreferred (rejected) responses, using the frozen base model as the reference policy instead of training a separate reward model.
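For reference, the standard DPO objective (Rafailov et al., 2023; restated here from the literature, not taken from this repository's code) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
    \log\sigma\!\Big(
      \beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
      -\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
    \Big)
  \right]
```

where \(y_w\) and \(y_l\) are the chosen and rejected responses, \(\pi_{\mathrm{ref}}\) is the frozen base model, and \(\beta\) (0.05 in this run, per the configuration below) controls how strongly the policy is kept close to the reference.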
Training Configuration
- Base model: unsloth/Qwen2.5-7B-Instruct-bnb-4bit
- Method: DPO (Direct Preference Optimization)
- Epochs: 1
- Learning rate: 1e-06
- Beta: 0.05
- Max sequence length: 1024
- LoRA config: r=64, alpha=128
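The training script itself is not included in this repository. As a rough illustration only, the sketch below shows how the hyperparameters above could map onto an Unsloth + TRL DPOTrainer run; the dataset split, target_modules, and output_dir are assumptions, not values confirmed by this card.

```python
from datasets import load_dataset
from unsloth import FastLanguageModel
from trl import DPOConfig, DPOTrainer

# Load the 4-bit base model; max_seq_length matches the configuration above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-Instruct-bnb-4bit",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters with r=64, alpha=128
# (target_modules is an assumed, typical choice, not confirmed by this card).
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Preference pairs (chosen/rejected); the "train" split is assumed.
train_dataset = load_dataset("u-10bei/dpo-dataset-qwen-cot", split="train")

args = DPOConfig(
    beta=0.05,              # DPO beta from the configuration above
    learning_rate=1e-6,
    num_train_epochs=1,
    output_dir="outputs",   # placeholder
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```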
Usage
This is a LoRA adapter; load it on top of the base model with the PEFT library:
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

adapter_id = "makotonlo/LLM2026_DPO_finalv5"

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
# AutoPeftModelForCausalLM loads the base model recorded in the adapter config
# (unsloth/Qwen2.5-7B-Instruct-bnb-4bit) and applies the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Test inference
prompt = "Your question here"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
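The adapter was trained against the 4-bit Unsloth build, but because it targets the same module names, it can typically also be applied to, and merged into, the full-precision instruct model. A minimal sketch, assuming you have the memory for fp16 weights; the output directory name is made up, and outputs should be spot-checked since the adapter was trained against quantized base weights:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
import torch

# Full-precision base instead of the bnb-4bit build.
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "makotonlo/LLM2026_DPO_finalv5")

# Optionally fold the LoRA weights into the base for standalone deployment.
model = model.merge_and_unload()
model.save_pretrained("qwen2.5-7b-dpo-merged")  # hypothetical output directory
```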
Sources & License (IMPORTANT)
- Training data: [u-10bei/dpo-dataset-qwen-cot]
- License: MIT License, per the dataset's terms.
- Compliance: users must also follow the base model's license terms.
Model tree for makotonlo/LLM2026_DPO_finalv5
- Base model: Qwen/Qwen2.5-7B
- Fine-tuned: Qwen/Qwen2.5-7B-Instruct
- Quantized: unsloth/Qwen2.5-7B-Instruct-bnb-4bit