---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- daichira/structured-5k-mix-sft
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---

*A note from the author: I started studying AI around age fifty and trained this model through trial and error (^▽^)/ To raise the accuracy (score), I kept tuning the parameters bit by bit!*
|
|
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.

It contains the **LoRA adapter weights only**; the base model must be loaded separately.
|
|
## Training Objective
|
|
This adapter is trained to improve **structured output accuracy**
(JSON / YAML / XML / TOML / CSV).
|
|
Loss is applied only to the final assistant output,
while intermediate reasoning (Chain-of-Thought) is masked.
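
As a rough illustration, completion-only masking amounts to setting the label of every non-answer token to the ignore index. The sketch below assumes a standard Hugging Face-style `(input_ids, labels)` setup; the function and variable names are hypothetical, not the actual training code.

```python
# Sketch of completion-only loss masking (illustrative, not the actual
# training code). Hugging Face-style losses skip positions labeled -100.
IGNORE_INDEX = -100

def build_labels(input_ids: list[int], answer_start: int) -> list[int]:
    """Keep loss only on the final assistant answer: every token before
    `answer_start` (prompt and chain-of-thought) is masked out."""
    labels = list(input_ids)
    labels[:answer_start] = [IGNORE_INDEX] * answer_start
    return labels
```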
|
|
## Training Configuration
|
|
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 512
- Epochs: 2
- Learning rate: 1e-06
- LoRA: r=128, alpha=256
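
For reference, the settings above could map onto a PEFT `LoraConfig` roughly as sketched below; the `target_modules` list and the dropout value are assumptions typical of Qwen-family QLoRA runs, not values confirmed by this card.

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter configuration; only r and
# lora_alpha are taken from the card, the rest are assumed defaults.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.0,
    bias="none",
    task_type="CAUSAL_LM",
)
```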
|
|
## Usage
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "your_id/your-repo"  # replace with this adapter's repo id

# Load the tokenizer and the base model in half precision across
# whatever devices are available.
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter)
```
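
A quick smoke test for structured output could look like the following; the prompt and generation settings are illustrative, not part of the original card.

```python
# Illustrative check: ask for JSON and inspect the reply.
messages = [
    {"role": "user",
     "content": "Return a JSON object with keys 'name' and 'age' for: Alice, 30."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```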
|
|
## Sources & Terms (IMPORTANT)
|
|
Training data: daichira/structured-5k-mix-sft

Dataset license: MIT. The dataset is used and distributed under the terms of the MIT License.
Compliance: users must comply with the MIT License (including retaining the copyright notice) and with the base model's original terms of use.
|
|