---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- daichira/structured-5k-mix-sft
language:
- en
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
---
# Model Card for qwen-final

This model was built for the final assignment of the 松尾研LLM講座2025 (Matsuo Lab LLM Course 2025).
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
It contains **LoRA adapter weights only**; the base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output accuracy**
(JSON / YAML / XML / TOML / CSV).
Loss is applied only to the final assistant output,
while intermediate reasoning (Chain-of-Thought) is masked.
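
A minimal sketch of this kind of loss masking, assuming a standard causal-LM setup (the helper below is illustrative, not the actual training code):

```python
# Illustrative completion-only loss masking (not the actual training code).
# Tokens before the final assistant output are set to -100, the index that
# torch.nn.CrossEntropyLoss ignores, so the prompt and CoT carry no loss.
import torch

def mask_labels(input_ids: torch.Tensor, response_start: int) -> torch.Tensor:
    """response_start: index where the final assistant output begins."""
    labels = input_ids.clone()
    labels[:response_start] = -100  # ignored by the cross-entropy loss
    return labels
```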
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Method: QLoRA (4-bit)
- Max sequence length: 512
- Epochs: 1
- Learning rate: 1e-06
- LoRA: r=64, alpha=128
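
For reference, these values map onto a `peft` `LoraConfig` roughly as follows; `target_modules` is an assumption (typical for Qwen-family models), since the card does not list the adapted projections:

```python
# Approximate reconstruction of the adapter configuration from the list above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    task_type="CAUSAL_LM",
    # Assumed: the card does not state which projections were adapted.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```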
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "your_id/your-repo"

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the base model first; the adapter weights are applied on top of it.
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
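
Once the adapter is attached, generation works as with any chat model; the prompt below is just an illustration:

```python
# Example inference (illustrative prompt; adjust to your task).
messages = [
    {"role": "user", "content": "Return this as JSON: name=Alice, age=30"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```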
## Sources & Terms (IMPORTANT)
- Training data: daichira/structured-5k-mix-sft
- Dataset license: MIT. The dataset is used and distributed under the terms of the MIT License.
- Compliance: users must comply with the MIT License (including preservation of the copyright notice) and with the base model's original terms of use.