---
base_model: Qwen/Qwen3-4B-Instruct-2507
datasets:
- daichira/structured-hard-sft-4k
language:
- en
- ja
license: apache-2.0
library_name: peft
pipeline_tag: text-generation
tags:
- qlora
- lora
- structured-output
- structeval
---
# Qwen3-4B StructEval exp007 - structured-hard-sft-4k
This repository provides a **LoRA adapter** fine-tuned from
**Qwen/Qwen3-4B-Instruct-2507** using **QLoRA (4-bit, Unsloth)**.
**It contains the LoRA adapter weights only**;
the base model must be loaded separately.
## Training Objective
This adapter is trained to improve **structured output accuracy**
across JSON, YAML, XML, TOML, and CSV.
During training, the loss is applied only to the final assistant output;
intermediate reasoning (Chain-of-Thought) is masked out, as sketched below.
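A minimal sketch of this masking, assuming a standard PyTorch cross-entropy setup; the `response_start` index (first token of the final assistant answer) and the helper name are illustrative, not taken from the actual training code:

```python
import torch

IGNORE_INDEX = -100  # PyTorch cross-entropy skips tokens labeled -100

def mask_labels(input_ids: torch.Tensor, response_start: int) -> torch.Tensor:
    """Copy input_ids into labels, ignoring everything before the
    final assistant output (prompt and chain-of-thought)."""
    labels = input_ids.clone()
    labels[:response_start] = IGNORE_INDEX
    return labels
```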
## Training Configuration
- **Experiment ID**: exp007
- **Base model**: Qwen/Qwen3-4B-Instruct-2507
- **Training dataset**: daichira/structured-hard-sft-4k
- **Method**: QLoRA (4-bit)
- **Max sequence length**: 1024
- **Epochs**: 2
- **Learning rate**: 5e-05
- **LoRA parameters**: r=16, alpha=32
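The full training script is not included in this repository. As a rough sketch, the hyperparameters above map onto a plain `peft`/`transformers` setup as follows (the card states training used Unsloth; the NF4 quantization settings and the `output_dir` below are assumptions, not taken from the actual run):

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# QLoRA: load the base model in 4-bit (NF4 settings are assumed here)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA parameters as listed above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    task_type="CAUSAL_LM",
)

# Optimizer schedule as listed above
training_args = TrainingArguments(
    output_dir="exp007",  # placeholder
    learning_rate=5e-5,
    num_train_epochs=2,
)
```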
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base = "Qwen/Qwen3-4B-Instruct-2507"
adapter = "junfukuda/qwen3-structeval-exp007-hard4k"

tokenizer = AutoTokenizer.from_pretrained(base)

# Load the full base model first, then attach the LoRA adapter on top
model = AutoModelForCausalLM.from_pretrained(
    base,
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter)
```
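With the adapter attached, generation works as with any chat model. A short example using the tokenizer's chat template (the prompt is illustrative):

```python
messages = [
    {"role": "user", "content": "Return the record name=Alice, age=30 as a JSON object."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```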
## Sources & Terms (IMPORTANT)
- **Training data**: daichira/structured-hard-sft-4k
- **Dataset license**: The training dataset remains subject to its original license terms; refer to the dataset repository for specifics.
- **Compliance**: Users must comply with both the dataset's license terms and the base model's terms of use.
## Competition Context
This model was developed as part of the StructEval competition, focusing on accurate structured output generation.