language:
  - ja
  - en
license: other
library_name: peft
base_model: Qwen/Qwen3-4B-Instruct-2507
tags:
  - lora
  - peft
  - text-generation
  - structeval
  - qwen
pipeline_tag: text-generation

lora_structeval_t_qwen3_4b_v2_new

Model Description

This repository provides a LoRA adapter for StructEval-related text generation tasks.

  • Developed by: takeyy
  • Model type: LoRA adapter
  • Base model: Qwen/Qwen3-4B-Instruct-2507
  • Languages: Japanese, English

Intended Use

This model was developed for the final competition of the 2025 Advanced Large Language Model course. It is intended for structured text generation and StructEval-style evaluation tasks.

Important Note

This repository contains LoRA adapter weights only. The base model must be loaded separately.

How to Use

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_name = "Qwen/Qwen3-4B-Instruct-2507"
adapter_name = "takeyy/lora_structeval_t_qwen3_4b_v2_new"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base_model, adapter_name)

# Japanese prompt: "Please produce structured output according to the instructions below."
prompt = "以下の指示に従って構造化して出力してください。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))