# yoiko-Qwen2.5-7B-Instruct-lora
yoiko-Qwen2.5-7B-Instruct-lora is a LoRA adapter package for Qwen/Qwen2.5-7B-Instruct.
This package contains the adapter only.
It does not include:
- the Qwen base model
- training data
- internal experiment logs
- Stable Diffusion Forge extension code
## Intended use
This adapter is intended for prompt generation / prompt rewriting workflows, including use inside a Stable Diffusion Forge extension.
Stable Diffusion Forge extension: https://github.com/yoikoarmor/sd-forge-llm-prompt-gen-yoiko
## Base model

Expected base model: `Qwen/Qwen2.5-7B-Instruct`
Users must obtain the base model separately and review the base model's own license and usage terms.
## License
This LoRA adapter package is released under the Apache License 2.0.
This license applies to the adapter package contents in this directory.
The base model is not included here and may have its own separate license.
## Recommended inference conditions
Typical tested setup:
- 4-bit base model loading
- NF4 quantization
- bfloat16 compute dtype
- `device_map="auto"`
- adapter tokenizer / adapter chat template when available
## Files in this package
- adapter_model.safetensors
- adapter_config.json
- chat_template.jinja
- tokenizer.json
- tokenizer_config.json
- README.md
- LICENSE
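The LoRA hyperparameters live in `adapter_config.json`, which uses the standard PEFT fields (`r`, `lora_alpha`, `target_modules`, `base_model_name_or_path`). A minimal sketch for inspecting them; `summarize_adapter_config` is a hypothetical helper written for this example, not part of the package:

```python
import json

def summarize_adapter_config(path: str = "adapter_config.json") -> dict:
    """Return the commonly inspected LoRA fields from a PEFT adapter config."""
    with open(path) as f:
        cfg = json.load(f)
    # These keys are standard in PEFT adapter_config.json files; any that
    # are absent in a given config simply come back as None.
    keys = ("base_model_name_or_path", "r", "lora_alpha", "target_modules")
    return {k: cfg.get(k) for k in keys}
```

Checking `base_model_name_or_path` is a quick way to confirm the adapter really targets `Qwen/Qwen2.5-7B-Instruct` before loading it.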
## Minimal load example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

base_model = "Qwen/Qwen2.5-7B-Instruct"
adapter_path = "yoikoarmor/yoiko-Qwen2.5-7B-Instruct-lora"

# Prefer the adapter's tokenizer so its chat template is used when available.
tokenizer = AutoTokenizer.from_pretrained(adapter_path, trust_remote_code=False)

# 4-bit NF4 quantization with bfloat16 compute, matching the tested setup above.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=False,
)

# Attach the LoRA adapter in inference-only mode.
model = PeftModel.from_pretrained(model, adapter_path, is_trainable=False)
model.eval()
```
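With the model and tokenizer loaded as above, a prompt-rewriting call can be sketched like this. The system prompt text and the `rewrite_prompt` / `build_messages` helpers are illustrative assumptions for this example, not the adapter's actual training prompt or API:

```python
def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a raw prompt in a chat message list for apply_chat_template.

    The system prompt below is an assumption; adjust it to your workflow.
    """
    return [
        {
            "role": "system",
            "content": "Rewrite the user's idea as a detailed Stable Diffusion prompt.",
        },
        {"role": "user", "content": user_prompt},
    ]

def rewrite_prompt(model, tokenizer, user_prompt: str, max_new_tokens: int = 256) -> str:
    """Run one generation pass and return only the newly generated text."""
    text = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
    # Strip the echoed input tokens, keeping only the model's continuation.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

Sampling settings (`temperature`, `top_p`) are starting points, not tested values; prompt quality depends on them, as noted under known limitations.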
## Known limitations
- This is an adapter package, not a standalone model
- Prompt quality still depends on prompting style and generation settings
- Stable Diffusion Forge integration behavior may differ from standalone scripts
## Migration note
Old local experimental naming may still exist in private/local workspaces.
- old experimental name: `fold4_best_eval_adapter`
- public release name: `yoiko-Qwen2.5-7B-Instruct-lora`