Use from the Diffusers library
pip install -U diffusers transformers accelerate
import torch
from diffusers import DiffusionPipeline

# switch to "mps" on Apple devices
pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", dtype=torch.bfloat16, device_map="cuda")
pipe.load_lora_weights("PakNin/chao-hu")

prompt = "巢湖、渔船、渔民撒网"  # Chaohu Lake, fishing boats, fishermen casting nets
image = pipe(prompt).images[0]
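The snippet above hard-codes `device_map="cuda"`, while the comment notes that Apple devices should use `"mps"`. A small helper like the following (hypothetical, not part of the model card) picks the device string at runtime; the boolean arguments stand in for `torch.cuda.is_available()` and `torch.backends.mps.is_available()`, and the result can be passed to `pipe.to(...)`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return the best available device string for the pipeline.

    Pass torch.cuda.is_available() and torch.backends.mps.is_available().
    """
    if cuda_available:
        return "cuda"  # NVIDIA GPU
    if mps_available:
        return "mps"   # Apple silicon (Metal)
    return "cpu"       # slow fallback
```

For example, `pipe.to(pick_device(torch.cuda.is_available(), torch.backends.mps.is_available()))`.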

chao-hu

Model trained with AI Toolkit by Ostris

Example prompts:
- 巢湖、渔船、渔民撒网 (Chaohu Lake, fishing boats, fishermen casting nets)
- 巢湖、渔船、渔民撒网、國畫風 (Chaohu Lake, fishing boats, fishermen casting nets, traditional Chinese painting style)
- 巢湖、渔船、渔民撒网、两岸青山、裕溪河田园山景 (Chaohu Lake, fishing boats, fishermen casting nets, green hills on both banks, pastoral scenery along the Yuxi River)

Trigger words

You should use 巢湖 (Chaohu Lake) to trigger the image generation.
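Since every prompt should contain the trigger word, a tiny helper (hypothetical, for illustration only) can prepend 巢湖 and join the remaining fragments with the CJK enumeration comma used in the example prompts:

```python
TRIGGER = "巢湖"  # "Chaohu Lake" — the trigger word for this LoRA

def build_prompt(*parts: str) -> str:
    """Join prompt fragments with the CJK enumeration comma, trigger word first."""
    return "、".join((TRIGGER, *parts))
```

For example, `build_prompt("渔船", "渔民撒网")` returns `"巢湖、渔船、渔民撒网"`, matching the first example prompt above.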

Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

Use it with the 🧨 diffusers library

from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PakNin/chao-hu', weight_name='chao-hu.safetensors')
image = pipeline('巢湖、渔船、渔民撒网').images[0]
image.save("my_image.png")

For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
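As a sketch of the weighting and fusing options mentioned above (assuming a recent diffusers version; the adapter name and scale values are illustrative, and running this requires a GPU and downloads the base model):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.float16
).to("cuda")

# Load the LoRA under an explicit adapter name so it can be weighted later.
pipeline.load_lora_weights(
    "PakNin/chao-hu", weight_name="chao-hu.safetensors", adapter_name="chao_hu"
)

# Weighting: scale the adapter's influence (1.0 = full strength).
pipeline.set_adapters(["chao_hu"], adapter_weights=[0.8])

# Fusing: bake the scaled LoRA into the base weights for faster inference,
# then drop the separate LoRA layers (the fused weights remain).
pipeline.fuse_lora(lora_scale=0.8)
pipeline.unload_lora_weights()
```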

Model tree for PakNin/chao-hu

Base model: Qwen/Qwen-Image (this model is one of 469 adapters)