Qwen Image DreamBooth LoRA - linoyts/lora_jobs_test_2
Model description
These are linoyts/lora_jobs_test_2 DreamBooth LoRA weights for Qwen/Qwen-Image.
The weights were trained using DreamBooth with the Qwen Image diffusers trainer.
Trigger words
You should use yoda, yarn art style to trigger the image generation.
Download model
Download the *.safetensors LoRA in the Files & versions tab.
Use it with the 🧨 diffusers library
pip install -U diffusers transformers accelerate
>>> import torch
>>> from diffusers import QwenImagePipeline
>>> pipe = QwenImagePipeline.from_pretrained(
...     "Qwen/Qwen-Image",
...     torch_dtype=torch.bfloat16,
... )
>>> pipe.enable_model_cpu_offload()
>>> pipe.load_lora_weights("linoyts/lora_jobs_test_2")
>>> image = pipe("yoda, yarn art style").images[0]
For more details, including weighting, merging, and fusing LoRAs, see the diffusers documentation on loading LoRAs.
Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]
Model tree for linoyts/lora_jobs_test_2
Base model
Qwen/Qwen-Image