Instructions for using Atomik31/CLNLORA with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Diffusers
- Inference
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - Draw Things
  - DiffusionBee

How to use Atomik31/CLNLORA with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Load the FLUX.1-dev base model; switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)

# Apply the CLNLORA LoRA weights on top of the base model
pipe.load_lora_weights("Atomik31/CLNLORA")

prompt = "CLNLORA"
image = pipe(prompt).images[0]
```

CLNLORA
Model description
Trigger words
Include the trigger word CLNLORA in your prompt to activate this LoRA during image generation.
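As a minimal sketch of how the trigger word can be enforced in a prompt-building step (the `with_trigger` helper below is hypothetical, not part of Diffusers or this model):

```python
TRIGGER = "CLNLORA"

def with_trigger(prompt: str, trigger: str = TRIGGER) -> str:
    # Hypothetical helper: prepend the LoRA trigger word when the
    # prompt does not already contain it, leaving it untouched otherwise.
    return prompt if trigger in prompt else f"{trigger}, {prompt}"

print(with_trigger("a watercolor portrait of a fox"))
# prints "CLNLORA, a watercolor portrait of a fox"
```

The resulting string can then be passed as the `prompt` argument in the Diffusers snippet above.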
Download model
Weights for this model are available in Safetensors format.
Download them from the Files & versions tab.
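For scripted downloads, files in a Hugging Face model repo are served under a predictable resolve URL. A sketch of building that URL, assuming a file named lora.safetensors (the actual filename is listed in the Files & versions tab):

```python
def weights_url(repo_id: str, filename: str, revision: str = "main") -> str:
    # Direct download URL pattern Hugging Face uses for repo files:
    # https://huggingface.co/{repo_id}/resolve/{revision}/{filename}
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# "lora.safetensors" is an assumed filename for illustration only.
print(weights_url("Atomik31/CLNLORA", "lora.safetensors"))
# prints "https://huggingface.co/Atomik31/CLNLORA/resolve/main/lora.safetensors"
```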
Training at fal.ai
Training was done using fal.ai/models/fal-ai/flux-lora-general-training.
Downloads last month: 26
Model tree for Atomik31/CLNLORA
Base model: black-forest-labs/FLUX.1-dev