How to use ozocalan/urbanpark with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("ozocalan/urbanpark")

prompt = "a man posing in an urbanpark"
image = pipe(prompt).images[0]
```

urbanpark
Model description
Where everything is made of concrete.
Trigger words
You should include `urbanpark` in your prompt to trigger the image generation.
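As a minimal sketch of this convention (the `build_prompt` helper below is hypothetical, not part of the model or of Diffusers), every prompt passed to the pipeline should contain the trigger word so the LoRA actually influences generation:

```python
TRIGGER = "urbanpark"

def build_prompt(subject: str) -> str:
    # Append the LoRA trigger word so the adapter activates.
    return f"{subject} in an {TRIGGER}"

print(build_prompt("a man posing"))  # prints "a man posing in an urbanpark"
```

The resulting string is what you would pass as `prompt` to the pipeline call shown above.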
Download model
Weights for this model are available in Safetensors format.
Download them from the Files & versions tab.
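The weights can also be fetched programmatically. Hugging Face serves raw repository files at `/<repo>/resolve/<revision>/<filename>`; the sketch below builds such a URL. The weights filename used here is an assumption — check the Files & versions tab for the actual name:

```python
def lora_weights_url(repo_id: str = "ozocalan/urbanpark",
                     filename: str = "lora.safetensors") -> str:
    # Hugging Face exposes raw files at /<repo>/resolve/main/<filename>.
    # `filename` is an assumed name; verify it in the Files & versions tab.
    return f"https://huggingface.co/{repo_id}/resolve/main/{filename}"

print(lora_weights_url())
```

A locally downloaded file path can then be passed to `pipe.load_lora_weights(...)` instead of the repo id.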
Training at fal.ai
Training was done using fal.ai/models/fal-ai/flux-lora-general-training.
Downloads last month: 6
Model tree for ozocalan/urbanpark
Base model: black-forest-labs/FLUX.1-dev