## Use from the Diffusers library

```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "aakashrajaraman/output", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "<new1> photo of traffic"
image = pipe(prompt).images[0]
image.save("traffic.png")
```

# Custom Diffusion - aakashrajaraman/output

These are Custom Diffusion adaptation weights for CompVis/stable-diffusion-v1-4. The weights were trained on the prompt "photo of traffic" using Custom Diffusion. You can find some example images below.

For more details on the training, please follow this link.
