    torch_dtype=torch.float16,
    use_safetensors=True,
).to("cuda")
```

Now pass your prompt to the pipeline. You can also pass a `negative_prompt` to prevent certain words from guiding how an image is generated:

```py
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = load_image(url)

prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"

image = pipeline(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```
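For context on `strength=0.7` above: strength scales how many of the scheduler's `num_inference_steps` are actually applied to the init image, so lower values preserve more of the original. A minimal sketch of that mapping (the helper name is hypothetical; this mirrors the usual img2img step calculation, not an exact diffusers internal):

```python
# Hypothetical helper: how img2img `strength` scales the schedule.
def steps_for_strength(num_inference_steps: int, strength: float) -> int:
    # Clamp so strength >= 1.0 still runs at most the full schedule.
    return min(int(num_inference_steps * strength), num_inference_steps)

print(steps_for_strength(50, 0.5))  # 25 steps: the init image dominates
print(steps_for_strength(50, 1.0))  # 50 steps: behaves like text-to-image
print(steps_for_strength(50, 0.7))  # the setting used above
```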
Unconditional Latent Diffusion |
Overview |
Unconditional Latent Diffusion was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
The abstract of the paper is the following: |
By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. Howev... |
The original codebase can be found here. |
Tips: |
Available Pipelines:

| Pipeline | Tasks | Colab |
|---|---|---|
| pipeline_latent_diffusion_uncond.py | Unconditional Image Generation | - |
Examples: |
LDMPipeline |
class diffusers.LDMPipeline |
( vqvae: VQModel, unet: UNet2DModel, scheduler: DDIMScheduler )
Parameters |
- vqvae (VQModel) — Vector-quantized (VQ) model to encode and decode images to and from latent representations.
- unet (UNet2DModel) — U-Net architecture to denoise the encoded image latents.
- scheduler (SchedulerMixin) — DDIMScheduler is used in combination with unet to denoise the encoded image latents.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) |
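The three components cooperate in a simple loop: the U-Net predicts noise, the scheduler steps the latents, and the VQ-VAE decodes the final latents into an image. A minimal sketch of that data flow with tiny stand-in functions (all `fake_*` names are hypothetical stubs, not diffusers APIs):

```python
# Sketch of the unconditional LDM sampling loop with stub components.
# The real pipeline uses UNet2DModel, DDIMScheduler, and VQModel;
# these stand-ins only illustrate how data moves between them.
import random

def fake_unet(latents, t):
    # Stand-in for the U-Net's noise prediction.
    return [0.1 * x for x in latents]

def fake_scheduler_step(noise_pred, t, latents):
    # Stand-in for DDIMScheduler.step: remove predicted noise.
    return [l - n for l, n in zip(latents, noise_pred)]

def fake_vqvae_decode(latents):
    # Stand-in for VQModel.decode: latents -> image space.
    return latents

def sample(num_inference_steps=5, latent_dim=4, seed=0):
    rng = random.Random(seed)
    latents = [rng.gauss(0, 1) for _ in range(latent_dim)]
    for t in range(num_inference_steps):
        noise_pred = fake_unet(latents, t)
        latents = fake_scheduler_step(noise_pred, t, latents)
    return fake_vqvae_decode(latents)

image = sample()
print(len(image))  # 4 "pixels" from a 4-dim latent
```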
__call__ |
( batch_size: int = 1, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, eta: float = 0.0, num_inference_steps: int = 50, output_type: Optional[str] = 'pil', return_dict: bool = True, **kwargs ) → ImagePipelineOutput or tuple
Parameters |
- batch_size (int, optional, defaults to 1) — Number of images to generate.
- generator (torch.Generator, optional) — One or a list of torch generator(s) to make generation deterministic.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
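The generator argument is what makes runs repeatable: the same seed produces the same latent noise and therefore the same image. The idea can be illustrated with the standard library alone (plain `random.Random` stands in for `torch.Generator` here):

```python
import random

def draw_latents(seed, n=3):
    # Stand-in for seeding a torch.Generator before sampling latents.
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(n)]

a = draw_latents(42)
b = draw_latents(42)
print(a == b)  # True: same seed, same "latents", same image
```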