```py
init_image = load_image(url)
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, guidance_scale=8.0).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

Comparison images (left to right): `guidance_scale = 0.1`, `guidance_scale = 5.0`, `guidance_scale = 10.0`.
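Sweeping `guidance_scale` is an easy way to reproduce a comparison like the one above. A minimal sketch, assuming the `pipeline` and `init_image` objects defined earlier in the guide; the fixed seed is an illustrative addition so that differences come from the guidance strength rather than from the noise.

```py
import torch
from diffusers.utils import make_image_grid

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# generate one image per guidance scale, starting each run from the same noise
images = []
for scale in [0.1, 5.0, 10.0]:
    generator = torch.Generator("cuda").manual_seed(0)
    images.append(pipeline(prompt, image=init_image, guidance_scale=scale, generator=generator).images[0])

make_image_grid(images, rows=1, cols=3)
```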
### Negative prompt

A negative prompt conditions the model to *not* include things in an image, and it can be used to improve image quality or modify an image. For example, you can improve image quality by including negative prompts like "poor details" or "blurry" to encourage the model to generate a higher quality image. Or you can modify an image by specifying things to exclude from it.

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"

# pass prompt and image to pipeline
image = pipeline(prompt, negative_prompt=negative_prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

Comparison images (left to right): `negative_prompt = "ugly, deformed, disfigured, poor details, bad anatomy"`, `negative_prompt = "jungle"`.
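You can also compare negative prompts in a single batched call, since `prompt` and `negative_prompt` accept lists of equal length. A minimal sketch, assuming the `pipeline` and `init_image` from the snippet above, and assuming the single init image is broadcast across the batch; the per-element seeded generators are an illustrative addition so both variants start from the same noise.

```py
import torch
from diffusers.utils import make_image_grid

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
negative_prompts = ["ugly, deformed, disfigured, poor details, bad anatomy", "jungle"]

# one seeded generator per batch element so the comparison is fair
generator = [torch.Generator("cuda").manual_seed(0) for _ in range(2)]

images = pipeline(
    [prompt] * 2,
    negative_prompt=negative_prompts,
    image=init_image,
    generator=generator,
).images
make_image_grid(images, rows=1, cols=2)
```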
## Chained image-to-image pipelines

There are some other interesting ways to use an image-to-image pipeline aside from just generating an image (although that is pretty cool too). You can take it a step further and chain it with other pipelines.

### Text-to-image-to-image

Chaining a text-to-image and an image-to-image pipeline lets you generate an image from text and then use the generated image as the initial image for the image-to-image pipeline. This is useful if you want to generate an image entirely from scratch. For example, let's chain a Stable Diffusion and a Kandinsky model.

Start by generating an image with the text-to-image pipeline:

```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch
from diffusers.utils import make_image_grid

pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

text2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k").images[0]
text2image
```

Now you can pass this generated image to the image-to-image pipeline:
"kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True |
) |
pipeline.enable_model_cpu_offload() |
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed |
pipeline.enable_xformers_memory_efficient_attention() |
image2image = pipeline("Astronaut in a jungle, cold color palette, muted colors, detailed, 8k", image=text2image).images[0] |
make_image_grid([text2image, image2image], rows=1, cols=2)
```
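Each stage of a chain starts from freshly sampled noise, so results differ between runs. To make the whole chain reproducible, you can pass a seeded `torch.Generator` to each call. A minimal sketch; `pipeline_text2image` and `pipeline_image2image` are hypothetical names standing in for the two pipelines loaded above (the guide reuses the name `pipeline` for both), and the seed value is arbitrary.

```py
import torch

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# seed each stage so the chain produces the same images every run
generator = torch.Generator("cuda").manual_seed(31)
text2image = pipeline_text2image(prompt, generator=generator).images[0]
image2image = pipeline_image2image(prompt, image=text2image, generator=generator).images[0]
```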
### Image-to-image-to-image

You can also chain multiple image-to-image pipelines together to create more interesting images. This can be useful for iteratively performing style transfer on an image, generating short GIFs, restoring color to an image, or restoring missing areas of an image.

Start by generating an image:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in latent space and avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.
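To make the skipped decode step concrete, decoding a latent back to pixels goes through the pipeline's VAE. A minimal sketch of that round trip's first half, assuming the `pipeline` and the latent `image` from the snippet above; staying in latent space avoids exactly this work.

```py
import torch

# move the latent to the VAE and undo the scaling applied during encoding
latents = image.unsqueeze(0).to(device=pipeline.vae.device, dtype=pipeline.vae.dtype)
with torch.no_grad():
    decoded = pipeline.vae.decode(latents / pipeline.vae.config.scaling_factor).sample

# postprocess the [-1, 1] tensor into a PIL image
pil_image = pipeline.image_processor.postprocess(decoded)[0]
```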
"ogkalu/Comic-Diffusion", torch_dtype=torch.float16 |
) |
pipeline.enable_model_cpu_offload() |
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed |
pipeline.enable_xformers_memory_efficient_attention() |
# need to include the token "charliebo artstyle" in the prompt to use this checkpoint |
image = pipeline("Astronaut in a jungle, charliebo artstyle", image=image, output_type="latent").images[0] Repeat one more time to generate the final image in a pixel art style: Copied pipeline = AutoPipelineForImage2Image.from_pretrained( |
"kohbanye/pixel-art-style", torch_dtype=torch.float16 |
) |
pipeline.enable_model_cpu_offload() |
# remove following line if xFormers is not installed or you have PyTorch 2.0 or higher installed |
pipeline.enable_xformers_memory_efficient_attention() |
# need to include the token "pixelartstyle" in the prompt to use this checkpoint |
image = pipeline("Astronaut in a jungle, pixelartstyle", image=image).images[0] |
make_image_grid([init_image, image], rows=1, cols=2)
```
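Chaining also works as a loop: repeatedly feeding a pipeline its own output is one way to produce frames for the short GIFs mentioned above. A minimal sketch, assuming the pixel-art `pipeline` and `init_image` from the snippets above; the frame count and the `strength` value are illustrative assumptions.

```py
# repeatedly run the pipeline on its own output to collect animation frames
frames = [init_image]
image = init_image
for _ in range(4):
    image = pipeline("Astronaut in a jungle, pixelartstyle", image=image, strength=0.5).images[0]
    frames.append(image)

# PIL writes the collected frames out as an animated GIF
frames[0].save("astronaut.gif", save_all=True, append_images=frames[1:], duration=500, loop=0)
```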
### Image-to-upscaler-to-super-resolution

Another way to chain your image-to-image pipeline is with an upscaler and a super-resolution pipeline to really increase the level of detail in an image.

Start with an image-to-image pipeline:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image_1 = pipeline(prompt, image=init_image, output_type="latent").images[0]
```

It is important to specify `output_type="latent"` in the pipeline to keep all the outputs in latent space and avoid an unnecessary decode-encode step. This only works if the chained pipelines are using the same VAE.

Chain it to an upscaler pipeline to increase the image resolution:

```py
from diffusers import StableDiffusionLatentUpscalePipeline
```
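The latent output can go straight into a latent upscaler. A minimal sketch of how this stage might continue, assuming the `stabilityai/sd-x2-latent-upscaler` checkpoint; the checkpoint choice and call pattern here are assumptions rather than taken from this excerpt.

```py
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16, use_safetensors=True
)
upscaler.enable_model_cpu_offload()

# upscale the latent from the previous stage, staying in latent space so a
# super-resolution pipeline could consume the result next
image_2 = upscaler(prompt, image=image_1, output_type="latent").images[0]
```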