```py
image = Image.open(BytesIO(response.content)).convert("RGB")
image.thumbnail((768, 768))
image = pipeline(prompt, image, num_inference_steps=200, strength=0.75, guidance_scale=10.5).images[0]
image
```

And if you want to do inpainting, then AutoPipelineForInpainting loads the underlying StableDiffusionXLInpaintPipeline class in the same way:

```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

prompt = "A majestic tiger sitting on a bench"
image = pipeline(prompt, image=init_image, mask_image=mask_image, num_inference_steps=50, strength=0.80).images[0]
image
```

If you try to load an unsupported checkpoint, it'll throw an error:

```py
from diffusers import AutoPipelineForImage2Image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "openai/shap-e-img2img", torch_dtype=torch.float16, use_safetensors=True
)
"ValueError: AutoPipeline can't find a pipeline linked to ShapEImg2ImgPipeline for None"
```

## Use multiple pipelines

For some workflows, or if you're loading many pipelines, it is more memory-efficient to reuse the same components from a checkpoint instead of reloading them, which would unnecessarily consume additional memory. For example, if you've loaded a checkpoint for text-to-image and want to reuse it for image-to-image, use the from_pipe() method to share the components instead of loading them into memory a second time:
```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
print(type(pipeline_text2img))
"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'>"
```

Then from_pipe() maps the original "stable-diffusion" pipeline class to StableDiffusionImg2ImgPipeline:

```py
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)
print(type(pipeline_img2img))
"<class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_img2img.StableDiffusionImg2ImgPipeline'>"
```

If you passed an optional argument, like disabling the safety checker, to the original pipeline, this argument is also passed on to the new pipeline:
```py
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image
import torch

pipeline_text2img = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
    requires_safety_checker=False,
).to("cuda")

pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2img)
print(pipeline_img2img.config.requires_safety_checker)
"False"
```

You can overwrite any of the arguments, and even the configuration, from the original pipeline if you want to change the behavior of the new pipeline. For example, to turn the safety checker back on and add the strength argument:

```py
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(
    pipeline_text2img, requires_safety_checker=True, strength=0.3
)
print(pipeline_img2img.config.requires_safety_checker)
"True"
```
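Under the hood, from_pipe() saves memory by handing the original pipeline's component objects to the new pipeline instead of re-instantiating them; only config values are copied, and explicit keyword arguments win over the originals. A minimal plain-Python sketch of that sharing behavior (the `Tiny*` classes here are hypothetical stand-ins, not diffusers classes):

```python
class TinyText2Img:
    """Hypothetical stand-in for a text-to-image pipeline."""
    def __init__(self, unet, vae, requires_safety_checker=True):
        self.unet = unet
        self.vae = vae
        self.requires_safety_checker = requires_safety_checker


class TinyImg2Img:
    """Hypothetical stand-in for an image-to-image pipeline."""
    def __init__(self, unet, vae, requires_safety_checker=True):
        self.unet = unet
        self.vae = vae
        self.requires_safety_checker = requires_safety_checker

    @classmethod
    def from_pipe(cls, other, **overrides):
        # Reuse the *same* component objects — no second copy in memory.
        # Config values are carried over, and explicit overrides win.
        kwargs = {
            "unet": other.unet,
            "vae": other.vae,
            "requires_safety_checker": other.requires_safety_checker,
        }
        kwargs.update(overrides)
        return cls(**kwargs)


text2img = TinyText2Img(unet=object(), vae=object(), requires_safety_checker=False)
img2img = TinyImg2Img.from_pipe(text2img)

print(img2img.unet is text2img.unet)            # True: the component is shared
print(img2img.requires_safety_checker)          # False: config carried over

img2img_checked = TinyImg2Img.from_pipe(text2img, requires_safety_checker=True)
print(img2img_checked.requires_safety_checker)  # True: the override wins
```

The identity check (`is`) is the point: both pipelines hold references to one set of weights, which is why reusing components with from_pipe() costs almost no extra memory.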
## Image-to-image

Image-to-image is similar to text-to-image, but in addition to a prompt, you can also pass an initial image as a starting point for the diffusion process. The initial image is encoded to latent space and noise is added to it. Then the latent diffusion model takes a prompt and the noisy latent image, predicts the added noise, and removes the predicted noise from the initial latent image to get the new latent image. Lastly, a decoder decodes the new latent image back into an image.
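The `strength` parameter controls how much noise is added to the initial image, and therefore how many of the scheduled denoising steps actually run — in diffusers image-to-image pipelines this works out to roughly `num_inference_steps * strength`. A small sketch of that relationship (plain Python, independent of diffusers, shown only to build intuition):

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Approximate number of denoising steps an img2img pipeline actually runs.

    With strength=1.0 the initial image is fully noised and every scheduled
    step runs (essentially text-to-image); with lower strength the schedule
    is entered part-way through, so more of the initial image is preserved.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)


print(effective_steps(50, 1.0))  # 50: full schedule, input image mostly ignored
print(effective_steps(50, 0.5))  # 25: half the schedule, output stays closer to the input
print(effective_steps(50, 0.0))  # 0: input returned essentially unchanged
```

This is why a low `strength` with few inference steps can produce undercooked results: the pipeline may only run a handful of denoising steps in total.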
To get started, load a checkpoint into the AutoPipelineForImage2Image class, which automatically detects the appropriate pipeline class from the checkpoint:

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image, make_image_grid

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "kandinsky-community/kandinsky-2-2-decoder", torch_dtype=torch.float16, use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()
```

You'll notice throughout the guide we use enable_model_cpu_offload() and enable_xformers_memory_efficient_attention() to save memory and increase inference speed. If you're using PyTorch 2.0, you don't need to call enable_xformers_memory_efficient_attention() because PyTorch 2.0 already uses native scaled-dot product attention.

Next, pass a prompt and the initial image to the pipeline to generate an image:

```py
# assumes `prompt` and `init_image` were prepared beforehand, e.g. with load_image()
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

(initial image | generated image)

## Popular models

The most popular image-to-image models are Stable Diffusion v1.5, Stable Diffusion XL (SDXL), and Kandinsky 2.2. The results from the Stable Diffusion and Kandinsky models vary due to their differences in architecture and training; you can generally expect SDXL to produce higher-quality images than Stable Diffusion v1.5.
### Stable Diffusion v1.5

```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

(initial image | generated image)

### Stable Diffusion XL (SDXL)

SDXL is a more powerful version of the Stable Diffusion model. It uses a larger base model, and an additional refiner model to increase the quality of the base model's output. Read the SDXL guide for a more detailed walkthrough of how to use this model.
```py
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import make_image_grid, load_image

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipeline.enable_model_cpu_offload()
# remove the following line if xFormers is not installed or you have PyTorch 2.0 or higher installed
pipeline.enable_xformers_memory_efficient_attention()

# prepare image
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-sdxl-init.png"
init_image = load_image(url)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"

# pass prompt and image to pipeline
image = pipeline(prompt, image=init_image, strength=0.5).images[0]
make_image_grid([init_image, image], rows=1, cols=2)
```

(initial image | generated image)

### Kandinsky 2.2

The Kandinsky model is different from the Stable Diffusion models because it uses an image prior model to create image embeddings. The embeddings help create a better alignment between text and images, allowing the latent diffusion model to generate better images.
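That two-stage design — a prior that maps the prompt to an image embedding, and a decoder that denoises conditioned on that embedding plus the initial image — can be sketched with plain-Python stand-ins to show the data flow. The functions below are hypothetical illustrations, not diffusers APIs:

```python
def prior(prompt: str, dim: int = 4) -> list[float]:
    """Hypothetical stand-in for Kandinsky's image prior: text -> image embedding."""
    # A real prior is a learned diffusion model; here we just derive a
    # deterministic-shape pseudo-embedding from the prompt for illustration.
    return [float((hash(prompt) >> (8 * i)) & 0xFF) / 255.0 for i in range(dim)]


def decoder(image_embeds: list[float], init_image: list[float], strength: float = 0.5) -> list[float]:
    """Hypothetical stand-in for the decoder: moves the image toward the embedding."""
    # Blend the initial image toward the embedding; higher strength moves the
    # output further from the input, analogous to img2img strength.
    return [(1 - strength) * px + strength * e for px, e in zip(init_image, image_embeds)]


embeds = prior("A fantasy landscape, Cinematic lighting")
init_image = [0.2, 0.4, 0.6, 0.8]  # stand-in for encoded image latents
out = decoder(embeds, init_image, strength=0.5)
print(len(out) == len(init_image))  # True: the decoder preserves the latent shape
```

The key point the sketch captures is that the decoder never sees the raw text: the prompt only influences the output through the image embedding, which is what gives Kandinsky its text-image alignment.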