Parameters

batch_size (int, optional, defaults to 1) —
The number of images to generate.
generator (torch.Generator, optional) —
One or a list of torch generator(s) to make generation deterministic.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple

~pipelines.utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
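To make the return_dict contract concrete, here is a toy sketch in plain Python. ToyPipelineOutput and toy_pipeline are hypothetical stand-ins, not the real diffusers classes; they only mimic the two return styles described above.

```python
from dataclasses import dataclass

@dataclass
class ToyPipelineOutput:
    # Mimics ImagePipelineOutput: a single field holding the generated images.
    images: list

def toy_pipeline(batch_size=1, return_dict=True):
    # Stand-in for a pipeline __call__; strings stand in for generated images.
    images = [f"image_{i}" for i in range(batch_size)]
    if return_dict:
        return ToyPipelineOutput(images=images)
    # With return_dict=False, the first tuple element is the list of images.
    return (images,)

out = toy_pipeline(batch_size=2)
images_from_dict = out.images
images_from_tuple = toy_pipeline(batch_size=2, return_dict=False)[0]
assert images_from_dict == images_from_tuple
```

Either access pattern yields the same list of images; the dataclass form is simply more self-documenting.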
JAX/Flax

🤗 Diffusers supports Flax for super fast inference on Google TPUs, such as those available in Colab, Kaggle or Google Cloud Platform. This guide shows you how to run inference with Stable Diffusion using JAX/Flax.

Before you begin, make sure you have the necessary libraries installed:

# uncomment the lines below to install the libraries in Colab
#!pip install -q jax==0.3.25 jaxlib==0.3.25 flax transformers ftfy
#!pip install -q diffusers

You should also make sure you're using a TPU backend. While JAX does not run exclusively on TPUs, you'll get the best performance on a TPU because each server has 8 TPU accelerators working in parallel.

If you are running this guide in Colab, select Runtime in the menu above, select the option Change runtime type, and then choose TPU under the Hardware accelerator setting. Then import JAX and quickly confirm you're running on a TPU:
import jax.tools.colab_tpu |
jax.tools.colab_tpu.setup_tpu() |
num_devices = jax.device_count() |
device_type = jax.devices()[0].device_kind |
print(f"Found {num_devices} JAX devices of type {device_type}.") |
assert "TPU" in device_type, (
    "Available device is not a TPU, please select TPU from "
    "Runtime > Change runtime type > Hardware accelerator"
)
# Found 8 JAX devices of type Cloud TPU.

Great, now you can import the rest of the dependencies you'll need:

import jax.numpy as jnp
from jax import pmap |
from flax.jax_utils import replicate |
from flax.training.common_utils import shard |
from diffusers import FlaxStableDiffusionPipeline

Load a model

Flax is a functional framework, so models are stateless and parameters are stored outside of them. Loading a pretrained Flax pipeline returns both the pipeline and the model weights (or parameters). In this guide, you'll use bfloat16, a more efficient half-precision type supported by TPUs:
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained( |
"CompVis/stable-diffusion-v1-4", |
revision="bf16", |
    dtype=jnp.bfloat16,
)

Inference

TPUs usually have 8 devices working in parallel, so let's use the same prompt for each device. This means you can perform inference on 8 devices at once, with each device generating one image. As a result, you'll get 8 images in the same amount of time it takes for one chip to generate a single image!

Start from a text prompt (the one below is only an example), replicate it across devices, and tokenize it with the pipeline's prepare_inputs method:

prompt = "a photograph of an astronaut riding a horse"  # example prompt
prompt = [prompt] * jax.device_count()
prompt_ids = pipeline.prepare_inputs(prompt) |
prompt_ids.shape |
# (8, 77)

Model parameters and inputs have to be replicated across the 8 parallel devices. The parameters dictionary is replicated with flax.jax_utils.replicate, which traverses the dictionary and changes the shape of the weights so they are repeated 8 times. Arrays are replicated using shard.

# parameters
p_params = replicate(params) |
# arrays |
prompt_ids = shard(prompt_ids) |
prompt_ids.shape |
# (8, 1, 77)

This shape means each one of the 8 devices receives as input a jnp array with shape (1, 77), where 1 is the batch size per device. On TPUs with sufficient memory, you could use a batch size larger than 1 to generate multiple images per chip at once.

Next, create a random number generator with a small helper function that wraps a seed in a JAX PRNG key:

def create_key(seed=0):
    return jax.random.PRNGKey(seed)

The rng returned by the helper function is split 8 times so each device receives a different generator and generates a different image.

rng = create_key(0)
rng = jax.random.split(rng, jax.device_count())

To take advantage of JAX's optimized speed on a TPU, pass jit=True to the pipeline to compile the JAX code into an efficient representation and to ensure the model runs in parallel across the 8 devices. Make sure all your inputs have the same shape in subsequent calls, otherwise JAX will have to recompile the code, which is slower.
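Before running the pipeline, it may help to see what key splitting buys you. The sketch below is a rough NumPy analogy, not JAX: numpy.random.SeedSequence.spawn, like jax.random.split, deterministically derives independent child streams from one root seed, so each device samples different noise while the whole run stays reproducible.

```python
import numpy as np

# One root seed yields 8 independent child seeds, one per pretend device.
root = np.random.SeedSequence(0)
children = root.spawn(8)
rngs = [np.random.default_rng(c) for c in children]

# Each per-device stream produces a different value...
samples = [rng.standard_normal() for rng in rngs]
assert len(set(samples)) == 8

# ...but re-deriving from the same root seed reproduces them exactly.
rngs2 = [np.random.default_rng(c) for c in np.random.SeedSequence(0).spawn(8)]
assert samples == [rng.standard_normal() for rng in rngs2]
```

With the per-device keys in hand, you can run the compiled pipeline.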
images = pipeline(prompt_ids, p_params, rng, jit=True)[0] |
# CPU times: user 56.2 s, sys: 42.5 s, total: 1min 38s |
# Wall time: 1min 29s

The returned array has shape (8, 1, 512, 512, 3), which should be reshaped to remove the second dimension and obtain 8 images of 512 × 512 × 3. Then you can use the numpy_to_pil() function to convert the arrays into images.

from diffusers.utils import make_image_grid
images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) |
images = pipeline.numpy_to_pil(images) |
make_image_grid(images, rows=2, cols=4)

Using different prompts

You don't necessarily have to use the same prompt on all devices. For example, to generate 8 different prompts:

prompts = [
"Labrador in the style of Hokusai", |
"Painting of a squirrel skating in New York", |
"HAL-9000 in the style of Van Gogh", |
"Times Square under water, with fish and a dolphin swimming around", |
"Ancient Roman fresco showing a man working on his laptop", |
"Close-up photograph of young black woman against urban background, high quality, bokeh", |
"Armchair in the shape of an avocado", |
"Clown astronaut in space, with Earth in the background", |
] |
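Whether you use one prompt per device or eight different ones, the pipeline output keeps a leading device axis, so its shape is (8, 1, 512, 512, 3). The flattening reshape used in both sections can be sanity-checked on a placeholder NumPy array (zeros here stand in for real pixel data):

```python
import numpy as np

# Placeholder for the pipeline output: (devices, batch_per_device, H, W, C).
out = np.zeros((8, 1, 512, 512, 3), dtype=np.float32)

# Merge the device and per-device batch axes into one flat batch of 8 images.
flat = out.reshape((out.shape[0] * out.shape[1],) + out.shape[-3:])
assert flat.shape == (8, 512, 512, 3)
```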
prompt_ids = pipeline.prepare_inputs(prompts) |
prompt_ids = shard(prompt_ids) |
images = pipeline(prompt_ids, p_params, rng, jit=True).images |
images = images.reshape((images.shape[0] * images.shape[1],) + images.shape[-3:]) |
images = pipeline.numpy_to_pil(images) |
make_image_grid(images, rows=2, cols=4)

How does parallelization work?

The Flax pipeline in 🤗 Diffusers automatically compiles the model and runs it in parallel on all available devices. Let's take a closer look at how that process works. JAX parallelization can be done in multiple ways. The easiest one revolves around using the jax.pmap function, which maps a function over the leading axis of its inputs and runs it on every device in parallel:
# p_generate is the parallelized generation function; in the full guide it is
# created by wrapping the pipeline's internal _generate method with pmap
p_generate = pmap(pipeline._generate)
images = p_generate(prompt_ids, p_params, rng)
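As a rough mental model of what pmap does, the plain-Python sketch below (fake_pmap is a made-up name, not a JAX API) splits the input along its leading "device" axis, runs the same function on every shard in parallel threads, and collects the per-device results, which is conceptually what happens across the 8 TPU cores, minus compilation and real device placement:

```python
from concurrent.futures import ThreadPoolExecutor

def fake_pmap(fn):
    # Map fn over the leading "device" axis of the input, one shard per worker.
    def wrapped(sharded_inputs):
        with ThreadPoolExecutor(max_workers=len(sharded_inputs)) as pool:
            return list(pool.map(fn, sharded_inputs))
    return wrapped

# One shard of fake "prompt ids" per pretend device.
shards = [[i, i + 1] for i in range(8)]
p_double = fake_pmap(lambda shard: [x * 2 for x in shard])
results = p_double(shards)
assert len(results) == 8
assert results[0] == [0, 2]
```

The real pmap additionally traces and compiles the function once with XLA, so every subsequent call with same-shaped inputs runs at full device speed.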