If `return_dict` is `True`, a `FlaxStableDiffusionPipelineOutput` is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of `bool`s indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.
The call function to the pipeline for generation.

Examples:

```python
>>> import jax
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> from diffusers.utils import load_image, make_image_grid
>>> from PIL import Image
>>> from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel

>>> def create_key(seed=0):
...     return jax.random.PRNGKey(seed)

>>> rng = create_key(0)

>>> # get canny image
>>> canny_image = load_image(
...     "https://huggingface.co/datasets/YiYiXu/test-doc-assets/resolve/main/blog_post_cell_10_output_0.jpeg"
... )

>>> prompts = "best quality, extremely detailed"
>>> negative_prompts = "monochrome, lowres, bad anatomy, worst quality, low quality"

>>> # load control net and stable diffusion v1-5
>>> controlnet, controlnet_params = FlaxControlNetModel.from_pretrained(
...     "lllyasviel/sd-controlnet-canny", from_pt=True, dtype=jnp.float32
... )
>>> pipe, params = FlaxStableDiffusionControlNetPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", controlnet=controlnet, revision="flax", dtype=jnp.float32
... )
>>> params["controlnet"] = controlnet_params

>>> num_samples = jax.device_count()
>>> rng = jax.random.split(rng, jax.device_count())

>>> prompt_ids = pipe.prepare_text_inputs([prompts] * num_samples)
>>> negative_prompt_ids = pipe.prepare_text_inputs([negative_prompts] * num_samples)
>>> processed_image = pipe.prepare_image_inputs([canny_image] * num_samples)

>>> p_params = replicate(params)
>>> prompt_ids = shard(prompt_ids)
>>> negative_prompt_ids = shard(negative_prompt_ids)
>>> processed_image = shard(processed_image)

>>> output = pipe(
...     prompt_ids=prompt_ids,
...     image=processed_image,
...     params=p_params,
...     prng_seed=rng,
...     num_inference_steps=50,
...     neg_prompt_ids=negative_prompt_ids,
...     jit=True,
... ).images

>>> output_images = pipe.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
>>> output_images = make_image_grid(output_images, num_samples // 4, 4)
>>> output_images.save("generated_image.png")
```

## FlaxStableDiffusionControlNetPipelineOutput

class `diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput`

( images: np.ndarray, nsfw_content_detected: List[bool] )

Parameters:

- **images** (`np.ndarray`) — Denoised images of array shape `(batch_size, height, width, num_channels)`.
- **nsfw_content_detected** (`List[bool]`) — List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or `None` if safety checking could not be performed.

Output class for Flax-based Stable Diffusion pipelines.

**replace**

( **updates )

Returns a new object replacing the specified fields with new values.
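The `replace` method behaves like `dataclasses.replace` on a frozen dataclass: it returns a copy of the output object with the given fields swapped out, leaving the original untouched. A minimal sketch using a plain frozen dataclass (the `Output` class here is an illustrative stand-in, not the actual diffusers class):

```python
from dataclasses import dataclass, replace
from typing import List, Optional

import numpy as np


@dataclass(frozen=True)
class Output:
    # Stand-in for FlaxStableDiffusionPipelineOutput's two fields.
    images: np.ndarray
    nsfw_content_detected: Optional[List[bool]]


out = Output(images=np.zeros((1, 64, 64, 3)), nsfw_content_detected=None)
# `replace` returns a new object; the original is not mutated.
updated = replace(out, nsfw_content_detected=[False])
```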
## Kandinsky 2.2

This script is experimental, and it's easy to overfit and run into issues like catastrophic forgetting. Try exploring different hyperparameters to get the best results on your dataset.

Kandinsky 2.2 is a multilingual text-to-image model capable of producing more photorealistic images. The model includes a...
```shell
cd diffusers
pip install .
```

Then navigate to the example folder containing the training script and install the required dependencies for the script you're using:

```shell
cd examples/kandinsky2_2/text_to_image
pip install -r requirements.txt
```

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It'll automatically configure your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more.

Initialize an 🤗 Accelerate environment: ...
```python
write_basic_config()
```

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training scripts that are important for understanding how to modify it...
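For reference, the `imagefolder` layout understood by the 🤗 Datasets library pairs each image with its caption through a `train/metadata.jsonl` file. A minimal sketch of that layout (the file name and caption here are illustrative):

```python
import json
import os
import tempfile

# Build a tiny `imagefolder`-style dataset directory:
#   dataset_root/
#     train/
#       0001.png          (image files live alongside the metadata)
#       metadata.jsonl    (maps each file_name to its caption)
root = tempfile.mkdtemp()
train_dir = os.path.join(root, "train")
os.makedirs(train_dir)

rows = [{"file_name": "0001.png", "text": "a photo of a cat"}]
with open(os.path.join(train_dir, "metadata.jsonl"), "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```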
```shell
--mixed_precision="fp16"
```

Most of the parameters are identical to the parameters in the Text-to-image training guide, so let's get straight to a walkthrough of the Kandinsky training scripts!

### Min-SNR weighting

The Min-SNR weighting strategy can help with training by rebalancing the loss to achieve faster convergence....
```shell
--snr_gamma=5.0
```

### Training script

The training script is also similar to the Text-to-image training guide, but it's been modified to support training the prior and decoder models. This guide focuses on the code that is unique to the Kandinsky 2.2 training scripts. The snippets below are from the prior model script (the guide also covers the decoder model).

The main() function contain...
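The idea behind Min-SNR weighting can be sketched in a few lines of NumPy: clamp each timestep's signal-to-noise ratio at `snr_gamma` and divide by the SNR, so low-noise (high-SNR) timesteps no longer dominate the loss. This is a simplified sketch for epsilon-prediction; the actual script derives the per-timestep SNR from the scheduler's `alphas_cumprod`:

```python
import numpy as np


def min_snr_loss_weights(snr: np.ndarray, snr_gamma: float = 5.0) -> np.ndarray:
    # Min-SNR weighting: min(SNR, gamma) / SNR.
    # High-SNR (low-noise) timesteps get weights < 1, while timesteps
    # with SNR <= gamma keep their full weight of 1.
    return np.minimum(snr, snr_gamma) / snr


snr = np.array([0.5, 5.0, 50.0])
weights = min_snr_loss_weights(snr)  # -> [1.0, 1.0, 0.1]
```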
```python
image_processor = CLIPImageProcessor.from_pretrained(
    args.pretrained_prior_model_name_or_path, subfolder="image_processor"
)
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="tokenizer")

with ContextManagers(deepspeed_zero_init_disabled_context_manager()):
    image_encoder = CLIPVisionModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="image_encoder", torch_dtype=weight_dtype
    ).eval()
    text_encoder = CLIPTextModelWithProjection.from_pretrained(
        args.pretrained_prior_model_name_or_path, subfolder="text_encoder", torch_dtype=weight_dtype
    ).eval()
```

Kandinsky uses a PriorTransformer to generate the image embeddings, so you'll want to set up the optimizer to learn the prior model's parameters.

```python
prior = PriorTransformer.from_pretrained(args.pretrained_prior_model_name_or_path, subfolder="prior")
prior.train()

optimizer = optimizer_cls(
    prior.parameters(),
    lr=args.learning_rate,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
    eps=args.adam_epsilon,
)
```

Next, the input captions are tokenized, and images are preprocessed by the CLIPImageProcessor:

```python
def preprocess_train(examples):
    images = [image.convert("RGB") for image in examples[image_column]]
    examples["clip_pixel_values"] = image_processor(images, return_tensors="pt").pixel_values
    examples["text_input_ids"], examples["text_mask"] = tokenize_captions(examples)
    return examples
```

Finally, the training loop converts the input images into latents, adds noise to the image embeddings, and makes a prediction:

```python
model_pred = prior(
    noisy_latents,
    timestep=timesteps,
    proj_embedding=prompt_embeds,
    encoder_hidden_states=text_encoder_hidden_states,
    attention_mask=text_mask,
).predicted_image_embedding
```
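"Adds noise to the image embeddings" refers to the standard DDPM forward process, which the script delegates to the scheduler's `add_noise`. A simplified NumPy sketch of that step (the helper name and shapes here are illustrative):

```python
import numpy as np


def add_noise(image_embeds, noise, alphas_cumprod, timesteps):
    # DDPM forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    a_bar = alphas_cumprod[timesteps][:, None]  # one alpha-bar per sample
    return np.sqrt(a_bar) * image_embeds + np.sqrt(1.0 - a_bar) * noise


# With alpha_bar = 1.0 no noise is mixed in; as alpha_bar approaches 0.0
# the result approaches pure noise.
alphas_cumprod = np.array([1.0, 0.25])
embeds = np.ones((2, 4))
noise = np.zeros((2, 4))
noisy = add_noise(embeds, noise, alphas_cumprod, np.array([0, 1]))
```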