>>> import jax
>>> import jax.numpy as jnp
>>> import numpy as np
>>> import requests
>>> from io import BytesIO
>>> from PIL import Image
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline
>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> init_img = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_img = init_img.resize((768, 512))
>>> prompts = "A fantasy landscape, trending on artstation"
>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
... "CompVis/stable-diffusion-v1-4",
... revision="flax",
... dtype=jnp.bfloat16,
... )
>>> num_samples = jax.device_count()
>>> rng = jax.random.PRNGKey(0)
>>> rng = jax.random.split(rng, jax.device_count())
>>> prompt_ids, processed_image = pipeline.prepare_inputs(
... prompt=[prompts] * num_samples, image=[init_img] * num_samples
... )
>>> p_params = replicate(params)
>>> prompt_ids = shard(prompt_ids)
>>> processed_image = shard(processed_image)
>>> output = pipeline(
... prompt_ids=prompt_ids,
... image=processed_image,
... params=p_params,
... prng_seed=rng,
... strength=0.75,
... num_inference_steps=50,
... jit=True,
... height=512,
... width=768,
... ).images
>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))

FlaxStableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput

< source >

( images: ndarray nsfw_content_detected: List )

Parameters

images (np.ndarray) —
Denoised images of array shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for Flax-based Stable Diffusion pipelines.

replace

< source >

( **updates )

Returns a new object replacing the specified fields with new values.
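The final reshape in the example above collapses the per-device leading axes of the sharded output back into a single flat batch before converting to PIL. A minimal NumPy sketch of that shape manipulation, using tiny stand-in dimensions instead of a real 512x768 generation:

```python
import numpy as np

# Stand-in for the pipeline output: 2 devices, 1 image per device, 8x8 RGB.
num_devices, per_device = 2, 1
output = np.zeros((num_devices, per_device, 8, 8, 3))

num_samples = num_devices * per_device
# Collapse the leading device/batch axes into one flat batch while keeping
# the trailing (height, width, channels) axes, as in the example above.
flat = output.reshape((num_samples,) + output.shape[-3:])
print(flat.shape)  # (2, 8, 8, 3)
```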
Text-to-Image Generation
StableDiffusionPipeline
The Stable Diffusion model was created by researchers and engineers from CompVis, Stability AI, Runway, and LAION. The StableDiffusionPipeline is capable of generating photo-realistic images given any text input, using Stable Diffusion.
The original codebase can be found here:
Stable Diffusion V1: CompVis/stable-diffusion
Stable Diffusion v2: Stability-AI/stablediffusion
Available checkpoints are:
stable-diffusion-v1-4 (512x512 resolution): CompVis/stable-diffusion-v1-4
stable-diffusion-v1-5 (512x512 resolution): runwayml/stable-diffusion-v1-5
stable-diffusion-2-base (512x512 resolution): stabilityai/stable-diffusion-2-base
stable-diffusion-2 (768x768 resolution): stabilityai/stable-diffusion-2
stable-diffusion-2-1-base (512x512 resolution): stabilityai/stable-diffusion-2-1-base
stable-diffusion-2-1 (768x768 resolution): stabilityai/stable-diffusion-2-1
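The native resolution listed for each checkpoint matters because sampling at a different size tends to degrade quality. A small lookup sketch for picking height/width defaults; the helper name is hypothetical, while the checkpoint IDs and sizes come from the list above:

```python
# Checkpoint IDs and native generation resolutions from the list above.
NATIVE_RESOLUTIONS = {
    "CompVis/stable-diffusion-v1-4": (512, 512),
    "runwayml/stable-diffusion-v1-5": (512, 512),
    "stabilityai/stable-diffusion-2-base": (512, 512),
    "stabilityai/stable-diffusion-2": (768, 768),
    "stabilityai/stable-diffusion-2-1-base": (512, 512),
    "stabilityai/stable-diffusion-2-1": (768, 768),
}


def native_resolution(checkpoint_id: str) -> tuple:
    """Return the (height, width) a checkpoint was trained at, defaulting to 512x512."""
    return NATIVE_RESOLUTIONS.get(checkpoint_id, (512, 512))


print(native_resolution("stabilityai/stable-diffusion-2-1"))  # (768, 768)
```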
class diffusers.StableDiffusionPipeline

< source >

( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True )
Parameters
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of
CLIP, specifically
the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class
CLIPTokenizer.
unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful.
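Because the pipeline output pairs each image with a per-image safety flag (as in the nsfw_content_detected field documented above), a caller can drop flagged results before further use. A minimal sketch in plain Python; the list contents here are stand-ins for output.images and output.nsfw_content_detected:

```python
# Stand-ins for output.images (PIL images in practice) and the
# corresponding output.nsfw_content_detected boolean flags.
images = ["img0", "img1", "img2"]
nsfw_flags = [False, True, False]

# Keep only the images the safety checker did not flag.
safe_images = [img for img, flagged in zip(images, nsfw_flags) if not flagged]
print(safe_images)  # ['img0', 'img2']
```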