The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
The saved textual inversion file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional):
    Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
force_download (bool, optional, defaults to False):
    Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False):
    Whether or not to resume downloading the model weights and configuration files. If set to False, any incompletely downloaded files are deleted.
proxies (Dict[str, str], optional):
    A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
local_files_only (bool, optional, defaults to False):
    Whether to only load local model weights and configuration files or not. If set to True, the model won't be downloaded from the Hub.
token (str or bool, optional):
    The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
revision (str, optional, defaults to "main"):
    The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
subfolder (str, optional, defaults to ""):
    The subfolder location of a model file within a larger model repository on the Hub or locally.
mirror (str, optional):
    Mirror source to resolve accessibility issues if you're downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.

Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).

Example:

To load a Textual Inversion embedding vector in 🤗 Diffusers format:

from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("sd-concepts-library/cat-toy")

prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")

To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from civitAI) and then load the vector locally:

from diffusers import StableDiffusionPipeline
import torch

model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")

prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")

encode_prompt
< source >
( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )

Parameters

prompt (str or List[str], option...):
    prompt to be encoded
device (torch.device):
    torch device
num_images_per_prompt (int):
    number of images that should be generated per prompt
do_classifier_free_guidance (bool):
    whether to use classifier free guidance or not
negative_prompt (str or List[str], optional):
    The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional):
    Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
    Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional):
    A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional):
    Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source >
( images: Union nsfw_content_detected: Optional )

Parameters ...
    List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]):
    List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.

FlaxStableDiffusionControlNetPipeline

class diffusers.FlaxStableDiffusionControlNetPipeline
< source >
( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel contr...

Parameters
    Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (FlaxCLIPTextModel):
    Frozen text-encoder (clip-vit-large-patch14).
tokenizer (CLIPTokenizer):
    A CLIPTokenizer to tokenize text.
unet (FlaxUNet2DConditionModel):
    A FlaxUNet2DConditionModel to denoise the encoded image latents.
controlnet (FlaxControlNetModel):
    Provides additional conditioning to the unet during the denoising process.
scheduler (SchedulerMixin):
    A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or FlaxDPMSolverMultistepScheduler.
safety_checker (FlaxStableDiffusionSafetyChecker):
    Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model's potential harms.
feature_extractor (CLIPImageProcessor):
    A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Flax-based pipeline for text-to-image generation using Stable Diffusion with ControlNet Guidance.

This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__
< source >
( prompt_ids: Array image: Array params: Union prng_seed: Array num_inference_steps: int = 50 guidance_scale: Union = 7.5 latents: Array = None neg_prompt_ids: Array = None controlnet_conditioning_scale: Uni...

Parameters
    The prompt or prompts to guide the image generation.
image (jnp.ndarray):
    Array representing the ControlNet input condition to provide guidance to the unet for generation.
params (Dict or FrozenDict):
    Dictionary containing the model parameters/weights.
prng_seed (jax.Array):
    Array containing the random number generator key.
num_inference_steps (int, optional, defaults to 50):
    The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5):
    A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
latents (jnp.ndarray, optional):
    Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents array is generated by sampling using the supplied random generator.
controlnet_conditioning_scale (float or jnp.ndarray, optional, defaults to 1.0):
    The outputs of the ControlNet are multiplied by controlnet_conditioning_scale before they are added to the residual in the original unet.
return_dict (bool, optional, defaults to True):
    Whether or not to return a FlaxStableDiffusionPipelineOutput instead of a plain tuple.
jit (bool, defaults to False):
    Whether to run pmap versions of the generation and safety scoring functions. This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release.
Returns

FlaxStableDiffusionPipelineOutput or tuple

If return_dict is True, FlaxStableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.
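As a toy illustration of how guidance_scale and controlnet_conditioning_scale enter the computation described above: the sketch below uses small NumPy arrays standing in for the real noise predictions and residuals, with hypothetical values; it is not the pipeline's actual implementation, only the arithmetic of the two scales.

```python
import numpy as np

# Hypothetical UNet noise predictions for one latent (toy values, not real model outputs).
noise_pred_uncond = np.array([0.1, 0.2, 0.3])
noise_pred_text = np.array([0.4, 0.1, 0.5])

# Classifier-free guidance (enabled when guidance_scale > 1): move the prediction
# away from the unconditional result, toward the text-conditioned one.
guidance_scale = 7.5
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

# ControlNet conditioning: each ControlNet residual is multiplied by
# controlnet_conditioning_scale before being added to the UNet residual.
controlnet_conditioning_scale = 0.5
unet_residual = np.array([1.0, 1.0, 1.0])
controlnet_residual = np.array([0.4, -0.4, 0.0])
combined_residual = unet_residual + controlnet_conditioning_scale * controlnet_residual

print(noise_pred)         # guided noise prediction
print(combined_residual)  # residual with scaled ControlNet contribution added
```

With guidance_scale = 1.0 the first expression reduces to the text-conditioned prediction alone, and with controlnet_conditioning_scale = 0.0 the ControlNet contribution vanishes, which is why these two arguments act as simple strength knobs.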