"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}

Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you'll see there is a separate folder for each of the components in the repository:

.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors

You can access each of the components of the pipeline as an attribute to view its configuration:

pipeline.tokenizer
CLIPTokenizer(
name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
vocab_size=49408,
model_max_length=77,
is_fast=False,
padding_side="right",
truncation_side="right",
special_tokens={
"bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
"eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
"unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
"pad_token": "<|endoftext|>",
},
clean_up_tokenization_spaces=True
)

Every pipeline expects a model_index.json file that tells the DiffusionPipeline:

- which pipeline class to load from _class_name
- which version of 🧨 Diffusers was used to create the model in _diffusers_version
- what components from which library are stored in the subfolders (name corresponds to the component and subfolder name)

{
"_class_name": "StableDiffusionPipeline",
"_diffusers_version": "0.6.0",
"feature_extractor": [
"transformers",
"CLIPImageProcessor"
],
"safety_checker": [
"stable_diffusion",
"StableDiffusionSafetyChecker"
],
"scheduler": [
"diffusers",
"PNDMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
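As a rough illustration of how a loader can interpret this file (this is a simplified sketch, not the actual Diffusers loading code), the idea is: parse model_index.json, treat keys starting with `_` as pipeline metadata, and map every remaining key to the (library, class) pair stored in its subfolder. The `resolve_components` helper name is invented for this example:

```python
import json

# Abbreviated model_index.json content, taken from the file shown above.
MODEL_INDEX = """
{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "scheduler": ["diffusers", "PNDMScheduler"],
  "text_encoder": ["transformers", "CLIPTextModel"],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "unet": ["diffusers", "UNet2DConditionModel"],
  "vae": ["diffusers", "AutoencoderKL"]
}
"""

def resolve_components(index_text: str) -> dict:
    """Return {component_name: (library, class_name)} for every subfolder entry.

    Keys starting with "_" are pipeline metadata, not components.
    """
    index = json.loads(index_text)
    return {
        name: tuple(spec)
        for name, spec in index.items()
        if not name.startswith("_")
    }

for name, (library, cls) in sorted(resolve_components(MODEL_INDEX).items()):
    print(f"{name}: load {cls} from {library}")
```

In the real library, each (library, class) pair is then imported and the class's weights are loaded from the matching subfolder.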
AudioLDM 2

AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, Au...
See the table below for details on the three checkpoints:

Checkpoint       | Task          | UNet Model Size | Total Model Size | Training Data / h
audioldm2        | Text-to-audio | 350M            | 1.1B             | 1150k
audioldm2-large  | Text-to-audio | 750M            | 1.5B             | 1150k
audioldm2-music  | Text-to-music | 350M            | 1.1B             | 665k

Constructing a prompt

Descriptive prompt inputs work best: use adje...
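To make the prompting advice concrete, here is a hedged sketch of how a descriptive prompt and a negative prompt might be passed to the pipeline. The prompt wording, checkpoint id (cvssp/audioldm2), and generation parameters are illustrative choices for this example; the generation call needs diffusers, torch, a GPU, and a model download, so it is wrapped in a function here rather than executed:

```python
# A descriptive prompt (adjectives, acoustic context) versus a terse one.
prompt = "The sound of a hammer striking a wooden surface, echoing in a large hall"
terse_prompt = "hammer"

# A negative prompt steers generation away from undesired qualities.
negative_prompt = "Low quality, distorted audio"

def generate_audio():
    # Illustrative only: not invoked here because it requires diffusers,
    # torch, a CUDA device, and downloading the cvssp/audioldm2 checkpoint.
    import torch
    from diffusers import AudioLDM2Pipeline

    pipe = AudioLDM2Pipeline.from_pretrained(
        "cvssp/audioldm2", torch_dtype=torch.float16
    ).to("cuda")
    result = pipe(
        prompt,
        negative_prompt=negative_prompt,
        num_inference_steps=200,
        audio_length_in_s=10.0,
    )
    return result.audios[0]
```

The key point is the contrast between `prompt` and `terse_prompt`: the descriptive version gives the model far more acoustic detail to condition on.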
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (ClapModel) — First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model CLAP, specifically the laion/clap-htsat-unfused variant. The text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to rank generated waveforms against the text prompt by computing similarity scores.
text_encoder_2 (T5EncoderModel) — Second frozen text-encoder. AudioLDM2 uses the encoder of T5, specifically the