"diffusers", |
"UNet2DConditionModel" |
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

Compare the components of the pipeline instance to the runwayml/stable-diffusion-v1-5 folder structure, and you'll see there is a separate folder for each of the components in the repository:
.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors

You can access each of the components of the pipeline as an attribute to view its configuration:

pipeline.tokenizer
CLIPTokenizer(
    name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
    vocab_size=49408,
    model_max_length=77,
    is_fast=False,
    padding_side="right",
    truncation_side="right",
    special_tokens={
        "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "pad_token": "<|endoftext|>",
    },
    clean_up_tokenization_spaces=True
)
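The tokenizer shown above comes from a pipeline loaded with from_pretrained. A minimal sketch of loading the same checkpoint and inspecting a few other components (the specific config fields printed here are illustrative picks, not an exhaustive list):

import torch
from diffusers import DiffusionPipeline

# Loading the repository turns every subfolder above into an attribute on the pipeline.
pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Each component is an ordinary model or processor object you can inspect directly.
print(pipeline.scheduler)                   # PNDMScheduler and its config
print(pipeline.unet.config.sample_size)     # latent resolution the UNet was trained on
print(pipeline.vae.config.latent_channels)  # number of latent channels the VAE produces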
Every pipeline expects a model_index.json file that tells the DiffusionPipeline:

- which pipeline class to load from _class_name
- which version of 🧨 Diffusers was used to create the model in _diffusers_version
- what components from which library are stored in the subfolders (name corresponds to the component and subfolder name, library corresponds to the name of the library to load the class from, and class corresponds to the class name)

{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
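To read this manifest without downloading any weights, one option is to fetch just the file with huggingface_hub; a small sketch (the loop and output formatting are only for illustration):

import json
from huggingface_hub import hf_hub_download

# Fetch only model_index.json from the repository; no model weights are downloaded.
index_path = hf_hub_download("runwayml/stable-diffusion-v1-5", filename="model_index.json")
with open(index_path) as f:
    index = json.load(f)

print(index["_class_name"], index["_diffusers_version"])
for name, value in index.items():
    if not isinstance(value, list):
        continue  # skip metadata fields such as _class_name and _diffusers_version
    library, cls = value
    print(f"{name}: {cls} (from {library})")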
AudioLDM 2

AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music. Inspired by Stable Diffusion, AudioLDM 2...
See the table below for details on the three checkpoints:

| Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h |
|---|---|---|---|---|
| audioldm2 | Text-to-audio | 350M | 1.1B | 1150k |
| audioldm2-large | Text-to-audio | 750M | 1.5B | 1150k |
| audioldm2-music | Text-to-music | 350M | 1.1B | 665k |

Constructing a prompt

Descriptive prompt inputs work best: use adjectives...
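As a rough sketch of what a text-to-audio call can look like with these checkpoints (the cvssp/... repository ids, the example prompt, and the 16 kHz output rate are assumptions for illustration, not details stated above):

import torch
from scipy.io import wavfile
from diffusers import AudioLDM2Pipeline

# Base text-to-audio checkpoint; swap in "cvssp/audioldm2-large" or
# "cvssp/audioldm2-music" for the other rows of the table above.
pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "The sound of a hammer hitting a wooden surface"
audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0]

# Save the generated waveform; AudioLDM 2 produces 16 kHz audio.
wavfile.write("output.wav", rate=16000, data=audio)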
vae (AutoencoderKL) -
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.

text_encoder (ClapModel) -
First frozen text-encoder. AudioLDM2 uses the joint audio-text embedding model CLAP, specifically the laion/clap-htsat-unfused variant. The text branch is used to encode the text prompt to a prompt embedding. The full audio-text model is used to rank generated waveforms against the text prompt by computing similarity scores.

text_encoder_2 (T5EncoderModel) -
Second frozen text-encoder. AudioLDM2 uses the encoder of T5, specifically the google/flan-t5-large variant.
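To confirm that a loaded pipeline is wired up with these components, you can inspect their types directly. A small sketch (the checkpoint id is an assumption; the printed class names follow the descriptions above):

from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2")

print(type(pipe.vae).__name__)             # AutoencoderKL
print(type(pipe.text_encoder).__name__)    # ClapModel (CLAP text branch)
print(type(pipe.text_encoder_2).__name__)  # T5EncoderModel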