
This model was released on 2022-11-12 and added to Hugging Face Transformers on 2023-02-16.

PyTorch

CLAP

CLAP (Contrastive Language-Audio Pretraining) is a multimodal model that combines audio data with natural language descriptions through contrastive learning.

It incorporates feature fusion and keyword-to-caption augmentation to process variable-length audio inputs and to improve performance. CLAP doesn't require task-specific training data and can learn meaningful audio representations through natural language.
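Because audio and text share one embedding space, zero-shot audio classification reduces to scoring a clip against candidate captions. The sketch below illustrates this with a random 1-second clip and two hypothetical captions (both are placeholders, not from the original doc); `ClapModel` returns audio-text similarity logits as `logits_per_audio`.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("laion/clap-htsat-unfused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")

# hypothetical candidate captions for zero-shot classification
labels = ["the sound of a cat", "the sound of a dog"]

# a random 1-second clip stands in for real audio (assumption: 48 kHz mono)
sampling_rate = 48_000
audio = np.random.randn(sampling_rate).astype(np.float32)

inputs = processor(
    text=labels, audios=audio, sampling_rate=sampling_rate,
    padding=True, return_tensors="pt"
)
with torch.no_grad():
    outputs = model(**inputs)

# one row per clip, one column per caption; softmax turns logits into probabilities
probs = outputs.logits_per_audio.softmax(dim=-1)
print(probs.shape)  # torch.Size([1, 2])
```

The caption with the highest probability is the predicted label, with no task-specific fine-tuning involved.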

You can find all the original CLAP checkpoints under the CLAP collection.

This model was contributed by ybelkada and ArthurZ.

Click on the CLAP models in the right sidebar for more examples of how to apply CLAP to different audio retrieval and classification tasks.

The example below demonstrates how to extract text embeddings with the [AutoModel] class.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained("laion/clap-htsat-unfused", dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused")

texts = ["the sound of a cat", "the sound of a dog", "music playing"]

inputs = tokenizer(texts, padding=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    text_features = model.get_text_features(**inputs)

print(f"Text embeddings shape: {text_features.shape}")
print(f"Text embeddings: {text_features}")
```
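Audio embeddings can be extracted the same way through the processor and `get_audio_features`. As a sketch (not part of the original doc), the synthetic sine wave below stands in for a real recording; CLAP's feature extractor assumes 48 kHz mono input.

```python
import numpy as np
import torch
from transformers import AutoModel, AutoProcessor

model = AutoModel.from_pretrained("laion/clap-htsat-unfused")
processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused")

# a 1-second 440 Hz sine wave as a stand-in for a real clip (assumption: 48 kHz mono)
sampling_rate = 48_000
t = np.linspace(0.0, 1.0, sampling_rate, dtype=np.float32)
audio = np.sin(2 * np.pi * 440.0 * t)

inputs = processor(audios=audio, sampling_rate=sampling_rate, return_tensors="pt")

with torch.no_grad():
    audio_features = model.get_audio_features(**inputs)

print(f"Audio embeddings shape: {audio_features.shape}")
```

The resulting audio embeddings live in the same space as the text embeddings above, so cosine similarity between the two directly measures audio-text agreement.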

ClapConfig

[[autodoc]] ClapConfig

ClapTextConfig

[[autodoc]] ClapTextConfig

ClapAudioConfig

[[autodoc]] ClapAudioConfig

ClapFeatureExtractor

[[autodoc]] ClapFeatureExtractor

ClapProcessor

[[autodoc]] ClapProcessor
    - __call__

ClapModel

[[autodoc]] ClapModel
    - forward
    - get_text_features
    - get_audio_features

ClapTextModel

[[autodoc]] ClapTextModel
    - forward

ClapTextModelWithProjection

[[autodoc]] ClapTextModelWithProjection
    - forward

ClapAudioModel

[[autodoc]] ClapAudioModel
    - forward

ClapAudioModelWithProjection

[[autodoc]] ClapAudioModelWithProjection
    - forward