How to use Simon-Kotchou/ssast-tiny-patch-audioset-16-16 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("audio-classification", model="Simon-Kotchou/ssast-tiny-patch-audioset-16-16")

# Or load the feature extractor and model directly
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

extractor = AutoFeatureExtractor.from_pretrained("Simon-Kotchou/ssast-tiny-patch-audioset-16-16")
model = AutoModelForAudioClassification.from_pretrained("Simon-Kotchou/ssast-tiny-patch-audioset-16-16")
```

Self-Supervised Audio Spectrogram Transformer (SSAST) model with an uninitialized classifier head. It was introduced in the paper SSAST: Self-Supervised Audio Spectrogram Transformer by Gong et al. and first released in this repository.
Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model.
The Audio Spectrogram Transformer is architecturally equivalent to the Vision Transformer (ViT), but applied to audio. The audio waveform is first turned into an image (a spectrogram), after which a Vision Transformer is applied. The model achieves state-of-the-art results on several audio classification benchmarks.
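To make the "audio as an image" idea concrete, the following is a minimal NumPy-only sketch (not the model's actual preprocessing; the window, hop, and FFT sizes here are illustrative assumptions) showing how a waveform becomes a 2-D log-magnitude spectrogram and how that spectrogram can be cut into 16x16 patches, mirroring the 16-16 patch size in this model's name:

```python
import numpy as np

# Synthetic 1-second, 16 kHz waveform (a 440 Hz tone) standing in for real audio
sr = 16000
t = np.arange(sr) / sr
waveform = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform via framed, windowed FFTs
# (window 400, hop 160, n_fft 512 are illustrative choices)
win, hop, n_fft = 400, 160, 512
frames = np.stack([waveform[i:i + win] * np.hanning(win)
                   for i in range(0, len(waveform) - win, hop)])
spec = np.abs(np.fft.rfft(frames, n=n_fft))   # shape: (n_frames, n_fft // 2 + 1)
log_spec = np.log(spec + 1e-6)                # log-magnitude "image"

# Crop both axes to multiples of 16, then split into 16x16 patches
# exactly as a ViT-style patch embedding would
h = (log_spec.shape[0] // 16) * 16
w = (log_spec.shape[1] // 16) * 16
img = log_spec[:h, :w]
patches = (img.reshape(h // 16, 16, w // 16, 16)
              .transpose(0, 2, 1, 3)
              .reshape(-1, 16, 16))
print(img.shape, patches.shape)
```

Each 16x16 patch is then linearly projected to a token and fed through a standard Transformer encoder; in the real model this preprocessing is handled by the feature extractor loaded above.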