Dataset: MahiA/CREMA-D
How to use Adam-ousse/ast-cremad-finetuned with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("audio-classification", model="Adam-ousse/ast-cremad-finetuned")
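The pipeline accepts a local file path or URL (audio is decoded via ffmpeg) and returns label/score pairs sorted by score. A minimal usage sketch, assuming a hypothetical file path/to/clip.wav:

result = pipe("path/to/clip.wav")
print(result)  # a list of {"label": ..., "score": ...} dicts, highest score first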
# Load model directly
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification
extractor = AutoFeatureExtractor.from_pretrained("Adam-ousse/ast-cremad-finetuned")
model = AutoModelForAudioClassification.from_pretrained("Adam-ousse/ast-cremad-finetuned")

This model is an ASTForAudioClassification checkpoint fine-tuned from MIT/ast-finetuned-audioset-10-10-0.4593. A full inference example:
import numpy as np
import torch
from transformers import AutoFeatureExtractor, ASTForAudioClassification
model_id = "Adam-ousse/ast-cremad-finetuned"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = ASTForAudioClassification.from_pretrained(model_id)
model.eval()
# waveform: 1D float32 NumPy array sampled at 16 kHz
waveform = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence; use real audio here

# The AST feature extractor pads/truncates the spectrogram to a fixed length internally
inputs = feature_extractor([waveform], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = int(torch.argmax(logits, dim=1)[0])
print(model.config.id2label[pred])
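For real recordings, the input must be mono float32 audio at 16 kHz. A minimal preparation sketch, assuming a hypothetical file path and using librosa (not required by the snippet above):

import librosa

# librosa.load resamples to the requested rate and returns mono float32 audio
waveform, sr = librosa.load("path/to/clip.wav", sr=16000, mono=True)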
The model config includes the classification head's label mapping (id2label) and records the base model:

Base model: MIT/ast-finetuned-audioset-10-10-0.4593
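To see which emotion labels the checkpoint was fine-tuned with, the mapping can be read straight from the config (a quick inspection sketch; the exact label set is whatever the checkpoint defines):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("Adam-ousse/ast-cremad-finetuned")
print(config.id2label)  # index -> emotion label mapping used by the classification head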