How to use `vonewman/mind_audio_classification_model` with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("audio-classification", model="vonewman/mind_audio_classification_model")
```
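Once the pipeline is built, it can be called on a file path or on a raw waveform. A minimal sketch, assuming the model expects 16 kHz mono audio (typical for Wav2Vec2-style audio classifiers; check the checkpoint's feature-extractor config to confirm):

```python
import numpy as np
from transformers import pipeline

pipe = pipeline("audio-classification", model="vonewman/mind_audio_classification_model")

# 1 second of a 440 Hz tone as a stand-in input; any mono float32
# array at the model's sampling rate works the same way.
sr = 16000
waveform = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

# Passing a dict lets the pipeline resample if sampling_rate differs
# from what the model was trained on.
preds = pipe({"array": waveform, "sampling_rate": sr})
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```

The pipeline also accepts a path to an audio file directly, e.g. `pipe("clip.wav")`.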
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForAudioClassification

processor = AutoProcessor.from_pretrained("vonewman/mind_audio_classification_model")
model = AutoModelForAudioClassification.from_pretrained("vonewman/mind_audio_classification_model")
```
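Loading the model directly gives access to the raw logits. A sketch of the manual forward pass, assuming 16 kHz input and that the checkpoint's config carries the standard `id2label` mapping (both are assumptions, not verified against this particular model):

```python
import numpy as np
import torch
from transformers import AutoProcessor, AutoModelForAudioClassification

model_id = "vonewman/mind_audio_classification_model"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# 1 second of silence as a placeholder waveform (assumed 16 kHz).
sr = 16000
waveform = np.zeros(sr, dtype=np.float32)

inputs = processor(waveform, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and look up the predicted label.
probs = logits.softmax(dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```

This is the same computation the pipeline performs internally, minus its input decoding and resampling conveniences.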