speechdata/speech-or-sound
How to use speechdata/speech-or-sound with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("audio-classification", model="speechdata/speech-or-sound")

# Or load the processor and model directly
from transformers import AutoProcessor, AutoModelForAudioClassification

processor = AutoProcessor.from_pretrained("speechdata/speech-or-sound")
model = AutoModelForAudioClassification.from_pretrained("speechdata/speech-or-sound")

Private for now; more details coming soon.
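Once access is granted, the pipeline above can classify raw audio arrays directly. A minimal sketch, assuming 16 kHz mono input (the sampling rate Whisper-family models expect); the inference call itself is commented out because the model is gated:

```python
import numpy as np

# One second of dummy 16 kHz mono audio (a 440 Hz sine wave) standing in
# for a real recording loaded with e.g. librosa or soundfile
sr = 16_000
waveform = np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

# With gated access granted, the pipeline accepts a dict with the raw
# array and its sampling rate:
# from transformers import pipeline
# pipe = pipeline("audio-classification", model="speechdata/speech-or-sound")
# preds = pipe({"array": waveform, "sampling_rate": sr})
# preds is a list of {"label": ..., "score": ...} dicts sorted by score

print(waveform.shape)  # one second of samples at 16 kHz
```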
This is a very experimental model, so please DM me on X for access: https://x.com/realmrfakename
Base model
openai/whisper-tiny
# Gated model: log in with a Hugging Face token that has gated-access permission:
hf auth login