Tags: Automatic Speech Recognition · Transformers · PyTorch · TensorFlow · JAX · Safetensors · whisper · audio · hf-asr-leaderboard · Eval Results (legacy)
Instructions for using openai/whisper-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openai/whisper-medium with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("openai/whisper-medium")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-medium")
```

- Notebooks
- Google Colab
- Kaggle
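Whisper checkpoints expect 16 kHz mono audio. The Transformers pipeline decodes and resamples audio files for you, but if you pass a raw array yourself it should already be at 16 kHz. As a minimal sketch of that preprocessing step, here is a simple linear-interpolation resampler using numpy (the helper name `resample_to_16k` is illustrative, not part of any library):

```python
import numpy as np

def resample_to_16k(audio: np.ndarray, orig_sr: int, target_sr: int = 16_000) -> np.ndarray:
    """Linearly interpolate a mono signal from orig_sr to target_sr."""
    if orig_sr == target_sr:
        return audio
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    old_t = np.linspace(0.0, duration, num=len(audio), endpoint=False)
    new_t = np.linspace(0.0, duration, num=n_target, endpoint=False)
    return np.interp(new_t, old_t, audio)

# One second of a 440 Hz tone at 44.1 kHz, resampled to 16 kHz
tone = np.sin(2 * np.pi * 440 * np.arange(44_100) / 44_100)
resampled = resample_to_16k(tone, orig_sr=44_100)
print(len(resampled))  # 16000
```

For production use, a band-limited resampler (e.g. `librosa.resample` or `torchaudio.transforms.Resample`) gives better quality than linear interpolation; this sketch only illustrates the sampling-rate requirement.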
Discussion #34: VOLUME
opened by alfonsofr
Q: Could Whisper's integration in the Android app get a louder output, and if so, when?
I have noticed that Whisper on the OpenAI ChatGPT app for Android has really low output volume, especially compared with louder alternatives such as Pi.ai. In fact, it is so quiet that even at the maximum volume setting it is hard to hear, and therefore hard to interact with. I don't know if this is the right place to ask this question; if not, please point me in the right direction.