Automatic Speech Recognition
Transformers
PyTorch
TensorFlow
JAX
Safetensors
whisper
audio
hf-asr-leaderboard
Eval Results (legacy)
Instructions to use openai/whisper-medium with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openai/whisper-medium with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("openai/whisper-medium")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-medium")
```

- Notebooks
- Google Colab
- Kaggle
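Whisper models operate on 16 kHz mono audio. As a minimal sketch of preparing input for the pipeline above, the snippet below builds a synthetic one-second 440 Hz tone in the `{"raw": ..., "sampling_rate": ...}` dict format the ASR pipeline accepts (the tone and the variable names are illustrative; the pipeline also accepts a file path or a raw NumPy array):

```python
import numpy as np

# Whisper expects 16 kHz mono audio. Build a one-second 440 Hz test tone
# in the dict format accepted by the automatic-speech-recognition pipeline.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
sample = {
    "raw": (0.5 * np.sin(2 * np.pi * 440.0 * t)).astype(np.float32),
    "sampling_rate": sr,
}
# result = pipe(sample)  # not run here, to avoid downloading the model
```

Passing audio at a different sampling rate works too; the pipeline resamples it to 16 kHz internally.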
Update config.json to suppress task tokens
#13 · opened by guillaumekln
- config.json +2 -0
config.json CHANGED

```diff
@@ -131,6 +131,8 @@
     49870,
     50254,
     50258,
+    50358,
+    50359,
     50360,
     50361,
     50362
```
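Per the PR title, the change adds the task token ids 50358 and 50359 to the `suppress_tokens` list in config.json. During decoding, suppressed ids have their scores forced to negative infinity so they can never be emitted. A minimal sketch of that mechanism (toy scores, not the actual Transformers implementation):

```python
import math

def suppress_tokens(scores, suppress_ids):
    """Force the given token ids to -inf so argmax/sampling never picks them."""
    out = list(scores)
    for i in suppress_ids:
        out[i] = -math.inf
    return out

# Toy example: token 1 has the highest score, but it is suppressed,
# so the next-best token (3) is chosen instead.
scores = [0.1, 2.0, 0.5, 1.5]
filtered = suppress_tokens(scores, [1])
best = max(range(len(filtered)), key=filtered.__getitem__)
```

Extending `suppress_tokens` in the config changes the model's default generation behavior without retraining; any id on the list is masked out at every decoding step.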