Automatic Speech Recognition
Transformers
PyTorch
TensorFlow
JAX
Safetensors
whisper
audio
hf-asr-leaderboard
Eval Results (legacy)
Instructions for using openai/whisper-medium with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use openai/whisper-medium with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("openai/whisper-medium")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-medium")
```

- Notebooks
- Google Colab
- Kaggle
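Whisper checkpoints expect 16 kHz mono audio, so audio recorded at another rate should be resampled before being passed to the pipeline or processor. As a minimal illustrative sketch (the `resample_linear` helper is hypothetical and uses simple linear interpolation, not the higher-quality resampling a library such as `librosa` or `torchaudio` would provide):

```python
import numpy as np

def resample_linear(audio: np.ndarray, src_rate: int, dst_rate: int = 16_000) -> np.ndarray:
    """Resample a 1-D waveform via linear interpolation (sketch, not production-grade)."""
    if src_rate == dst_rate:
        return audio
    duration = audio.shape[0] / src_rate          # length of the clip in seconds
    n_out = int(round(duration * dst_rate))       # number of output samples
    src_t = np.arange(audio.shape[0]) / src_rate  # timestamps of input samples
    dst_t = np.arange(n_out) / dst_rate           # timestamps of output samples
    return np.interp(dst_t, src_t, audio)

# One second of 44.1 kHz audio becomes 16 000 samples at 16 kHz.
wave = np.zeros(44_100, dtype=np.float32)
print(resample_linear(wave, 44_100).shape)  # → (16000,)
```

The resampled array can then be passed directly, e.g. `pipe({"array": audio, "sampling_rate": 16_000})`.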
Correct long-form generation config parameters 'max_initial_timestamp_index' and 'prev_sot_token_id'.
#32
by patrickvonplaten
- generation_config.json +2 -1
generation_config.json CHANGED:

```diff
@@ -144,10 +144,11 @@
     "<|yo|>": 50325,
     "<|zh|>": 50260
   },
-  "max_initial_timestamp_index":
+  "max_initial_timestamp_index": 50,
   "max_length": 448,
   "no_timestamps_token_id": 50363,
   "pad_token_id": 50257,
+  "prev_sot_token_id": 50361,
   "return_timestamps": false,
   "suppress_tokens": [
     1,
```
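The diff sets two long-form generation parameters, `max_initial_timestamp_index` and `prev_sot_token_id`. As a minimal sketch of applying the same fix to a local copy of the config (the dict below is a hypothetical, truncated stand-in for the full generation_config.json):

```python
import json

# Hypothetical truncated copy of generation_config.json, keeping only nearby keys.
config = {
    "max_length": 448,
    "no_timestamps_token_id": 50363,
    "pad_token_id": 50257,
    "return_timestamps": False,
}

# Apply the PR's correction: the two long-form generation parameters from the diff.
config["max_initial_timestamp_index"] = 50
config["prev_sot_token_id"] = 50361

print(json.dumps(config, indent=2, sort_keys=True))
```

Transformers reads these keys when generating: `max_initial_timestamp_index` bounds the first timestamp token the model may emit in a segment, and `prev_sot_token_id` identifies the token used to condition on previously transcribed text during long-form decoding.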