Automatic Speech Recognition · Transformers · PyTorch · TensorFlow · JAX · Safetensors · whisper · audio · hf-asr-leaderboard · Eval Results (legacy)
Instructions for using openai/whisper-medium with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use openai/whisper-medium with Transformers (a quick usage example follows the notebook links below):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-medium")

# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("openai/whisper-medium")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-medium")
```

- Notebooks
- Google Colab
- Kaggle
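Once the pipeline above is constructed, transcription is a single call. A minimal sketch, assuming a local 16 kHz audio file (the file name "sample.wav" is a placeholder):

```python
# Transcribe a local audio file with the ASR pipeline built above.
# "sample.wav" is a hypothetical path; any audio file or raw waveform array works.
result = pipe("sample.wav")
print(result["text"])  # the ASR pipeline returns a dict containing the transcription
```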
missing bracket #2
opened by rm567
README.md CHANGED

```diff
@@ -213,7 +213,7 @@ The "<|en|>" token is used to specify that the speech is in english and should b
 >>> input_features = processor(ds[0]["audio"]["array"], return_tensors="pt").input_features
 
 >>> # Generate logits
->>> logits = model(input_features, decoder_input_ids = torch.tensor([[50258]]).logits
+>>> logits = model(input_features, decoder_input_ids = torch.tensor([[50258]])).logits
 >>> # take argmax and decode
 >>> predicted_ids = torch.argmax(logits, dim=-1)
 >>> transcription = processor.batch_decode(predicted_ids)
```
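For context, here is a self-contained sketch of the corrected snippet, runnable end to end. The dataset name and the decoder start token id 50258 come from the surrounding model card example; the exact loading details here are assumptions, not the card's verbatim text:

```python
import torch
from datasets import load_dataset
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Load processor and model (the diff above is against the openai/whisper-medium card)
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")

# Small ASR sample; this dummy LibriSpeech split is commonly used in Whisper doc examples
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# Convert raw audio to log-Mel input features
input_features = processor(
    ds[0]["audio"]["array"],
    sampling_rate=ds[0]["audio"]["sampling_rate"],
    return_tensors="pt",
).input_features

# Forward pass with a forced decoder start token; note the closing parenthesis
# before .logits, which is exactly what this PR adds
logits = model(input_features, decoder_input_ids=torch.tensor([[50258]])).logits

# Take the argmax over the vocabulary and decode to text
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```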