Instructions to use optimum-internal-testing/tiny-random-whisper with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use optimum-internal-testing/tiny-random-whisper with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="optimum-internal-testing/tiny-random-whisper")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("optimum-internal-testing/tiny-random-whisper")
model = AutoModelForSpeechSeq2Seq.from_pretrained("optimum-internal-testing/tiny-random-whisper")
```

- Notebooks
- Google Colab
- Kaggle
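The pipeline above can be exercised end to end with a synthetic waveform; a minimal sketch, assuming a one-second silent array at 16 kHz (both illustrative choices, not from the model card):

```python
import numpy as np
from transformers import pipeline

# Build the ASR pipeline from the tiny random checkpoint
pipe = pipeline("automatic-speech-recognition", model="optimum-internal-testing/tiny-random-whisper")

# One second of silence at 16 kHz, the sampling rate Whisper models expect
waveform = np.zeros(16000, dtype=np.float32)

# The ASR pipeline accepts a dict carrying the raw waveform and its sampling rate
result = pipe({"raw": waveform, "sampling_rate": 16000})
print(result)
```

Because the checkpoint has random weights, the transcription text is meaningless; the call is useful only to verify that the processor and model load and run.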
Upload processor
preprocessor_config.json (+1, -0)

```diff
@@ -1,5 +1,6 @@
 {
   "chunk_length": 30,
+  "dither": 0.0,
   "feature_extractor_type": "WhisperFeatureExtractor",
   "feature_size": 80,
   "hop_length": 160,
```
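The commit adds a `dither` key to the preprocessor config. As a sketch, the updated config can be mirrored by constructing a `WhisperFeatureExtractor` directly with the values shown in the diff (instantiating locally instead of calling `from_pretrained` is an illustrative shortcut, not how the hub config is normally loaded):

```python
from transformers import WhisperFeatureExtractor

# Values taken from the diff above; extra config keys such as "dither"
# become attributes on the feature extractor
fe = WhisperFeatureExtractor(
    feature_size=80,
    chunk_length=30,
    hop_length=160,
    dither=0.0,
)
print(fe.dither)        # 0.0 — no dithering noise is added to the input audio
print(fe.chunk_length)  # 30
```

A `dither` of 0.0 keeps feature extraction deterministic, which matters for a testing checkpoint like this one.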