Instructions to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)

# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
model = AutoModelForSpeechSeq2Seq.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
```
- Notebooks
- Google Colab
- Kaggle
Update config.json
config.json (+3 -0)
```diff
@@ -81,6 +81,9 @@
     "vocab_size": 100353
   },
   "tie_word_embeddings": false,
+  "transformers.js_config": {
+    "use_external_data_format": true
+  },
   "transformers_version": "5.3.0.dev0",
   "window_size": 3
 }
```
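The commit above adds a `transformers.js_config` block to `config.json`. A minimal sketch of applying the same edit in Python (the surrounding keys are copied from the diff; this builds the fragment in memory rather than editing the real file):

```python
import json

# Fragment of config.json around the edited region, as shown in the diff.
config = {
    "tie_word_embeddings": False,
    "transformers_version": "5.3.0.dev0",
    "window_size": 3,
}

# The key added by this commit. use_external_data_format signals that the
# ONNX weights live in a separate external-data file rather than being
# embedded in the .onnx graph itself.
config["transformers.js_config"] = {"use_external_data_format": True}

print(json.dumps(config, indent=2))
```

Serializing with `json.dumps` keeps the output valid JSON (`False`/`True` become `false`/`true`), matching the format of the committed file.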