Instructions for using onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained(
    "onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration"
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration"
)
```

- Notebooks
- Google Colab
- Kaggle
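The pipeline snippet above can be exercised end to end with a synthetic input. A minimal sketch, assuming the checkpoint is reachable from the Hugging Face Hub and that one second of silent 16 kHz mono audio is an acceptable stand-in for real speech:

```python
import numpy as np
from transformers import pipeline

# Load the ASR pipeline for the tiny random test checkpoint
# (assumption: the model can be downloaded from the Hub)
pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)

# One second of silence at 16 kHz stands in for real speech audio
waveform = np.zeros(16000, dtype=np.float32)
result = pipe({"raw": waveform, "sampling_rate": 16000})
print(result["text"])
```

Because the checkpoint has random weights, the transcription is meaningless; the call only verifies that model loading and the inference wiring work.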