Instructions for using IABDs8a/MODELO1_EQUIPO2 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use IABDs8a/MODELO1_EQUIPO2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="IABDs8a/MODELO1_EQUIPO2")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("IABDs8a/MODELO1_EQUIPO2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("IABDs8a/MODELO1_EQUIPO2")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: 535b0530d8e6ccf11c725925dd4a17482f9ebfe59e3cc4c5521bcbffc3411ecc
- Size of remote file: 967 MB
- SHA256: 0067e57f35c688501a5a3a0c3647eb5de62bd48355c5c1e07cc49cf93744fb6e
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.