Instructions for using google/switch-xxl-128 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google/switch-xxl-128 with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/switch-xxl-128")
model = AutoModelForSeq2SeqLM.from_pretrained("google/switch-xxl-128")
```

- Notebooks
- Google Colab
- Kaggle
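switch-xxl-128 is a Switch Transformer, a Mixture-of-Experts model whose feed-forward layers route each token to one of 128 experts. As a rough illustration of that top-1 routing idea (a plain-Python sketch, not the model's actual implementation; the router weights and dimensions here are made up for the example):

```python
import math
import random

# Sketch of Switch-style top-1 expert routing, the idea behind the
# "128" in switch-xxl-128. Hypothetical toy dimensions, not the real model.
NUM_EXPERTS = 128
DIM = 8

random.seed(0)

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Router: one weight vector per expert; logit_e = dot(token, w_e).
router = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def route(token):
    """Return (expert_index, gate) for a single token vector.

    The token is sent only to the argmax expert; the gate (its softmax
    probability) scales that expert's output in a real Switch layer.
    """
    logits = [sum(t * w for t, w in zip(token, we)) for we in router]
    probs = softmax(logits)
    expert = max(range(NUM_EXPERTS), key=lambda e: probs[e])
    return expert, probs[expert]

token = [random.gauss(0, 1) for _ in range(DIM)]
expert, gate = route(token)
print(f"token routed to expert {expert} with gate {gate:.4f}")
```

Because only one expert runs per token, compute per token stays roughly constant as the expert count (and total parameter count) grows, which is why the full checkpoint is so large while inference cost per token is not proportionally higher.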
(Repository file listing: the checkpoint is sharded across many weight files, most around 10.7 GB each, plus smaller configuration and tokenizer files.)