Instructions to use google/switch-base-256 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/switch-base-256 with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/switch-base-256")
model = AutoModelForSeq2SeqLM.from_pretrained("google/switch-base-256")
```
- Notebooks
- Google Colab
- Kaggle
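A minimal usage sketch to go with the loading snippet above. Switch Transformers checkpoints use T5's tokenizer and were pretrained with the span-corruption objective, so prompts typically contain sentinel tokens like `<extra_id_0>`. The example input string is an illustration, not from the model card, and the full `google/switch-base-256` checkpoint (256 experts per MoE layer) needs tens of GB of memory, so the generation call is shown as commented guidance rather than run:

```python
from transformers import AutoTokenizer

# The tokenizer download is small; the model weights are not.
tokenizer = AutoTokenizer.from_pretrained("google/switch-base-256")

# T5-style span corruption: <extra_id_0> marks the span to fill in.
inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")

# With sufficient memory, generation follows the usual seq2seq pattern:
# from transformers import AutoModelForSeq2SeqLM
# model = AutoModelForSeq2SeqLM.from_pretrained("google/switch-base-256")
# outputs = model.generate(**inputs, max_new_tokens=20)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))

print(inputs["input_ids"].shape)  # a (1, sequence_length) tensor
```

On machines with less memory, `from_pretrained` also accepts `device_map="auto"` and reduced-precision `torch_dtype` arguments to spread or shrink the checkpoint.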
Commit History
- Adding generation config file(s) (cdac172)
- Update README.md (a3b1fc6)
- Upload tokenizer (b44bedb)
- add model weights (038d8e2, committed by ybelkada)
- Create README.md (81e83ae)
- initial commit (af17016, committed by Younes Belkada)