How to use norkart/mt5-large-no with Transformers:
```python
# Load the model and tokenizer directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("norkart/mt5-large-no")
model = AutoModelForSeq2SeqLM.from_pretrained("norkart/mt5-large-no")
```
This is a pruned version of the google/mt5-large model: the input and output embeddings are pruned to support a greatly reduced vocabulary.
The chosen vocabulary has 30K Norwegian, English, and special tokens, roughly 12% of the original vocabulary size. This reduces the model size by roughly 37%.
The model still performs reasonably on closely related languages such as German and Danish, but very different languages such as Arabic are no longer a good fit.
This model is intended as a starting point for fine-tuning mT5 for Norwegian applications.
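The numbers above can be sanity-checked with a quick back-of-the-envelope calculation. The sketch below assumes mT5-large's published dimensions (`d_model` = 1024, a 250,112-token SentencePiece vocabulary, ~1.23B total parameters) and mT5's untied input embedding and LM-head matrices; the exact figures for this pruned checkpoint may differ slightly.

```python
# Back-of-the-envelope check of the vocabulary-pruning savings.
# Assumed figures from the google/mt5-large config (not read from this checkpoint):
D_MODEL = 1024          # hidden size
OLD_VOCAB = 250_112     # original SentencePiece vocabulary
NEW_VOCAB = 30_000      # pruned Norwegian/English vocabulary
TOTAL_PARAMS = 1.23e9   # approximate total parameter count of mt5-large

# mT5 keeps separate input-embedding and LM-head matrices, so pruning
# the vocabulary shrinks two (vocab x d_model) matrices.
old_embed = 2 * OLD_VOCAB * D_MODEL
new_embed = 2 * NEW_VOCAB * D_MODEL
saved = old_embed - new_embed

print(f"vocabulary kept:      {NEW_VOCAB / OLD_VOCAB:.0%}")   # ~12%
print(f"model size reduction: {saved / TOTAL_PARAMS:.0%}")    # ~37%
```

Under these assumptions the two published figures (~12% of the old vocabulary, ~37% smaller model) fall out directly from the embedding-matrix arithmetic.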