Instructions for using google/flan-ul2 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/flan-ul2 with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2")
```

- Notebooks
- Google Colab
- Kaggle
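Once the tokenizer and model are loaded, inference follows the usual encode → generate → decode pattern for seq2seq models. A minimal sketch (the `run_flan_ul2` helper name, the prompt, and the `max_new_tokens` value are illustrative choices, not from the model card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def run_flan_ul2(prompt: str, max_new_tokens: int = 64) -> str:
    """Encode a prompt, generate with google/flan-ul2, decode the output."""
    tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Note that google/flan-ul2 is a large checkpoint; downloading and running it locally requires substantial disk space and memory.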
Set tokenizer model_max_length to 2048 (#10)
- Set tokenizer model_max_length to 2048 (880eaf54fcbcfb87e119a8c324ea7b98b17a747b)
Co-authored-by: Joao Gante <joaogante@users.noreply.huggingface.co>
tokenizer_config.json CHANGED (+1 -1)

```diff
@@ -103,7 +103,7 @@
   ],
   "eos_token": "</s>",
   "extra_ids": 100,
-  "model_max_length":
+  "model_max_length": 2048,
   "name_or_path": "google/ul2",
   "pad_token": "<pad>",
   "special_tokens_map_file": null,
```
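The effect of this change is that `tokenizer.model_max_length` is capped at 2048, the length tokenizers fall back to when called with `truncation=True` and no explicit `max_length`. A minimal pure-Python sketch of that truncation behavior (the `truncate_to_max_length` helper is illustrative, not part of Transformers):

```python
def truncate_to_max_length(token_ids, model_max_length=2048):
    """Mimic tokenizer truncation: keep at most model_max_length token ids."""
    if len(token_ids) <= model_max_length:
        return token_ids
    return token_ids[:model_max_length]


# A 3000-token input is cut down to the 2048-token cap;
# a shorter input passes through unchanged.
long_ids = list(range(3000))
short_ids = list(range(100))
print(len(truncate_to_max_length(long_ids)))   # 2048
print(len(truncate_to_max_length(short_ids)))  # 100
```

Without this config entry, the tokenizer would report an effectively unbounded `model_max_length` and never truncate by default.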