Instructions to use google/flan-t5-small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

- Libraries
  - Transformers

How to use google/flan-t5-small with Transformers:

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
```

- Notebooks
  - Google Colab
  - Kaggle
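As a sketch of how the loaded model can then be used for inference (the prompt and generation settings below are illustrative examples, not taken from the model card):

```python
# Load the tokenizer and model, then run a short generation.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# FLAN-T5 is instruction-tuned, so a plain-text task prompt works.
inputs = tokenizer("translate English to German: How old are you?",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
result = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(result)
```

Because this is a seq2seq model, `generate` produces only the decoder output; `skip_special_tokens=True` strips the padding and end-of-sequence markers before decoding.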
Update config.json for flan-t5-small
#13
by petermca - opened
config.json CHANGED (+2 -2)

```diff
@@ -15,8 +15,8 @@
   "model_type": "t5",
   "n_positions": 512,
   "num_decoder_layers": 8,
-  "num_heads":
-  "num_layers":
+  "num_heads": 8,
+  "num_layers": 6,
   "output_past": true,
   "pad_token_id": 0,
   "relative_attention_max_distance": 128,
```
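One way to check which hyperparameter values the published revision actually carries is to load the config directly, without downloading the weights. This is a sketch; it fetches whatever `config.json` is currently on the Hub, which may or may not reflect this PR:

```python
# Inspect the hosted config.json without loading model weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/flan-t5-small")
print(config.model_type)                   # architecture family, "t5" per the diff
print(config.num_heads, config.num_layers) # the two fields this PR touches
```

`AutoConfig` resolves `model_type` to a `T5Config`, whose `num_heads` and `num_layers` attributes map one-to-one onto the JSON keys edited above.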