Instructions to use google/flan-t5-small with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/flan-t5-small with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
```

- Notebooks
- Google Colab
- Kaggle
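Once the tokenizer and model are loaded, text is generated by tokenizing a prompt and calling `generate`. A minimal sketch, assuming the standard Transformers seq2seq API (the prompt and `max_new_tokens` value are illustrative, not from the original):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the FLAN-T5 small checkpoint
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

# Encode an instruction-style prompt and generate a response
inputs = tokenizer("Translate English to German: How old are you?",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```

FLAN-T5 is instruction-tuned, so plain natural-language prompts like the one above work without task-specific prefixes beyond the instruction itself.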
Update README.md
#3
by ybelkada - opened
README.md
CHANGED

```diff
@@ -288,7 +288,7 @@ The model has been trained on TPU v3 or TPU v4 pods, using [`t5x`](https://githu
 ## Testing Data, Factors & Metrics
 
 The authors evaluated the model on various tasks covering several languages (1836 in total). See the table below for some quantitative evaluation:
-.
+[image: quantitative evaluation table]
 
 For full details, please check the [research paper](https://arxiv.org/pdf/2210.11416.pdf).
 
 ## Results
```