Instructions for using AbstractPhil/T5-Small-Human-Attentive with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use AbstractPhil/T5-Small-Human-Attentive with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AbstractPhil/T5-Small-Human-Attentive")
model = AutoModelForSeq2SeqLM.from_pretrained("AbstractPhil/T5-Small-Human-Attentive")
```
A short generation sketch follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
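Once the model and tokenizer are loaded, inference works like any other T5-style seq2seq model. The sketch below is a minimal example, assuming the model follows standard T5 text-to-text conventions; the `summarize:` prompt prefix is illustrative, not confirmed by this model's card.

```python
# Minimal generation sketch. Assumes standard T5 text-to-text usage;
# the "summarize:" task prefix is an illustrative assumption.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("AbstractPhil/T5-Small-Human-Attentive")
model = AutoModelForSeq2SeqLM.from_pretrained("AbstractPhil/T5-Small-Human-Attentive")

# Tokenize an input string and generate an output sequence.
inputs = tokenizer(
    "summarize: The quick brown fox jumps over the lazy dog.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```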