Instructions to use declare-lab/flan-alpaca-base with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use declare-lab/flan-alpaca-base with Transformers:

```python
# Load the tokenizer and seq2seq model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("declare-lab/flan-alpaca-base")
model = AutoModelForSeq2SeqLM.from_pretrained("declare-lab/flan-alpaca-base")
```

- Notebooks
  - Google Colab
  - Kaggle
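Once the model and tokenizer are loaded, generation follows the standard seq2seq pattern: tokenize a prompt, call `generate`, and decode the result. A minimal sketch (the prompt text and `max_new_tokens` value are illustrative, not prescribed by the model card):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("declare-lab/flan-alpaca-base")
model = AutoModelForSeq2SeqLM.from_pretrained("declare-lab/flan-alpaca-base")

# Tokenize an instruction-style prompt into model inputs
prompt = "Write a short note about alpacas."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a response and decode it back to text
output_ids = model.generate(**inputs, max_new_tokens=64)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Since Flan-Alpaca is instruction-tuned, plain natural-language instructions like the one above work as prompts without any special formatting.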
Commit 581a5ff (parent 840b479): Update README.md
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines

📣 **FLAN-T5** is also useful in text-to-audio generation. Find our work at [https://github.com/declare-lab/tango](https://github.com/declare-lab/tango) if you are interested.

Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
We have a [live interactive demo](https://huggingface.co/spaces/joaogante/transformers_streaming) thanks to [Joao Gante](https://huggingface.co/joaogante)!