Commit 963dd3f · Parent(s): cf2363e · Update README.md

README.md
## 🍮 🦙 Flan-Alpaca: Instruction Tuning from Humans and Machines
📣 Curious about the performance of 🍮 🦙 **Flan-Alpaca** on the large-scale LLM evaluation benchmark **InstructEval**? Read our paper: [https://arxiv.org/pdf/2306.04757.pdf](https://arxiv.org/pdf/2306.04757.pdf). We evaluated more than 10 open-source instruction-tuned LLMs from various LLM families, including Pythia, LLaMA, T5, UL2, OPT, and Mosaic. Code and datasets: [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval)

Our [repository](https://github.com/declare-lab/flan-alpaca) contains code for extending the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca)
synthetic instruction tuning to existing instruction-tuned models such as [Flan-T5](https://arxiv.org/abs/2210.11416).
The pretrained models and demos are available on HuggingFace 🤗:
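A checkpoint like this can be loaded with the standard `transformers` text-to-text generation pipeline. A minimal sketch, assuming the `declare-lab/flan-alpaca-base` checkpoint name from the declare-lab HuggingFace organization:

```python
from transformers import pipeline

# Checkpoint name assumed; declare-lab publishes several Flan-Alpaca sizes on HuggingFace
generator = pipeline("text2text-generation", model="declare-lab/flan-alpaca-base")

prompt = "Write an email about an alpaca that likes flan."
# Returns a list of dicts, each with a "generated_text" key
output = generator(prompt, max_length=128, do_sample=True)
print(output[0]["generated_text"])
```

Since Flan-Alpaca is a sequence-to-sequence (T5-style) model, the `text2text-generation` task is used rather than the causal `text-generation` pipeline.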