Update README.md

README.md CHANGED

tags:
- generated_from_trainer
- trl
- sft
license: apache-2.0
datasets:
- OpenAssistant/oasst1
- allenai/c4
---

# notHumpback-M1-Rw-F-8b

This model roughly follows the Humpback architecture proposed in the paper [Self-Alignment with Instruction Backtranslation](https://arxiv.org/pdf/2308.06259) by Li et al. An additional improvement, primarily inspired by the paper [Better Alignment with Instruction Back-and-Forth Translation](https://arxiv.org/abs/2408.04614) by Nguyen et al., is added at the end of the original pipeline.

The original Humpback uses instruction backtranslation on a web corpus to generate input-output pairs (self-augmentation), creating a richer dataset for fine-tuning models without the need for additional manual annotation. For this, the documents from the web corpus are treated as hypothetical responses, for which matching instructions are then generated. A copy of the base model, instruction-tuned on a small amount of "gold" instruction-response pairs, then iteratively curates the created dataset, scoring the pairs by quality, and is then finetuned on the resulting subset of pairs with the highest possible score (self-curation).

The pipeline by Nguyen et al. adds a third step called "Rewriting": an already aligned LLM (e.g. LLaMa-2-70B-chat) is employed to rewrite the responses that have passed the filtering of the self-curation step. Rewriting improves the linguistic quality of the responses, since web-sourced texts often contain colloquialisms and stylistic noise. The final model is then finetuned on the rewritten dataset.

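As an illustration of the self-curation idea, here is a minimal sketch of how a seed model could score candidate pairs and keep only the best ones. The prompt wording, the single-digit rating scheme, and all function and variable names are assumptions made for this example, not the exact implementation behind this model.

```python
from transformers import pipeline

# Seed model acting as the judge (illustrative; generation settings are assumptions).
scorer = pipeline("text-generation", model="Alepach/notHumpback-M0")

candidate_pairs = [
    {"instruction": "Explain what instruction backtranslation is.",
     "response": "Instruction backtranslation generates an instruction for an existing web text ..."},
]

def score_pair(instruction: str, response: str) -> int:
    # Ask the seed model for a single-digit quality rating from 1 to 5.
    prompt = (
        "Rate the quality of the following instruction-response pair on a scale from 1 to 5. "
        "Answer with a single digit.\n\n"
        f"Instruction: {instruction}\nResponse: {response}\nScore:"
    )
    text = scorer(prompt, max_new_tokens=2, return_full_text=False)[0]["generated_text"]
    digits = [c for c in text if c.isdigit()]
    return int(digits[0]) if digits else 1

# Self-curation: keep only pairs that receive the highest possible score.
curated = [p for p in candidate_pairs if score_pair(p["instruction"], p["response"]) == 5]
```
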
This approach inspired me to also add a rewriting step, performed not by an already aligned external LLM, but by the ["seed model"](https://huggingface.co/Alepach/notHumpback-M0), which also performs the filtering (self-curation). This is intended to bring back the idea of "Self-Alignment", since using an external model for rewriting deviates from the "self" aspect. In my pipeline, the "self-rewriting" step is performed before self-curation, so that the quality of the pairs is assessed after rewriting; this allows more candidate pairs to be taken into consideration during filtering. That matters for making the most of the available data: some web documents have a messy structure and would be filtered out if curation came first, whereas rewriting can potentially restructure such a response, increasing its quality and its chance of being included in the final training data. This potentially allows for a larger, more diverse final training dataset.

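The reordering itself can be summarised in a few lines. In the sketch below, `rewrite_pair` and `score_pair` are placeholders for the seed-model calls described above; everything here is illustrative rather than the actual implementation.

```python
def rewrite_pair(pair: dict) -> dict:
    return pair  # placeholder: seed model rewrites the response text

def score_pair(pair: dict) -> int:
    return 5     # placeholder: seed model rates the pair from 1 to 5

candidate_pairs = [{"instruction": "...", "response": "..."}]

# Humpback / Nguyen et al.: curate first, then rewrite only the surviving pairs.
baseline = [rewrite_pair(p) for p in candidate_pairs if score_pair(p) == 5]

# This pipeline: rewrite every candidate first, then curate, so that messy but
# salvageable web documents still have a chance to enter the final dataset.
ours = [r for r in (rewrite_pair(p) for p in candidate_pairs) if score_pair(r) == 5]
```
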
This model is the result of the first iteration of that pipeline: it is trained on a small amount of gold data and on a set of generated data rewritten and curated by the ["seed model"](https://huggingface.co/Alepach/notHumpback-M0).

This model can be used for instruction-following. It may also be used to rewrite and score, once again, the instruction-response pairs generated by the ["backward model"](https://huggingface.co/Alepach/notHumpback-Myx) for a second iteration of the pipeline.

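An earlier revision of this card carried a quick-start snippet for inference; a completed version is shown below. The import, the example question, and the repository id passed to `pipeline` are filled in here and should be treated as assumptions; only the last two lines are taken verbatim from that snippet.

```python
from transformers import pipeline

# Repository id assumed from this card's title and the author's other models.
generator = pipeline("text-generation", model="Alepach/notHumpback-M1-Rw-F-8b", device="cuda")

question = "What is instruction backtranslation?"
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
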
Differing from the original paper, this model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl).

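For orientation, SFT with TRL on data in the conversational format could look roughly like the sketch below; the dataset contents, output directory and all hyperparameters are placeholders, not the actual training configuration of this model.

```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Tiny stand-in for the curated instruction-response data (conversational format).
train_dataset = Dataset.from_list([
    {"messages": [
        {"role": "user", "content": "Summarise instruction backtranslation in one sentence."},
        {"role": "assistant", "content": "It generates instructions for existing web texts to build training pairs."},
    ]},
])

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-8B",  # base model named on this card
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="notHumpback-M1-Rw-F-8b"),  # illustrative output directory
)
trainer.train()
```
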
The dataset used to train this model is a combination of data sampled from the [oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) dataset and the synthetic dataset mentioned above. The latter has been created by applying self-augmentation, self-rewriting and self-curation to 502k entries from the English subset ("en") of the [c4](https://huggingface.co/datasets/allenai/c4) dataset.

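A sketch of how the two source datasets could be pulled in with the `datasets` library; the sampling strategy and preprocessing are not specified on this card, so they are only hinted at here.

```python
from datasets import load_dataset

# Gold instruction data.
oasst1 = load_dataset("OpenAssistant/oasst1", split="train")

# Web corpus for self-augmentation: stream the English ("en") subset of C4
# and take a slice; the 502k figure from this card is used as an example size.
c4_en = load_dataset("allenai/c4", "en", split="train", streaming=True)
web_docs = [example["text"] for _, example in zip(range(502_000), c4_en)]
```
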
### Framework versions

## Citations

Original paper:

```bibtex
@misc{li2023selfalignment,
      title={Self-Alignment with Instruction Backtranslation},
      author={Xian Li and Ping Yu and Chunting Zhou and Timo Schick and Luke Zettlemoyer and Omer Levy and Jason Weston and Mike Lewis},
      year={2023},
      eprint={2308.06259},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

Inspiration:

```bibtex
@misc{nguyen2024betteralignmentinstructionbackandforth,
      title={Better Alignment with Instruction Back-and-Forth Translation},
      author={Thao Nguyen and Jeffrey Li and Sewoong Oh and Ludwig Schmidt and Jason Weston and Luke Zettlemoyer and Xian Li},
      year={2024},
      eprint={2408.04614},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.04614},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
      title        = {{TRL: Transformer Reinforcement Learning}},
      author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
      year         = 2020,
      journal      = {GitHub repository},
      publisher    = {GitHub},
      howpublished = {\url{https://github.com/huggingface/trl}}
}
```