I followed [this script](https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py) to train this model.
Instead of the official [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) model, I used [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf), which does not require gated access.
The model was trained on the [lvwerra/stack-exchange-paired](https://huggingface.co/datasets/lvwerra/stack-exchange-paired) dataset.
`seq_length: 1024`
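To reproduce, the training script above can be launched roughly as follows. This is a minimal sketch: the exact flag names (`--model_name`, `--seq_length`, etc.) are assumptions based on the script's argument parser and may differ between TRL versions, so check `sft_llama2.py` itself before running.

```shell
# Fetch the TRL repository containing the SFT script
git clone https://github.com/huggingface/trl.git
cd trl/examples/research_projects/stack_llama_2/scripts

# Launch supervised fine-tuning with the non-gated base model.
# Flag names are assumptions -- verify against the ScriptArguments
# defined at the top of sft_llama2.py for your TRL version.
python sft_llama2.py \
    --model_name "NousResearch/Llama-2-7b-hf" \
    --seq_length 1024
```

The dataset defaults to `lvwerra/stack-exchange-paired` inside the script, so only the base model and sequence length are overridden here.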