---
tags:
- sft
- generated_from_trainer
- pytorch
- Mistral
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: fennec-7b-alpha
pipeline_tag: text-generation
---

<img src="https://huggingface.co/Menouar/fennec-7b-alpha/resolve/main/fennec.jpg" alt="Fennec Logo" width="800" height="400" style="margin-left: auto; margin-right: auto; display: block;"/>

# fennec-7b-alpha

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k), [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback), and [gsm8k](https://huggingface.co/datasets/gsm8k) datasets.
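Of the datasets above, gsm8k stores each reference solution's final numeric answer after a `####` marker. A minimal extraction helper in that spirit (the helper name and regex are illustrative, not part of this model's training or evaluation code) could look like:

```python
import re
from typing import Optional

def extract_gsm8k_answer(solution: str) -> Optional[str]:
    """Return the final numeric answer following the '####' marker,
    with thousands separators stripped, or None if no marker is found."""
    match = re.search(r"####\s*([-+]?[\d,.]+)", solution)
    if match is None:
        return None
    return match.group(1).replace(",", "")

print(extract_gsm8k_answer("Half of 10 is 10 / 2 = 5.\n#### 5"))  # -> 5
```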

## Model description

The model was trained with supervised fine-tuning (SFT) on nearly the same datasets as [Zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
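ultrachat_200k stores each conversation as a list of role/content turns, and SFT flattens every conversation into a single training string. A minimal sketch, assuming a Zephyr-style `<|role|>` chat template (this card does not state which template was actually used):

```python
# Illustrative only: flatten a conversation into one SFT training string.
# The Zephyr-style "<|role|>" template below is an assumption; this card
# does not document the chat template used for fennec-7b-alpha.

def format_conversation(messages, eos="</s>"):
    """Render [{'role': ..., 'content': ...}, ...] as a single string."""
    return "\n".join(
        f"<|{m['role']}|>\n{m['content']}{eos}" for m in messages
    )

chat = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 = 4."},
]
print(format_conversation(chat))
```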

## Training and evaluation data

Evaluation metrics logged during training can be found on [TensorBoard](https://huggingface.co/Menouar/fennec-7b-alpha/tensorboard).

Benchmark results can be found on the Hugging Face Open LLM Leaderboard [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Menouar/fennec-7b-alpha/).

## Training procedure

The complete training notebook can be found [here](https://colab.research.google.com/github/menouarazib/llm/blob/main/Fennec_7B.ipynb).

### Training hyperparameters

The following hyperparameters were used during training: