End of training

Files changed:
- README.md (+6 -1)
- adapter_model.bin (+1 -1)
README.md CHANGED

@@ -2,6 +2,7 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: mistralai/Mixtral-8x7B-Instruct-v0.1
 model-index:
@@ -110,7 +111,7 @@ fsdp_config:
 
 # mixtral-pb-20e
 
-This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on
+This model is a fine-tuned version of [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) on the None dataset.
 
 ## Model description
@@ -156,6 +157,10 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 20
 
+### Training results
+
+
+
 ### Framework versions
 
 - PEFT 0.7.0
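The `@@` hunk headers in the diff above encode where each change sits: `@@ -156,6 +157,10 @@` means six lines starting at old line 156 became ten lines starting at new line 157 (the four added "Training results" lines account for the net +4). As a minimal sketch, not tied to any particular diff library, such a header can be decoded with Python's stdlib `re` module (the function name `parse_hunk_header` is just an illustrative choice):

```python
import re

# Unified-diff hunk header: @@ -<old_start>[,<old_count>] +<new_start>[,<new_count>] @@ [section]
# A missing count defaults to 1 per the unified diff convention.
HUNK_RE = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def parse_hunk_header(line):
    """Return (old_start, old_count, new_start, new_count) for a hunk header line."""
    m = HUNK_RE.match(line)
    if m is None:
        raise ValueError(f"not a hunk header: {line!r}")
    old_start, old_count, new_start, new_count = m.groups()
    return (int(old_start), int(old_count or 1), int(new_start), int(new_count or 1))

# The third hunk in the README diff: 6 old lines become 10 new lines.
print(parse_hunk_header("@@ -156,6 +157,10 @@ The following hyperparameters were used during training:"))
# → (156, 6, 157, 10)
```

The trailing `fsdp_config:` / `The following hyperparameters...` text after the second `@@` is only context (the nearest enclosing section in the file) and carries no positional information.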
adapter_model.bin CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0f3475612f15f8b931ee2d4932de793062fcd7e5aa0fe627f557edd335556679
 size 27354957
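The `adapter_model.bin` file committed to the repo is not the weights themselves but a three-line Git LFS spec-v1 pointer, as shown above: a fixed version line, the blob's SHA-256 digest as `oid sha256:<hash>`, and its size in bytes. A minimal sketch of producing such a pointer for an arbitrary file with only the Python standard library (the helper name `lfs_pointer` is hypothetical, not part of any Git LFS API):

```python
import hashlib
from pathlib import Path

def lfs_pointer(path):
    """Build a Git LFS spec-v1 pointer for `path`: version, sha256 oid, byte size."""
    data = Path(path).read_bytes()
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )

# Demo with a small throwaway file (the real adapter blob is ~27 MB).
demo = Path("demo.bin")
demo.write_bytes(b"hello")
print(lfs_pointer(demo))
```

`git lfs` computes the same digest on checkout, so the `size 27354957` staying constant across this commit while the `oid` changes is exactly what an in-place weight update looks like at the pointer level.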