---
license: cc-by-nc-4.0
base_model: athirdpath/Harmonia-20B
tags:
- generated_from_trainer
model-index:
- name: lora
  results: []
---
This was mostly a test to see what the loss/eval curves looked like when training on top of Harmonia, and in that sense it was a sterling success: none of the "jitter" I experienced when training on top of Nethena-20B.
Quick testing shows a bit of derpiness, but a nice conversational flow. Overall, this will be helpful in developing additional 20B merges.


[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# lora


This model is a fine-tuned version of [athirdpath/Harmonia-20B](https://huggingface.co/athirdpath/Harmonia-20B) on the HF [No Robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4881
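
For a rough sense of scale: assuming this is the usual mean token-level cross-entropy in nats (the card doesn't say otherwise), it corresponds to a perplexity of about 4.43:

```python
import math

# Assumption: the reported eval loss is mean cross-entropy in nats,
# so perplexity is simply its exponential.
eval_loss = 1.4881
print(f"perplexity ~= {math.exp(eval_loss):.2f}")  # -> 4.43
```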


## Model description


This is a LoRA adapter trained on top of Harmonia-20B, produced mainly to check training stability on that base (see above) and as a stepping stone toward further 20B merges.
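
A minimal sketch of how a LoRA adapter like this one is typically applied on top of the base model with `peft`; the adapter path below is a placeholder, since this card doesn't state where the adapter weights live:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "athirdpath/Harmonia-20B"
adapter_path = "path/to/this-lora"  # placeholder: substitute the real adapter repo or local dir

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, adapter_path)

prompt = "Write a short note about llamas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```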


## Intended uses & limitations


Primarily a research artifact for developing additional 20B merges rather than a polished release: quick testing shows a bit of derpiness alongside a nice conversational flow. Released under cc-by-nc-4.0, so non-commercial use only.


## Training and evaluation data


Trained and evaluated on the No Robots dataset linked above; preprocessing and split details are not documented here.
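
If you want to reproduce or extend the run, the dataset can be pulled down with `datasets` (which split and preprocessing were used here is an assumption left open, so treat this as a starting point for inspection):

```python
from datasets import load_dataset

# Load the dataset and print its splits and columns for inspection.
ds = load_dataset("HuggingFaceH4/no_robots")
print(ds)
```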


## Training procedure


### Training hyperparameters


The following hyperparameters were used during training:
- learning_rate: 3.5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 9
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
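
Per the badge above, the run was built with Axolotl, so the real configuration lived in an Axolotl YAML file; as a rough equivalent for orientation (not the actual config), here is how these values map onto `transformers.TrainingArguments`:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lora",
    learning_rate=3.5e-5,
    per_device_train_batch_size=3,  # train_batch_size: 3
    per_device_eval_batch_size=3,   # eval_batch_size: 3
    gradient_accumulation_steps=3,  # effective train batch size: 3 * 3 = 9
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3,
)
```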


### Training results


| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5598        | 0.55  | 50   | 1.5816          |
| 1.5384        | 1.08  | 100  | 1.5146          |
| 1.5362        | 1.64  | 150  | 1.4972          |
| 1.4234        | 2.17  | 200  | 1.4902          |
| 1.4678        | 2.72  | 250  | 1.4881          |


### Framework versions


- Transformers 4.35.2
- Pytorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0