---
tags:
- generated_from_trainer
datasets:
- odinsynth_sequence_dataset
metrics:
- accuracy
model-index:
- name: odinsynth_encoder_decoder_native_hf_test_2
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: odinsynth_sequence_dataset
      type: odinsynth_sequence_dataset
      config: synthetic_surface
      split: validation
      args: synthetic_surface
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.934322390845116
---
# odinsynth_encoder_decoder_native_hf_test_2

This model is an encoder-decoder fine-tuned on the odinsynth_sequence_dataset dataset; the base checkpoint was not recorded in the auto-generated card.
It achieves the following results on the evaluation set:
- Loss: 0.0771
- Accuracy: 0.9343

## Model description

An encoder-decoder model fine-tuned for sequence-to-sequence language modeling (text2text-generation) on the synthetic_surface configuration of the odinsynth_sequence_dataset. No further details about the architecture or base checkpoint were recorded by the Trainer.

## Intended uses & limitations

The model is intended for text2text-generation on inputs in the odinsynth_sequence_dataset format; no other intended uses or limitations were recorded.
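
If the checkpoint loads as a standard Hugging Face seq2seq model, inference would look roughly like the sketch below. This is a minimal, unverified example: the repo id is taken from the card's model name (it may need a namespace prefix), and the input string is a placeholder.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "odinsynth_encoder_decoder_native_hf_test_2"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input; real inputs should follow the odinsynth_sequence_dataset format.
inputs = tokenizer("your input sequence here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```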

## Training and evaluation data

Training used the synthetic_surface configuration of the odinsynth_sequence_dataset; the results above were computed on its validation split. No further details about the data were recorded.
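
A sketch of loading the split named in the metadata, assuming the dataset is reachable under this name (it may be local or private rather than on the Hub):

```python
from datasets import load_dataset

# Config and split names come from the model-index metadata above.
dataset = load_dataset("odinsynth_sequence_dataset", "synthetic_surface")
print(dataset["validation"][0])
```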

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 200
- total_train_batch_size: 600
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20.0
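
Expressed as `Seq2SeqTrainingArguments`, the setup would look roughly as follows. This is a hedged reconstruction: the output directory is a placeholder, and the steps-based evaluation schedule is inferred from the 60-step spacing in the results table, not from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="odinsynth_encoder_decoder_native_hf_test_2",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=3,
    seed=42,
    gradient_accumulation_steps=200,  # effective train batch size: 3 * 200 = 600
    lr_scheduler_type="linear",
    num_train_epochs=20.0,
    evaluation_strategy="steps",  # inferred: the table logs metrics every 60 steps
    eval_steps=60,
)
```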

### Training results

| | Training Loss | Epoch | Step | Validation Loss | Accuracy | |
| |:-------------:|:-----:|:----:|:---------------:|:--------:| |
| | 0.1612 | 0.67 | 60 | 0.1145 | 0.9376 | |
| | 0.0666 | 1.34 | 120 | 0.0628 | 0.9356 | |
| | 0.0599 | 2.01 | 180 | 0.0611 | 0.9355 | |
| | 0.0563 | 2.68 | 240 | 0.0631 | 0.9352 | |
| | 0.0512 | 3.35 | 300 | 0.0630 | 0.9347 | |
| | 0.0472 | 4.02 | 360 | 0.0638 | 0.9338 | |
| | 0.0438 | 4.69 | 420 | 0.0655 | 0.9339 | |
| | 0.0405 | 5.36 | 480 | 0.0660 | 0.9345 | |
| | 0.0378 | 6.03 | 540 | 0.0666 | 0.9342 | |
| | 0.0344 | 6.69 | 600 | 0.0669 | 0.9343 | |
| | 0.0323 | 7.36 | 660 | 0.0678 | 0.9344 | |
| | 0.0307 | 8.03 | 720 | 0.0694 | 0.9343 | |
| | 0.0294 | 8.7 | 780 | 0.0706 | 0.9345 | |
| | 0.0286 | 9.37 | 840 | 0.0725 | 0.9342 | |
| | 0.0275 | 10.04 | 900 | 0.0727 | 0.9343 | |
| | 0.0282 | 10.71 | 960 | 0.0732 | 0.9342 | |
| | 0.0264 | 11.38 | 1020 | 0.0735 | 0.9343 | |
| | 0.026 | 12.05 | 1080 | 0.0750 | 0.9342 | |
| | 0.0254 | 12.72 | 1140 | 0.0753 | 0.9343 | |
| | 0.0244 | 13.39 | 1200 | 0.0746 | 0.9344 | |
| | 0.0242 | 14.06 | 1260 | 0.0752 | 0.9343 | |
| | 0.024 | 14.73 | 1320 | 0.0758 | 0.9342 | |
| | 0.0239 | 15.4 | 1380 | 0.0764 | 0.9343 | |
| | 0.0234 | 16.07 | 1440 | 0.0763 | 0.9343 | |
| | 0.0231 | 16.74 | 1500 | 0.0764 | 0.9343 | |
| | 0.0226 | 17.41 | 1560 | 0.0770 | 0.9343 | |
| | 0.023 | 18.08 | 1620 | 0.0770 | 0.9343 | |
| | 0.0227 | 18.74 | 1680 | 0.0771 | 0.9343 | |
| | 0.0221 | 19.41 | 1740 | 0.0771 | 0.9343 | |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.11.0