---
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: out_2
  results: []
---

# out_2

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6774
- F1: 0.7444
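
The card does not document the task or dataset, so the following is only a minimal, hedged loading sketch: the checkpoint path `out_2` and the sequence-classification head are assumptions inferred from the F1 metric, not facts recorded above.

```python
# Minimal inference sketch. The task behind this checkpoint is undocumented;
# a sequence-classification head and the local path "out_2" are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "out_2"  # hypothetical path to this fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
model.eval()

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label[pred])
```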

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch below):
- learning_rate: 6e-06
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 48 (3 per-device × 16 accumulation steps)
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
- mixed_precision_training: Native AMP
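
These values map directly onto `transformers` `TrainingArguments`; a hedged reconstruction is sketched below. The dataset, task head, and `compute_metrics` are not recorded in this card, so only the argument mapping is shown, and `fp16=True` is assumed to correspond to "Native AMP".

```python
# Hedged reconstruction of the hyperparameters above via TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out_2",
    learning_rate=6e-6,
    per_device_train_batch_size=3,   # train_batch_size: 3
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=16,  # total train batch size: 3 * 16 = 48
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=5.0,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # assumed equivalent of "Native AMP"
)
```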

### Training results

| Training Loss | Epoch | Step  | Accuracy | Validation Loss |
|:-------------:|:-----:|:-----:|:--------:|:---------------:|
| 0.6448        | 0.21  | 500   | 0.6347   | 0.6498          |
| 0.6401        | 0.41  | 1000  | 0.6442   | 0.6312          |
| 0.6557        | 0.62  | 1500  | 0.6582   | 0.6314          |
| 0.5819        | 0.83  | 2000  | 0.6588   | 0.6320          |
| 0.6086        | 1.04  | 2500  | 0.6563   | 0.6343          |
| 0.6011        | 1.24  | 3000  | 0.6557   | 0.6165          |
| 0.5616        | 1.45  | 3500  | 0.6461   | 0.6376          |
| 0.5885        | 1.66  | 4000  | 0.6468   | 0.6304          |
| 0.6198        | 1.87  | 4500  | 0.6423   | 0.6448          |
| 0.5838        | 2.07  | 5000  | 0.6665   | 0.6320          |
| 0.5564        | 2.28  | 5500  | 0.6684   | 0.6428          |
| 0.5726        | 2.49  | 6000  | 0.6703   | 0.6401          |
| 0.5491        | 2.70  | 6500  | 0.6684   | 0.6455          |
| 0.5303        | 2.90  | 7000  | 0.6703   | 0.6339          |
| 0.4970        | 3.11  | 7500  | 0.6607   | 0.6541          |
| 0.5041        | 3.32  | 8000  | 0.6760   | 0.6653          |
| 0.4978        | 3.53  | 8500  | 0.6696   | 0.6627          |
| 0.5272        | 3.73  | 9000  | 0.6677   | 0.6684          |
| 0.5487        | 3.94  | 9500  | 0.6760   | 0.6593          |
| 0.4998        | 4.15  | 10000 | 0.6747   | 0.6738          |
| 0.4626        | 4.36  | 10500 | 0.6753   | 0.6781          |
| 0.5202        | 4.56  | 11000 | 0.6722   | 0.6763          |
| 0.4623        | 4.77  | 11500 | 0.6728   | 0.6778          |
| 0.4383        | 4.98  | 12000 | 0.6741   | 0.6775          |
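
The table tracks accuracy at each evaluation step while the headline metric above is F1; the actual `compute_metrics` function is not recorded in this card. A plausible sketch that would produce both numbers, with the F1 averaging scheme an explicit assumption, is:

```python
# Hedged sketch of a compute_metrics callback producing accuracy and F1.
# The real callback and the F1 averaging scheme are undocumented; "macro"
# averaging is an assumption.
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")
f1_metric = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    acc = accuracy_metric.compute(predictions=predictions, references=labels)
    f1 = f1_metric.compute(predictions=predictions, references=labels, average="macro")
    return {"accuracy": acc["accuracy"], "f1": f1["f1"]}
```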

### Framework versions

- Transformers 4.35.0.dev0
- PyTorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.14.1