---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_sm_cv_defined_summarized_4
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bert_sm_cv_defined_summarized_4

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8001
- Accuracy: 0.801
- Precision: 0.4677
- Recall: 0.1487
- F1: 0.2257
- D-index: 1.4847

## Model description

More information needed

## Intended uses & limitations

More information needed
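
As a rough illustration, the checkpoint can presumably be loaded for sequence classification as sketched below. The repo id placeholder and the assumption of a binary classification head are not documented in this card; adjust to the actual location of the saved model.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Assumption: the checkpoint is available on the Hub (or locally) under this name;
# replace "<your-username>" with the actual owner or use a local path.
model_id = "<your-username>/bert_sm_cv_defined_summarized_4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Example input text to classify."))
```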
| | |
| | ## Training and evaluation data |
| | |
| | More information needed |
| | |
| | ## Training procedure |
| | |
| | ### Training hyperparameters |
| | |
| | The following hyperparameters were used during training: |
| | - learning_rate: 5e-05 |
| | - train_batch_size: 16 |
| | - eval_batch_size: 16 |
| | - seed: 42 |
| | - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
| | - lr_scheduler_type: linear |
| | - lr_scheduler_warmup_steps: 8000 |
| | - num_epochs: 20 |
| | - mixed_precision_training: Native AMP |
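
As a non-authoritative illustration, these settings roughly correspond to the following `transformers.TrainingArguments` configuration; the output directory, evaluation strategy, and fp16 hardware support are assumptions not recorded in the card.

```python
from transformers import TrainingArguments

# Sketch only: reconstructs the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="bert_sm_cv_defined_summarized_4",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=8000,
    num_train_epochs=20,
    fp16=True,                      # Native AMP mixed-precision training
    evaluation_strategy="epoch",    # assumed; the table reports per-epoch validation
)
```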

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 250 | 0.4931 | 0.805 | 0.5 | 0.0308 | 0.0580 | 1.4481 |
| 0.5724 | 2.0 | 500 | 0.4850 | 0.806 | 0.5263 | 0.0513 | 0.0935 | 1.4569 |
| 0.5724 | 3.0 | 750 | 0.4842 | 0.811 | 0.6 | 0.0923 | 0.16 | 1.4785 |
| 0.4468 | 4.0 | 1000 | 0.4954 | 0.81 | 0.5806 | 0.0923 | 0.1593 | 1.4771 |
| 0.4468 | 5.0 | 1250 | 0.5307 | 0.81 | 0.5862 | 0.0872 | 0.1518 | 1.4753 |
| 0.381 | 6.0 | 1500 | 0.5312 | 0.809 | 0.5455 | 0.1231 | 0.2008 | 1.4866 |
| 0.381 | 7.0 | 1750 | 0.5354 | 0.807 | 0.5161 | 0.1641 | 0.2490 | 1.4983 |
| 0.283 | 8.0 | 2000 | 0.7003 | 0.811 | 0.6364 | 0.0718 | 0.1290 | 1.4712 |
| 0.283 | 9.0 | 2250 | 0.7079 | 0.798 | 0.4568 | 0.1897 | 0.2681 | 1.4949 |
| 0.1621 | 10.0 | 2500 | 0.9032 | 0.8 | 0.4603 | 0.1487 | 0.2248 | 1.4833 |
| 0.1621 | 11.0 | 2750 | 1.0875 | 0.797 | 0.4474 | 0.1744 | 0.2509 | 1.4881 |
| 0.0678 | 12.0 | 3000 | 1.2256 | 0.769 | 0.3861 | 0.3128 | 0.3456 | 1.4975 |
| 0.0678 | 13.0 | 3250 | 1.6378 | 0.793 | 0.4 | 0.1231 | 0.1882 | 1.4645 |
| 0.039 | 14.0 | 3500 | 1.7475 | 0.767 | 0.2841 | 0.1282 | 0.1767 | 1.4301 |
| 0.039 | 15.0 | 3750 | 1.8575 | 0.804 | 0.4848 | 0.0821 | 0.1404 | 1.4652 |
| 0.0295 | 16.0 | 4000 | 1.8151 | 0.775 | 0.3370 | 0.1590 | 0.2160 | 1.4522 |
| 0.0295 | 17.0 | 4250 | 1.8788 | 0.795 | 0.4219 | 0.1385 | 0.2085 | 1.4728 |
| 0.0416 | 18.0 | 4500 | 1.8193 | 0.765 | 0.3462 | 0.2308 | 0.2769 | 1.4636 |
| 0.0416 | 19.0 | 4750 | 1.6942 | 0.788 | 0.3896 | 0.1538 | 0.2206 | 1.4685 |
| 0.0322 | 20.0 | 5000 | 1.8001 | 0.801 | 0.4677 | 0.1487 | 0.2257 | 1.4847 |
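
The standard metrics above could be reproduced with a `Trainer` metrics callback along the following lines. This is a sketch only, assuming binary labels; the formula behind the reported D-index is not documented in this card and is therefore omitted.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Sketch of a Trainer-compatible metrics callback for binary classification."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="binary", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        # "d_index" is reported in the table, but its computation is not documented here.
    }
```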

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3