---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: test-bert
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# test-bert

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0267
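
The card does not record which task head the Trainer attached, so the snippet below is only a loading sketch: it assumes a masked-language-modeling head (a common choice when fine-tuning `bert-base-cased` on raw text) and a local checkpoint directory `./test-bert`. Both are assumptions, not facts recorded here.

```python
# Loading sketch. Assumptions (not recorded in this card):
# - the checkpoint lives in a local directory "./test-bert"
# - the model carries a masked-LM head; swap in the matching
#   AutoModelFor* class if the actual task head differs
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./test-bert")
model = AutoModelForMaskedLM.from_pretrained("./test-bert")

inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # (batch_size, sequence_length, vocab_size)
```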

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
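
For reference, these values map onto `transformers.TrainingArguments` roughly as below. This is a reconstruction, not the original training script: `output_dir` is a placeholder, and the evaluation/logging cadence of every 5 steps is inferred from the results table rather than recorded above.

```python
# Reconstruction of the configuration from the values above; not the
# original script. output_dir is a placeholder, and eval_steps /
# logging_steps are inferred from the 5-step cadence of the results table.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test-bert",       # placeholder
    learning_rate=3e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,               # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    evaluation_strategy="steps",  # inferred: eval every 5 steps
    eval_steps=5,
    logging_steps=5,
)
```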

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.8219        | 0.09  | 5    | 5.9435          |
| 4.9256        | 0.18  | 10   | 6.1109          |
| 4.3213        | 0.27  | 15   | 5.7204          |
| 3.5947        | 0.36  | 20   | 5.7525          |
| 3.0974        | 0.45  | 25   | 5.6447          |
| 2.7481        | 0.55  | 30   | 5.2776          |
| 2.207         | 0.64  | 35   | 5.3963          |
| 1.8922        | 0.73  | 40   | 5.4622          |
| 1.7034        | 0.82  | 45   | 5.3710          |
| 1.4313        | 0.91  | 50   | 5.3449          |
| 1.0748        | 1.0   | 55   | 5.3580          |
| 1.0215        | 1.09  | 60   | 5.4713          |
| 0.7634        | 1.18  | 65   | 5.5980          |
| 0.7535        | 1.27  | 70   | 5.6049          |
| 0.6063        | 1.36  | 75   | 5.5830          |
| 0.4824        | 1.45  | 80   | 5.6753          |
| 0.48          | 1.55  | 85   | 5.7216          |
| 0.4884        | 1.64  | 90   | 5.7817          |
| 0.5813        | 1.73  | 95   | 5.9427          |
| 0.4287        | 1.82  | 100  | 6.0148          |
| 0.4061        | 1.91  | 105  | 5.8752          |
| 0.55          | 2.0   | 110  | 5.8723          |
| 0.4631        | 2.09  | 115  | 5.8288          |
| 0.2987        | 2.18  | 120  | 5.8808          |
| 0.3359        | 2.27  | 125  | 5.8884          |
| 0.3002        | 2.36  | 130  | 5.9442          |
| 0.299         | 2.45  | 135  | 5.9331          |
| 0.3393        | 2.55  | 140  | 5.9454          |
| 0.2656        | 2.64  | 145  | 5.9894          |
| 0.3582        | 2.73  | 150  | 6.0142          |
| 0.2111        | 2.82  | 155  | 6.0329          |
| 0.2574        | 2.91  | 160  | 6.0304          |
| 0.2471        | 3.0   | 165  | 6.0267          |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
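
To confirm a local environment matches these versions, a quick check (a convenience sketch, not part of the generated card):

```python
# Print installed versions to compare against those recorded above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name} {module.__version__}")
```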