---
license: mit
base_model: prajjwal1/bert-tiny
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: INT03-PC
  results: []
---

# INT03-PC

This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set (the metric computation is sketched after the list):
- Loss: 0.0158
- Accuracy: 1.0
- F1: 1.0
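
The card does not record how these metrics were obtained. As a minimal, hypothetical sketch of the usual `Trainer` metric hook, accuracy and F1 can be computed with the `evaluate` library as below; the F1 averaging mode is an assumption, since it is not stated here.

```python
import numpy as np
import evaluate

# Hypothetical reconstruction of the metric function passed to the Trainer;
# the F1 averaging mode ("weighted") is an assumption, not taken from this card.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```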
|
|
## Model description

More information needed
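
No description was provided by the author. As a minimal usage sketch, assuming the checkpoint is a standard sequence-classification head on top of `prajjwal1/bert-tiny`, it can be loaded with the `transformers` auto classes; the repository id below is taken from the card title and may need the owning namespace prepended.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: "INT03-PC" may need to be the full hub path, e.g. "<namespace>/INT03-PC".
model_id = "INT03-PC"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example input text", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))  # falls back to the raw class index
```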
|
|
## Intended uses & limitations

More information needed
|
|
## Training and evaluation data

More information needed
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mapped onto `TrainingArguments` in the sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
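
As a hedged sketch only, the hyperparameters above map onto `transformers.TrainingArguments` (v4.35.2, per the framework versions listed below) roughly as follows. The Adam betas and epsilon match the `Trainer` defaults and need no explicit arguments; the 50-step evaluation cadence and 500-step loss logging are inferred from the results table, and the training data is not documented in this card, so the `Trainer` wiring is omitted.

```python
from transformers import TrainingArguments

# Sketch only: output_dir, eval cadence, and logging cadence are assumptions
# inferred from this card; the dataset and Trainer setup are not recorded here.
args = TrainingArguments(
    output_dir="INT03-PC",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    evaluation_strategy="steps",  # validation rows appear every 50 steps
    eval_steps=50,
    logging_steps=500,            # training loss is reported every 500 steps
)
```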
|
|
### Training results

| | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |
| |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| |
| | No log | 0.0 | 50 | 0.6785 | 0.74 | 0.7413 | |
| | No log | 0.01 | 100 | 0.6047 | 0.79 | 0.7903 | |
| | No log | 0.01 | 150 | 0.4424 | 0.88 | 0.8769 | |
| | No log | 0.02 | 200 | 0.3601 | 0.89 | 0.8868 | |
| | No log | 0.02 | 250 | 0.3436 | 0.89 | 0.8868 | |
| | No log | 0.03 | 300 | 0.3311 | 0.9 | 0.8975 | |
| | No log | 0.03 | 350 | 0.3145 | 0.89 | 0.8876 | |
| | No log | 0.04 | 400 | 0.3113 | 0.9 | 0.8982 | |
| | No log | 0.04 | 450 | 0.2994 | 0.9 | 0.8982 | |
| | 0.4886 | 0.05 | 500 | 0.2806 | 0.88 | 0.8785 | |
| | 0.4886 | 0.05 | 550 | 0.2179 | 0.92 | 0.9194 | |
| | 0.4886 | 0.06 | 600 | 0.2388 | 0.88 | 0.8803 | |
| | 0.4886 | 0.06 | 650 | 0.1716 | 0.94 | 0.9398 | |
| | 0.4886 | 0.07 | 700 | 0.1774 | 0.93 | 0.9302 | |
| | 0.4886 | 0.07 | 750 | 0.1456 | 0.95 | 0.9501 | |
| | 0.4886 | 0.08 | 800 | 0.1518 | 0.94 | 0.9402 | |
| | 0.4886 | 0.08 | 850 | 0.1564 | 0.93 | 0.9303 | |
| | 0.4886 | 0.09 | 900 | 0.1684 | 0.92 | 0.9204 | |
| | 0.4886 | 0.09 | 950 | 0.1372 | 0.96 | 0.9602 | |
| | 0.2446 | 0.1 | 1000 | 0.1368 | 0.94 | 0.9402 | |
| | 0.2446 | 0.1 | 1050 | 0.1502 | 0.95 | 0.9502 | |
| | 0.2446 | 0.11 | 1100 | 0.1385 | 0.95 | 0.9502 | |
| | 0.2446 | 0.11 | 1150 | 0.1297 | 0.96 | 0.9602 | |
| | 0.2446 | 0.12 | 1200 | 0.1917 | 0.95 | 0.9502 | |
| | 0.2446 | 0.12 | 1250 | 0.1042 | 0.97 | 0.9700 | |
| | 0.2446 | 0.13 | 1300 | 0.1502 | 0.96 | 0.9602 | |
| | 0.2446 | 0.13 | 1350 | 0.1436 | 0.96 | 0.9602 | |
| | 0.2446 | 0.14 | 1400 | 0.0896 | 0.98 | 0.9800 | |
| | 0.2446 | 0.14 | 1450 | 0.1045 | 0.96 | 0.9602 | |
| | 0.1824 | 0.15 | 1500 | 0.1269 | 0.96 | 0.9602 | |
| | 0.1824 | 0.15 | 1550 | 0.1449 | 0.96 | 0.9602 | |
| | 0.1824 | 0.16 | 1600 | 0.1311 | 0.96 | 0.9602 | |
| | 0.1824 | 0.16 | 1650 | 0.1380 | 0.96 | 0.9602 | |
| | 0.1824 | 0.17 | 1700 | 0.1466 | 0.96 | 0.9602 | |
| | 0.1824 | 0.17 | 1750 | 0.0861 | 0.98 | 0.9800 | |
| | 0.1824 | 0.18 | 1800 | 0.1323 | 0.96 | 0.9602 | |
| | 0.1824 | 0.18 | 1850 | 0.1375 | 0.96 | 0.9602 | |
| | 0.1824 | 0.19 | 1900 | 0.1719 | 0.96 | 0.9602 | |
| | 0.1824 | 0.19 | 1950 | 0.0837 | 0.98 | 0.9800 | |
| | 0.1252 | 0.2 | 2000 | 0.1661 | 0.96 | 0.9602 | |
| | 0.1252 | 0.2 | 2050 | 0.1129 | 0.96 | 0.9602 | |
| | 0.1252 | 0.21 | 2100 | 0.0603 | 0.98 | 0.9800 | |
| | 0.1252 | 0.21 | 2150 | 0.1231 | 0.96 | 0.9602 | |
| | 0.1252 | 0.22 | 2200 | 0.1363 | 0.96 | 0.9602 | |
| | 0.1252 | 0.22 | 2250 | 0.1330 | 0.96 | 0.9602 | |
| | 0.1252 | 0.23 | 2300 | 0.0795 | 0.96 | 0.9602 | |
| | 0.1252 | 0.23 | 2350 | 0.0856 | 0.96 | 0.9602 | |
|
|
|
|
### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0
|
|