# BP-S02andInt03andMM05
This model is a fine-tuned version of [Anwaarma/BP-S02andInt03](https://huggingface.co/Anwaarma/BP-S02andInt03) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5994
- Accuracy: 0.64
- F1: 0.6335
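The card does not state the task, but the accuracy/F1 metrics suggest a sequence-classification head. A minimal loading sketch, assuming the checkpoint is published on the Hub under the card's title (the repo id below is an assumption inferred from the base-model naming):

```python
from transformers import pipeline

# Repo id is assumed from the card title; swap in the actual Hub path if it differs.
classifier = pipeline(
    "text-classification",
    model="Anwaarma/BP-S02andInt03andMM05",
)
print(classifier("Example input text."))
```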
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
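A minimal sketch of how these hyperparameters map onto the Hugging Face `Trainer` API, assuming the standard `Trainer` was used (the Adam betas and epsilon above match its defaults); the `output_dir` name is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="BP-S02andInt03andMM05",  # assumed output directory name
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```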
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|---|---|---|---|---|---|
| No log | 0.0 | 50 | 0.7057 | 0.5 | 0.4909 |
| No log | 0.01 | 100 | 0.6936 | 0.52 | 0.5206 |
| No log | 0.01 | 150 | 0.6981 | 0.48 | 0.4673 |
| No log | 0.01 | 200 | 0.6958 | 0.51 | 0.5107 |
| No log | 0.02 | 250 | 0.6921 | 0.54 | 0.5378 |
| No log | 0.02 | 300 | 0.6981 | 0.45 | 0.4150 |
| No log | 0.02 | 350 | 0.6895 | 0.53 | 0.5226 |
| No log | 0.03 | 400 | 0.6876 | 0.54 | 0.4112 |
| No log | 0.03 | 450 | 0.6900 | 0.53 | 0.5284 |
| 0.7105 | 0.03 | 500 | 0.6903 | 0.52 | 0.52 |
| 0.7105 | 0.04 | 550 | 0.6892 | 0.52 | 0.52 |
| 0.7105 | 0.04 | 600 | 0.6920 | 0.5 | 0.4990 |
| 0.7105 | 0.04 | 650 | 0.6912 | 0.5 | 0.5 |
| 0.7105 | 0.05 | 700 | 0.6874 | 0.53 | 0.5250 |
| 0.7105 | 0.05 | 750 | 0.6893 | 0.5 | 0.4990 |
| 0.7105 | 0.05 | 800 | 0.6814 | 0.58 | 0.5441 |
| 0.7105 | 0.06 | 850 | 0.6827 | 0.52 | 0.5206 |
| 0.7105 | 0.06 | 900 | 0.6791 | 0.55 | 0.5429 |
| 0.7105 | 0.06 | 950 | 0.6855 | 0.53 | 0.5169 |
| 0.692 | 0.07 | 1000 | 0.6745 | 0.56 | 0.4930 |
| 0.692 | 0.07 | 1050 | 0.6668 | 0.55 | 0.5503 |
| 0.692 | 0.07 | 1100 | 0.6601 | 0.59 | 0.5673 |
| 0.692 | 0.08 | 1150 | 0.6514 | 0.59 | 0.5714 |
| 0.692 | 0.08 | 1200 | 0.6679 | 0.59 | 0.5813 |
| 0.692 | 0.09 | 1250 | 0.6513 | 0.62 | 0.6181 |
| 0.692 | 0.09 | 1300 | 0.6462 | 0.6 | 0.5947 |
| 0.692 | 0.09 | 1350 | 0.6492 | 0.63 | 0.6243 |
| 0.692 | 0.1 | 1400 | 0.6375 | 0.58 | 0.5664 |
| 0.692 | 0.1 | 1450 | 0.6302 | 0.62 | 0.6192 |
| 0.6793 | 0.1 | 1500 | 0.6233 | 0.57 | 0.5654 |
| 0.6793 | 0.11 | 1550 | 0.6308 | 0.6 | 0.5992 |
| 0.6793 | 0.11 | 1600 | 0.6336 | 0.62 | 0.6205 |
| 0.6793 | 0.11 | 1650 | 0.6205 | 0.68 | 0.68 |
| 0.6793 | 0.12 | 1700 | 0.6061 | 0.66 | 0.6605 |
| 0.6793 | 0.12 | 1750 | 0.6197 | 0.65 | 0.6488 |
| 0.6793 | 0.12 | 1800 | 0.6090 | 0.66 | 0.6593 |
| 0.6793 | 0.13 | 1850 | 0.5986 | 0.59 | 0.5886 |
| 0.6793 | 0.13 | 1900 | 0.5990 | 0.66 | 0.6593 |
| 0.6793 | 0.13 | 1950 | 0.5995 | 0.66 | 0.6600 |
| 0.6632 | 0.14 | 2000 | 0.6033 | 0.6 | 0.5966 |
| 0.6632 | 0.14 | 2050 | 0.6003 | 0.66 | 0.6604 |
| 0.6632 | 0.14 | 2100 | 0.6088 | 0.62 | 0.6206 |
| 0.6632 | 0.15 | 2150 | 0.6178 | 0.65 | 0.6463 |
| 0.6632 | 0.15 | 2200 | 0.6003 | 0.63 | 0.6303 |
| 0.6632 | 0.15 | 2250 | 0.6072 | 0.61 | 0.6103 |
| 0.6632 | 0.16 | 2300 | 0.6160 | 0.68 | 0.6742 |
| 0.6632 | 0.16 | 2350 | 0.6056 | 0.62 | 0.6192 |
| 0.6632 | 0.16 | 2400 | 0.6100 | 0.62 | 0.6192 |
| 0.6632 | 0.17 | 2450 | 0.6175 | 0.67 | 0.6630 |
| 0.6485 | 0.17 | 2500 | 0.6070 | 0.66 | 0.6584 |
| 0.6485 | 0.17 | 2550 | 0.6090 | 0.66 | 0.6556 |
| 0.6485 | 0.18 | 2600 | 0.6123 | 0.59 | 0.5906 |
| 0.6485 | 0.18 | 2650 | 0.6245 | 0.64 | 0.6393 |
| 0.6485 | 0.18 | 2700 | 0.6049 | 0.66 | 0.6593 |
| 0.6485 | 0.19 | 2750 | 0.5994 | 0.64 | 0.6335 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0