# bert-base-uncased_1_16619_token_headwise
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the bionlp2004 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2108
- Precision: 0.7761
- Recall: 0.8165
- F1: 0.7958
- Accuracy: 0.9460
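As a token-classification checkpoint, the model can be loaded with the standard `transformers` pipeline. A minimal inference sketch follows; the model identifier is a placeholder, so substitute the actual repository name or a local checkpoint directory.

```python
# Minimal inference sketch. The model ID is a placeholder for this fine-tune's
# actual repository name or local checkpoint path.
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_id = "path/to/bert-base-uncased_1_16619_token_headwise"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("The IL-2 gene is expressed in activated T cells."))
```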
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
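These hyperparameters map directly onto `transformers.TrainingArguments`. The sketch below shows that mapping only; dataset preparation, tokenization, and the label list are assumptions not documented in this card.

```python
# Hedged sketch of how the listed hyperparameters translate to TrainingArguments.
# Everything not listed above (output_dir, evaluation_strategy) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased_1_16619_token_headwise",  # assumed run name
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table reports one row per epoch
)
```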
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
| 0.2219 | 1.0 | 1039 | 0.1919 | 0.7012 | 0.7769 | 0.7371 | 0.9361 |
| 0.1974 | 2.0 | 2078 | 0.1849 | 0.7284 | 0.7661 | 0.7468 | 0.9400 |
| 0.1868 | 3.0 | 3117 | 0.1791 | 0.7150 | 0.7880 | 0.7498 | 0.9407 |
| 0.1815 | 4.0 | 4156 | 0.1716 | 0.7576 | 0.7943 | 0.7756 | 0.9442 |
| 0.1766 | 5.0 | 5195 | 0.1727 | 0.7305 | 0.7843 | 0.7564 | 0.9425 |
| 0.1685 | 6.0 | 6234 | 0.1712 | 0.7378 | 0.7965 | 0.7660 | 0.9438 |
| 0.1672 | 7.0 | 7273 | 0.1742 | 0.7504 | 0.8255 | 0.7861 | 0.9443 |
| 0.1645 | 8.0 | 8312 | 0.1634 | 0.7545 | 0.8070 | 0.7798 | 0.9458 |
| 0.1629 | 9.0 | 9351 | 0.1672 | 0.7651 | 0.7970 | 0.7807 | 0.9456 |
| 0.1552 | 10.0 | 10390 | 0.1611 | 0.7567 | 0.8230 | 0.7885 | 0.9463 |
| 0.1548 | 11.0 | 11429 | 0.1689 | 0.7386 | 0.8026 | 0.7693 | 0.9439 |
| 0.1466 | 12.0 | 12468 | 0.1640 | 0.7539 | 0.8255 | 0.7881 | 0.9455 |
| 0.1509 | 13.0 | 13507 | 0.1639 | 0.7642 | 0.8172 | 0.7898 | 0.9466 |
| 0.1476 | 14.0 | 14546 | 0.1620 | 0.7580 | 0.8104 | 0.7833 | 0.9459 |
| 0.1461 | 15.0 | 15585 | 0.1604 | 0.7575 | 0.8214 | 0.7881 | 0.9458 |
| 0.1423 | 16.0 | 16624 | 0.1713 | 0.7540 | 0.8214 | 0.7862 | 0.9448 |
| 0.1413 | 17.0 | 17663 | 0.1626 | 0.7793 | 0.8075 | 0.7931 | 0.9470 |
| 0.1354 | 18.0 | 18702 | 0.1663 | 0.7599 | 0.8122 | 0.7852 | 0.9450 |
| 0.1316 | 19.0 | 19741 | 0.1720 | 0.7642 | 0.8019 | 0.7826 | 0.9439 |
| 0.1313 | 20.0 | 20780 | 0.1660 | 0.7812 | 0.8055 | 0.7932 | 0.9462 |
| 0.129 | 21.0 | 21819 | 0.1710 | 0.7667 | 0.8226 | 0.7937 | 0.9463 |
| 0.1267 | 22.0 | 22858 | 0.1671 | 0.7606 | 0.8278 | 0.7928 | 0.9455 |
| 0.1237 | 23.0 | 23897 | 0.1664 | 0.7567 | 0.8251 | 0.7895 | 0.9448 |
| 0.1205 | 24.0 | 24936 | 0.1701 | 0.7701 | 0.8145 | 0.7917 | 0.9457 |
| 0.1189 | 25.0 | 25975 | 0.1710 | 0.7652 | 0.8215 | 0.7924 | 0.9457 |
| 0.1194 | 26.0 | 27014 | 0.1677 | 0.7715 | 0.8302 | 0.7998 | 0.9469 |
| 0.1165 | 27.0 | 28053 | 0.1782 | 0.7731 | 0.8151 | 0.7935 | 0.9455 |
| 0.114 | 28.0 | 29092 | 0.1777 | 0.7618 | 0.8106 | 0.7854 | 0.9450 |
| 0.111 | 29.0 | 30131 | 0.1686 | 0.7681 | 0.8131 | 0.7900 | 0.9458 |
| 0.1081 | 30.0 | 31170 | 0.1707 | 0.7705 | 0.8217 | 0.7953 | 0.9471 |
| 0.1044 | 31.0 | 32209 | 0.1677 | 0.7777 | 0.8302 | 0.8031 | 0.9469 |
| 0.1042 | 32.0 | 33248 | 0.1785 | 0.7670 | 0.8215 | 0.7933 | 0.9467 |
| 0.1015 | 33.0 | 34287 | 0.1789 | 0.7685 | 0.8237 | 0.7951 | 0.9460 |
| 0.0996 | 34.0 | 35326 | 0.1771 | 0.7759 | 0.8205 | 0.7975 | 0.9459 |
| 0.0957 | 35.0 | 36365 | 0.1829 | 0.7741 | 0.8286 | 0.8004 | 0.9463 |
| 0.0938 | 36.0 | 37404 | 0.1834 | 0.7775 | 0.8244 | 0.8003 | 0.9461 |
| 0.0891 | 37.0 | 38443 | 0.1820 | 0.7682 | 0.8322 | 0.7989 | 0.9460 |
| 0.0874 | 38.0 | 39482 | 0.1905 | 0.7838 | 0.7999 | 0.7918 | 0.9448 |
| 0.087 | 39.0 | 40521 | 0.1915 | 0.7773 | 0.8165 | 0.7964 | 0.9460 |
| 0.0849 | 40.0 | 41560 | 0.1864 | 0.7734 | 0.8120 | 0.7922 | 0.9463 |
| 0.0819 | 41.0 | 42599 | 0.2028 | 0.7829 | 0.8093 | 0.7959 | 0.9464 |
| 0.0788 | 42.0 | 43638 | 0.1891 | 0.7689 | 0.8201 | 0.7937 | 0.9466 |
| 0.0776 | 43.0 | 44677 | 0.1941 | 0.7734 | 0.8259 | 0.7987 | 0.9469 |
| 0.0757 | 44.0 | 45716 | 0.2037 | 0.7794 | 0.8032 | 0.7911 | 0.9449 |
| 0.074 | 45.0 | 46755 | 0.2026 | 0.7660 | 0.8194 | 0.7918 | 0.9462 |
| 0.0722 | 46.0 | 47794 | 0.2089 | 0.7731 | 0.8147 | 0.7933 | 0.9460 |
| 0.0703 | 47.0 | 48833 | 0.2083 | 0.7772 | 0.8122 | 0.7943 | 0.9461 |
| 0.0687 | 48.0 | 49872 | 0.2077 | 0.7781 | 0.8179 | 0.7975 | 0.9464 |
| 0.0666 | 49.0 | 50911 | 0.2089 | 0.7747 | 0.8178 | 0.7956 | 0.9465 |
| 0.0669 | 50.0 | 51950 | 0.2108 | 0.7761 | 0.8165 | 0.7958 | 0.9460 |
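The card does not state how its Precision, Recall, and F1 were computed; for token-classification fine-tunes of this kind they are conventionally entity-level scores from `seqeval`. A small sketch under that assumption, using BIO tags from the bionlp2004 label set:

```python
# Assumption: metrics follow the usual seqeval convention (entity-level P/R/F1,
# token-level accuracy). The toy tag sequences below are illustrative only.
from seqeval.metrics import precision_score, recall_score, f1_score, accuracy_score

y_true = [["B-protein", "I-protein", "O", "B-DNA"]]
y_pred = [["B-protein", "I-protein", "O", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision
print(recall_score(y_true, y_pred))     # entity-level recall
print(f1_score(y_true, y_pred))         # entity-level F1
print(accuracy_score(y_true, y_pred))   # token-level accuracy
```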
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3