# PubMedELECTRA-LitCovid-v1.3h
This model is a fine-tuned version of microsoft/BiomedNLP-BiomedELECTRA-base-uncased-abstract on an unspecified dataset (presumably LitCovid, given the model name). It achieves the following results on the evaluation set (a sketch of how these metrics can be computed follows the list):
- Loss: 0.7982
- Hamming loss: 0.0188
- F1 micro: 0.8423
- F1 macro: 0.3558
- F1 weighted: 0.8811
- F1 samples: 0.8761
- Precision micro: 0.7670
- Precision macro: 0.2896
- Precision weighted: 0.8411
- Precision samples: 0.8611
- Recall micro: 0.9341
- Recall macro: 0.7113
- Recall weighted: 0.9341
- Recall samples: 0.9436
- ROC AUC: 0.9590
- Accuracy: 0.6885
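For reference, the multi-label metrics above can be computed with scikit-learn roughly as in the sketch below. The `y_true`/`y_prob` arrays and the 0.5 decision threshold are illustrative assumptions, and `Accuracy` is taken to be exact-match (subset) accuracy, which is what scikit-learn's `accuracy_score` returns for multi-label inputs.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    f1_score,
    hamming_loss,
    precision_score,
    recall_score,
    roc_auc_score,
)

# Illustrative multi-label arrays: one row per abstract, one column per topic.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_prob = np.array([[0.9, 0.2, 0.8], [0.1, 0.7, 0.4], [0.6, 0.8, 0.3]])  # sigmoid outputs
y_pred = (y_prob >= 0.5).astype(int)  # 0.5 threshold assumed

print("Hamming loss:", hamming_loss(y_true, y_pred))
for avg in ("micro", "macro", "weighted", "samples"):
    print(f"F1 {avg}:", f1_score(y_true, y_pred, average=avg, zero_division=0))
    print(f"Precision {avg}:", precision_score(y_true, y_pred, average=avg, zero_division=0))
    print(f"Recall {avg}:", recall_score(y_true, y_pred, average=avg, zero_division=0))
print("ROC AUC (micro):", roc_auc_score(y_true, y_prob, average="micro"))
print("Accuracy (exact match):", accuracy_score(y_true, y_pred))
```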
## Model description
More information needed
## Intended uses & limitations
More information needed
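Although the card leaves this section blank, a minimal inference sketch for a multi-label topic classifier of this kind is given below; the Hub repo id, the example abstract, and the 0.5 sigmoid threshold are all assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id; replace with the actual Hub path of this checkpoint.
model_id = "your-username/PubMedELECTRA-LitCovid-v1.3h"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Effects of COVID-19 vaccination on transmission: a cohort study."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label setup: independent sigmoid per topic, 0.5 threshold assumed.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p >= 0.5]
print(predicted)
```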
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.001
- num_epochs: 5
- mixed_precision_training: Native AMP
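As a sketch, these settings map onto a Hugging Face `TrainingArguments` configuration roughly like the one below; the `output_dir` is a placeholder, and `fp16=True` is the standard way to enable Native AMP in Transformers 4.28.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters above.
training_args = TrainingArguments(
    output_dir="pubmedelectra-litcovid",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.001,
    num_train_epochs=5,
    fp16=True,  # Native AMP
)
```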
### Training results
| Training Loss | Epoch | Step | Validation Loss | Hamming loss | F1 micro | F1 macro | F1 weighted | F1 samples | Precision micro | Precision macro | Precision weighted | Precision samples | Recall micro | Recall macro | Recall weighted | Recall samples | ROC AUC | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.2822 | 1.0 | 2272 | 0.5864 | 0.0346 | 0.7379 | 0.2698 | 0.8161 | 0.8119 | 0.6213 | 0.2132 | 0.7586 | 0.7808 | 0.9084 | 0.6858 | 0.9084 | 0.9241 | 0.9385 | 0.5320 |
| 1.069 | 2.0 | 4544 | 0.5191 | 0.0253 | 0.7977 | 0.3164 | 0.8596 | 0.8543 | 0.6975 | 0.2555 | 0.8093 | 0.8307 | 0.9314 | 0.7467 | 0.9314 | 0.9418 | 0.9543 | 0.6313 |
| 0.8986 | 3.0 | 6816 | 0.6191 | 0.0205 | 0.8306 | 0.3402 | 0.8684 | 0.8621 | 0.7449 | 0.2728 | 0.8176 | 0.8364 | 0.9385 | 0.6966 | 0.9385 | 0.9474 | 0.9601 | 0.6446 |
| 0.6693 | 4.0 | 9088 | 0.7344 | 0.0195 | 0.8371 | 0.3568 | 0.8756 | 0.8694 | 0.7587 | 0.2871 | 0.8331 | 0.8513 | 0.9334 | 0.7201 | 0.9334 | 0.9430 | 0.9583 | 0.6697 |
| 0.3752 | 5.0 | 11360 | 0.7982 | 0.0188 | 0.8423 | 0.3558 | 0.8811 | 0.8761 | 0.7670 | 0.2896 | 0.8411 | 0.8611 | 0.9341 | 0.7113 | 0.9341 | 0.9436 | 0.9590 | 0.6885 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3