CTEBMSP_ANAT_DISO

This model is a fine-tuned version of PlanTL-GOB-ES/bsc-bio-ehr-es on an unspecified dataset (not documented in this card). It achieves the following results on the evaluation set (a usage sketch follows the metrics list):

  • Loss: 0.0909
  • Anat Precision: 0.7522
  • Anat Recall: 0.7147
  • Anat F1: 0.7330
  • Anat Number: 361
  • Diso Precision: 0.8915
  • Diso Recall: 0.8919
  • Diso F1: 0.8917
  • Diso Number: 2645
  • Overall Precision: 0.8755
  • Overall Recall: 0.8706
  • Overall F1: 0.8731
  • Overall Accuracy: 0.9873
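
The metric names above indicate token classification (named-entity recognition) over two entity types, anatomy (ANAT) and disorder (DISO), built on a Spanish biomedical base model. The snippet below is a minimal inference sketch with the transformers token-classification pipeline; the repository ID and the example sentence are placeholders, not part of this card.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder repo ID: replace with the actual Hub path of this model
# or a local checkpoint directory.
model_id = "your-username/CTEBMSP_ANAT_DISO"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-word pieces into whole entity spans with labels and scores.
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

# Illustrative Spanish clinical sentence (not from the training data).
print(ner("El paciente presenta una fractura en el fémur izquierdo."))
```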

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the TrainingArguments sketch after this list):

  • learning_rate: 8e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 8
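
As a reproducibility aid, the sketch below maps these values onto a Hugging Face TrainingArguments object. The output directory and the evaluation strategy are assumptions; the remaining values mirror the list above.

```python
from transformers import TrainingArguments

# Sketch only: output_dir and evaluation_strategy are assumptions;
# all other values come from the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="ctebmsp_anat_diso",   # placeholder
    learning_rate=8e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=8,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",      # assumption: the results table is per epoch
)
```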

Training results

| Training Loss | Epoch | Step | Validation Loss | Anat Precision | Anat Recall | Anat F1 | Anat Number | Diso Precision | Diso Recall | Diso F1 | Diso Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0592 | 1.0 | 2133 | 0.0506 | 0.6950 | 0.4986 | 0.5806 | 361 | 0.8635 | 0.8609 | 0.8622 | 2645 | 0.8484 | 0.8174 | 0.8326 | 0.9843 |
| 0.0323 | 2.0 | 4266 | 0.0583 | 0.7899 | 0.6039 | 0.6845 | 361 | 0.8780 | 0.8817 | 0.8798 | 2645 | 0.8697 | 0.8483 | 0.8589 | 0.9858 |
| 0.0201 | 3.0 | 6399 | 0.0580 | 0.6565 | 0.7147 | 0.6844 | 361 | 0.8598 | 0.8764 | 0.8680 | 2645 | 0.8339 | 0.8570 | 0.8453 | 0.9851 |
| 0.0121 | 4.0 | 8532 | 0.0758 | 0.7240 | 0.6759 | 0.6991 | 361 | 0.8976 | 0.8752 | 0.8863 | 2645 | 0.8776 | 0.8513 | 0.8642 | 0.9863 |
| 0.0078 | 5.0 | 10665 | 0.0814 | 0.7219 | 0.7119 | 0.7169 | 361 | 0.8776 | 0.8975 | 0.8875 | 2645 | 0.8595 | 0.8752 | 0.8673 | 0.9862 |
| 0.0031 | 6.0 | 12798 | 0.0974 | 0.7599 | 0.6399 | 0.6947 | 361 | 0.8895 | 0.8915 | 0.8905 | 2645 | 0.8761 | 0.8613 | 0.8686 | 0.9867 |
| 0.0020 | 7.0 | 14931 | 0.0980 | 0.7143 | 0.6787 | 0.6960 | 361 | 0.8813 | 0.8957 | 0.8884 | 2645 | 0.8624 | 0.8696 | 0.8660 | 0.9860 |
| 0.0005 | 8.0 | 17064 | 0.0909 | 0.7522 | 0.7147 | 0.7330 | 361 | 0.8915 | 0.8919 | 0.8917 | 2645 | 0.8755 | 0.8706 | 0.8731 | 0.9873 |
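
The per-type and overall scores above follow the conventions of the seqeval metric for sequence labeling. Below is a minimal sketch of how numbers in this layout are typically computed; it assumes the `evaluate` and `seqeval` packages are installed and uses illustrative BIO tags, not the actual evaluation data.

```python
import evaluate  # assumption: pip install evaluate seqeval

# seqeval reports precision/recall/F1 per entity type plus overall scores
# from BIO-tagged sequences. The tags below are illustrative only.
seqeval = evaluate.load("seqeval")

predictions = [["B-DISO", "I-DISO", "O", "B-ANAT", "O"]]
references  = [["B-DISO", "I-DISO", "O", "B-ANAT", "O"]]

results = seqeval.compute(predictions=predictions, references=references)

# Per-type entries such as results["DISO"]["f1"] and results["ANAT"]["number"],
# plus results["overall_precision"], ["overall_recall"], ["overall_f1"] and
# ["overall_accuracy"], mirror the columns in the table above.
print(results)
```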

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.13.0+cu116
  • Datasets 2.8.0
  • Tokenizers 0.13.2