---
license: apache-2.0
base_model: PlanTL-GOB-ES/bsc-bio-ehr-es
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: vih_explainability
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# vih_explainability

This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3980
- ROC AUC: 0.8920
- AP Score: 0.8575
- Precision: 0.8926
- Recall: 0.8920
- F1: 0.8919
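
The card does not include a usage snippet. Given the classification-style metrics above, a minimal inference sketch might look as follows; note that the sequence-classification head and the model ID `vih_explainability` are assumptions inferred from the card, not confirmed by it.

```python
# Minimal inference sketch. Assumptions (not stated in this card): the
# checkpoint carries a binary sequence-classification head, and
# "vih_explainability" resolves to this model on the Hub or on disk.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "vih_explainability"  # hypothetical path; replace with the real repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Spanish clinical text, matching the domain of the base model
# (bsc-bio-ehr-es is pretrained on Spanish EHR/biomedical corpora).
text = "Paciente en seguimiento, buena adherencia al tratamiento."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze(0)
print({model.config.id2label[i]: round(float(p), 4) for i, p in enumerate(probs)})
```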

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 9e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
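
These settings map onto `transformers.TrainingArguments` roughly as sketched below. The Adam betas and epsilon listed above are the library defaults, so they are not set explicitly; the 50-step evaluation cadence is inferred from the results table rather than stated in the card.

```python
# Sketch of the reported hyperparameters as TrainingArguments
# (Transformers 4.41.0, per the framework versions below).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vih_explainability",
    learning_rate=9e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",  # linear decay, as reported
    num_train_epochs=10,
    eval_strategy="steps",       # inferred: the table logs an eval every 50 steps
    eval_steps=50,
    logging_steps=50,
    # optimizer: Adam with betas=(0.9, 0.999) and eps=1e-8 -- the defaults.
)
```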

### Training results

| Training Loss | Epoch  | Step | Validation Loss | ROC AUC | AP Score | Precision | Recall | F1     |
|:-------------:|:------:|:----:|:---------------:|:-------:|:--------:|:---------:|:------:|:------:|
| 0.62          | 0.5376 | 50   | 0.5331          | 0.7789  | 0.7389   | 0.7905    | 0.7789 | 0.7763 |
| 0.5106        | 1.0753 | 100  | 0.4343          | 0.7899  | 0.7614   | 0.8118    | 0.7899 | 0.7856 |
| 0.3762        | 1.6129 | 150  | 0.3364          | 0.8594  | 0.8075   | 0.8596    | 0.8594 | 0.8594 |
| 0.2878        | 2.1505 | 200  | 0.3582          | 0.8597  | 0.8260   | 0.8636    | 0.8597 | 0.8591 |
| 0.2556        | 2.6882 | 250  | 0.3121          | 0.8706  | 0.8440   | 0.8764    | 0.8706 | 0.8698 |
| 0.165         | 3.2258 | 300  | 0.3746          | 0.8652  | 0.8349   | 0.8699    | 0.8652 | 0.8645 |
| 0.2125        | 3.7634 | 350  | 0.3842          | 0.8815  | 0.8629   | 0.8898    | 0.8815 | 0.8805 |
| 0.1923        | 4.3011 | 400  | 0.3178          | 0.9080  | 0.8662   | 0.9086    | 0.9080 | 0.9081 |
| 0.1333        | 4.8387 | 450  | 0.3397          | 0.8704  | 0.8297   | 0.8709    | 0.8704 | 0.8702 |
| 0.137         | 5.3763 | 500  | 0.3369          | 0.9028  | 0.8718   | 0.9034    | 0.9028 | 0.9027 |
| 0.1103        | 5.9140 | 550  | 0.3493          | 0.9025  | 0.8545   | 0.9045    | 0.9025 | 0.9026 |
| 0.0896        | 6.4516 | 600  | 0.4059          | 0.8813  | 0.8507   | 0.8838    | 0.8813 | 0.8809 |
| 0.0573        | 6.9892 | 650  | 0.3956          | 0.8813  | 0.8470   | 0.8826    | 0.8813 | 0.8810 |
| 0.0716        | 7.5269 | 700  | 0.5566          | 0.8815  | 0.8674   | 0.8926    | 0.8815 | 0.8803 |
| 0.0893        | 8.0645 | 750  | 0.3980          | 0.8920  | 0.8575   | 0.8926    | 0.8920 | 0.8919 |
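
The metric computation itself is not shown in the card. A plausible `compute_metrics` producing these columns is sketched below; binary labels, weighted averaging, and scores taken from the positive-class probability are all assumptions, since the original aggregation scheme is unknown.

```python
# Hypothetical compute_metrics reproducing the reported columns,
# assuming a binary classification task with logits of shape (N, 2).
import numpy as np
from sklearn.metrics import (
    average_precision_score,
    precision_recall_fscore_support,
    roc_auc_score,
)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Softmax over the two classes; keep the positive-class probability.
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)
    preds = probs.argmax(axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "roc_auc": roc_auc_score(labels, probs[:, 1]),
        "ap_score": average_precision_score(labels, probs[:, 1]),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```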


### Framework versions

- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1