jay0911 committed
Commit a16c796 · verified · 1 Parent(s): d64903c

Model save

Files changed (2):
  1. README.md +21 -18
  2. model.safetensors +1 -1
README.md CHANGED
@@ -1,6 +1,6 @@
 ---
 library_name: transformers
-base_model: dmis-lab/biobert-base-cased-v1.2
+base_model: jay0911/fine-tuned-aemodel
 tags:
 - generated_from_trainer
 metrics:
@@ -17,14 +17,14 @@ should probably proofread and complete it, then remove this comment. -->
 
 # ade_biobert_output
 
-This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
+This model is a fine-tuned version of [jay0911/fine-tuned-aemodel](https://huggingface.co/jay0911/fine-tuned-aemodel) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4138
-- Precision: 0.8945
-- Recall: 0.8822
-- F1: 0.8853
-- Recall Positive: 0.8887
-- Recall Negative: 0.8798
+- Loss: 0.3619
+- Precision: 0.9353
+- Recall: 0.9358
+- F1: 0.9355
+- Recall Positive: 0.8686
+- Recall Negative: 0.9613
 
 ## Model description
 
@@ -44,10 +44,10 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 4
-- eval_batch_size: 4
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
-- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 10
@@ -56,17 +56,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Recall Positive | Recall Negative |
 |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:---------------:|:---------------:|
-| 0.4983 | 0.1063 | 500 | 0.5789 | 0.8609 | 0.7602 | 0.7700 | 0.9858 | 0.6640 |
-| 0.4389 | 0.2126 | 1000 | 0.6829 | 0.8700 | 0.8639 | 0.8547 | 0.6031 | 0.9751 |
-| 0.5353 | 0.3189 | 1500 | 0.4000 | 0.8974 | 0.8903 | 0.8922 | 0.8862 | 0.8921 |
-| 0.6367 | 0.4253 | 2000 | 0.6262 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
-| 0.623 | 0.5316 | 2500 | 0.6189 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
-| 0.6653 | 0.6379 | 3000 | 0.6122 | 0.4915 | 0.7011 | 0.5779 | 0.0 | 1.0 |
+| 0.1921 | 0.2126 | 500 | 0.2565 | 0.9347 | 0.9332 | 0.9337 | 0.9147 | 0.9412 |
+| 0.1893 | 0.4252 | 1000 | 0.2461 | 0.9409 | 0.9392 | 0.9397 | 0.9289 | 0.9436 |
+| 0.2207 | 0.6378 | 1500 | 0.2583 | 0.9421 | 0.9418 | 0.9419 | 0.9104 | 0.9551 |
+| 0.1706 | 0.8503 | 2000 | 0.3926 | 0.9216 | 0.9205 | 0.9183 | 0.7866 | 0.9776 |
+| 0.1219 | 1.0629 | 2500 | 0.3413 | 0.9373 | 0.9354 | 0.9359 | 0.9246 | 0.9400 |
+| 0.1097 | 1.2755 | 3000 | 0.3073 | 0.9453 | 0.9456 | 0.9453 | 0.8919 | 0.9685 |
+| 0.1645 | 1.4881 | 3500 | 0.2700 | 0.9433 | 0.9430 | 0.9431 | 0.9118 | 0.9563 |
+| 0.2348 | 1.7007 | 4000 | 0.2449 | 0.9452 | 0.9456 | 0.9452 | 0.8876 | 0.9703 |
+| 0.2718 | 1.9133 | 4500 | 0.2304 | 0.9425 | 0.9426 | 0.9425 | 0.8990 | 0.9612 |
 
 
 ### Framework versions
 
 - Transformers 4.55.0
-- Pytorch 2.8.0
+- Pytorch 2.6.0+cu124
 - Datasets 4.0.0
 - Tokenizers 0.21.4
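The updated hyperparameters pair `lr_scheduler_type: linear` with `lr_scheduler_warmup_steps: 500` and a base learning rate of 5e-05, i.e. the usual warmup-then-linear-decay schedule. A minimal sketch of that schedule follows; the total step count is an inference from the training log (step 500 ≈ epoch 0.2126, so roughly 2,352 steps per epoch × 10 epochs ≈ 23,520 steps), and `linear_warmup_lr` is a hypothetical helper, not code from this repository:

```python
# Sketch of the linear warmup + linear decay schedule the README describes.
# TOTAL_STEPS is inferred from the log table, not stated in the commit.
BASE_LR = 5e-05
WARMUP_STEPS = 500
TOTAL_STEPS = 23520  # assumption: ~2352 steps/epoch * 10 epochs

def linear_warmup_lr(step: int) -> float:
    """Learning rate at a given optimizer step."""
    if step < WARMUP_STEPS:
        # Ramp linearly from 0 to BASE_LR over the warmup window.
        return BASE_LR * step / WARMUP_STEPS
    # Then decay linearly from BASE_LR down to 0 at the final step.
    return BASE_LR * max(0, TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS)
```

The peak rate of 5e-05 is reached exactly at step 500, which lines up with the first logged evaluation; everything after that point trains on a decaying rate.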
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:96bfe80d122f39cb3502e6a00af00d9faa644e9bc2d3cc68ac7578a75b9cdb43
+oid sha256:58feba1f90c0870701ecb3665cc9c20bc6ee7e8aa7d901476d72bc386944bf6d
 size 433270768