Mardiyyah committed
Commit b8d96f5 · verified · 1 Parent(s): 24a4a18

Model save

Files changed (4):
  1. README.md +17 -10
  2. config.json +4 -4
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -1,7 +1,5 @@
 ---
 library_name: transformers
-language:
-- en
 license: apache-2.0
 base_model: Mardiyyah/variant-tapt_freeze_llrd-LR_5e-05
 tags:
@@ -21,13 +19,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # variant_tapt_freeze_llrd_LR_5e
 
-This model is a fine-tuned version of [Mardiyyah/variant-tapt_freeze_llrd-LR_5e-05](https://huggingface.co/Mardiyyah/variant-tapt_freeze_llrd-LR_5e-05) on the OTAR3088/Variants_IOB-V2 dataset.
+This model is a fine-tuned version of [Mardiyyah/variant-tapt_freeze_llrd-LR_5e-05](https://huggingface.co/Mardiyyah/variant-tapt_freeze_llrd-LR_5e-05) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3584
-- Precision: 0.3820
-- Recall: 0.1575
-- F1: 0.2231
-- Accuracy: 0.8545
+- Loss: 0.0616
+- Precision: 0.8462
+- Recall: 0.8618
+- F1: 0.8539
+- Accuracy: 0.9862
 
 ## Model description
 
@@ -53,14 +51,23 @@ The following hyperparameters were used during training:
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 1
+- num_epochs: 10
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 0.5777 | 1.0 | 34 | 0.3566 | 0.3820 | 0.1575 | 0.2231 | 0.8545 |
+| 0.5514 | 1.0 | 34 | 0.2347 | 0.2767 | 0.2028 | 0.2340 | 0.9168 |
+| 0.1451 | 2.0 | 68 | 0.0934 | 0.6139 | 0.7327 | 0.6681 | 0.9716 |
+| 0.0725 | 3.0 | 102 | 0.0718 | 0.8148 | 0.8111 | 0.8129 | 0.9812 |
+| 0.0485 | 4.0 | 136 | 0.0651 | 0.8136 | 0.8249 | 0.8192 | 0.9831 |
+| 0.0321 | 5.0 | 170 | 0.0605 | 0.8378 | 0.8571 | 0.8474 | 0.9850 |
+| 0.0243 | 6.0 | 204 | 0.0646 | 0.8493 | 0.8571 | 0.8532 | 0.9852 |
+| 0.0181 | 7.0 | 238 | 0.0641 | 0.8468 | 0.8664 | 0.8565 | 0.9864 |
+| 0.0161 | 8.0 | 272 | 0.0603 | 0.8584 | 0.8664 | 0.8624 | 0.9858 |
+| 0.0125 | 9.0 | 306 | 0.0584 | 0.8386 | 0.8618 | 0.85 | 0.9858 |
+| 0.0118 | 10.0 | 340 | 0.0616 | 0.8462 | 0.8618 | 0.8539 | 0.9862 |
 
 
 ### Framework versions
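The per-epoch F1 in the updated results table is the harmonic mean of precision and recall, which can be checked directly against the reported final-epoch numbers (precision 0.8462, recall 0.8618 → F1 0.8539). A minimal sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Final-epoch values from the training-results table above.
print(round(f1_score(0.8462, 0.8618), 4))  # 0.8539
```

The same check holds for the other rows, up to rounding of the reported four-decimal metrics.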
config.json CHANGED
@@ -9,16 +9,16 @@
   "hidden_dropout_prob": 0.1,
   "hidden_size": 768,
   "id2label": {
-    "0": "B-mutant",
-    "1": "O",
+    "0": "O",
+    "1": "B-mutant",
     "2": "I-mutant"
   },
   "initializer_range": 0.02,
   "intermediate_size": 3072,
   "label2id": {
-    "B-mutant": 0,
+    "B-mutant": 1,
     "I-mutant": 2,
-    "O": 1
+    "O": 0
   },
   "layer_norm_eps": 1e-12,
   "max_position_embeddings": 512,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:64f9156291715821f26bc666a36baebf3fcff0896b9befe284e20db2589e59d6
+oid sha256:5d4c77af75d457edb383e59e494800eb4bce0055d07bb513c09025466afadefb
 size 439531324
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2a852e038246ba5d9a152e2353db116322de332a574f47bcf97fe1c717f4a1a7
+oid sha256:0d5c0fe58e4f6d36fb05c04bedfae045dc2f3ef364059dd8d9a7e02f1d369cf4
 size 5688
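The model.safetensors and training_args.bin diffs above only change Git LFS pointer files, which record each blob's content hash (`oid sha256:...`) and size. A downloaded file can therefore be verified by hashing it and comparing against the pointer. A minimal sketch, shown on an in-memory byte string rather than the actual 439 MB weights:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 digest as a lowercase hex string, as in an LFS pointer."""
    return hashlib.sha256(data).hexdigest()

# For large files, hash in chunks instead of reading everything at once.
print(sha256_hex(b"hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Comparing this digest (and the byte size) against the `+` side of the pointer diff confirms the download matches the committed blob.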