Liu-Xiang committed on
Commit 3e9a60e · verified · 1 Parent(s): 2536375

End of training

README.md CHANGED
@@ -17,8 +17,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4645
-- F1: 0.9171
+- Loss: 0.7747
+- F1: 0.8920
 
 ## Model description
 
@@ -38,8 +38,8 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 32
-- eval_batch_size: 16
+- train_batch_size: 64
+- eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
@@ -49,14 +49,14 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | F1     |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 3.5614        | 1.0   | 313  | 1.4645          | 0.7468 |
-| 0.8361        | 2.0   | 626  | 0.6110          | 0.9055 |
-| 0.4892        | 3.0   | 939  | 0.4645          | 0.9171 |
+| No log        | 1.0   | 157  | 1.9070          | 0.6876 |
+| 2.8337        | 2.0   | 314  | 0.9826          | 0.8615 |
+| 1.054         | 3.0   | 471  | 0.7747          | 0.8920 |
 
 
 ### Framework versions
 
 - Transformers 4.36.0
-- Pytorch 2.2.1+cu121
-- Datasets 2.18.0
+- Pytorch 2.0.1+cu118
+- Datasets 2.20.0
 - Tokenizers 0.15.2
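The step counts in the two result tables are mutually consistent: doubling the train batch size from 32 to 64 roughly halves the steps per epoch (313 → 157). A minimal arithmetic sketch, assuming a hypothetical training-set size of 10,000 examples (the card does not state the dataset size, but this value reproduces both tables exactly):

```python
import math

# Hypothetical dataset size -- NOT stated in the model card,
# inferred here only to illustrate the steps-per-epoch arithmetic.
train_examples = 10_000

# Old run: batch size 32 -> epochs end at steps 313, 626, 939.
steps_old = math.ceil(train_examples / 32)
print(steps_old, steps_old * 2, steps_old * 3)  # 313 626 939

# New run: batch size 64 -> epochs end at steps 157, 314, 471.
steps_new = math.ceil(train_examples / 64)
print(steps_new, steps_new * 2, steps_new * 3)  # 157 314 471
```

The "No log" entry in the first row of the new table is what the HF `Trainer` prints when no training-loss logging step fell inside that epoch, consistent with the shorter 157-step epochs.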
logs/events.out.tfevents.1721054605.llm-dpo-workbench-0.278.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6d425e095ebfc96d4f5db6e2472e585cb8805c52f3cb911979b18727009e6ab7
-size 10473
+oid sha256:636cd4a45eede3d594c20d5f3f63acc871c6b00026a0a3902d90f090c3765aa1
+size 11301
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c882408fe0457f059e7ebc3d61848a1d785fbea9736254f4274e61713b3e4437
+oid sha256:d3ba795cc48c1f6cc0ae0a8f0bd4b76fdf5328615c07d611a454dcba2755a550
 size 438189348