ericNguyen0132 committed
Commit de8a7b0 · 1 Parent(s): 08be134

End of training
README.md ADDED
@@ -0,0 +1,69 @@
+ ---
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ - f1
+ model-index:
+ - name: roberta-large-Dep-second
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # roberta-large-Dep-second
+
+ This model is a fine-tuned version of [rafalposwiata/deproberta-large-depression](https://huggingface.co/rafalposwiata/deproberta-large-depression) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.1600
+ - Accuracy: 0.8517
+ - F1: 0.9113
+
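The accuracy and F1 figures reported above can be reproduced from raw predictions. A minimal sketch in plain Python (the sample labels below are hypothetical, for illustration only — the card does not publish the evaluation predictions):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that exactly match the reference labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    # Binary F1: harmonic mean of precision and recall for the positive class.
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Note that with an imbalanced evaluation set, F1 (0.9113 here) can sit well above accuracy (0.8517), since it only scores the positive class.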
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 10
+
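With `lr_scheduler_type: linear` and, assuming the Trainer's default of zero warmup steps (the card does not state a warmup value), the learning rate simply decays from 5e-06 to 0 over the run's 4690 steps (the final step count in the results table below). A sketch of that schedule:

```python
def linear_lr(step, total_steps=4690, base_lr=5e-6):
    # Linear decay with no warmup: base_lr at step 0, 0.0 at total_steps.
    # total_steps = 4690 is taken from the final row of the results table.
    return base_lr * max(0.0, (total_steps - step) / total_steps)
```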
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+ | No log        | 1.0   | 469  | 0.3551          | 0.86     | 0.9188 |
+ | 0.3676        | 2.0   | 938  | 0.4666          | 0.8617   | 0.9198 |
+ | 0.3042        | 3.0   | 1407 | 0.5818          | 0.86     | 0.9170 |
+ | 0.2651        | 4.0   | 1876 | 0.8291          | 0.865    | 0.9200 |
+ | 0.174         | 5.0   | 2345 | 0.8843          | 0.8567   | 0.9155 |
+ | 0.1363        | 6.0   | 2814 | 1.1669          | 0.8317   | 0.8968 |
+ | 0.075         | 7.0   | 3283 | 1.2803          | 0.8283   | 0.8952 |
+ | 0.0401        | 8.0   | 3752 | 1.0247          | 0.8617   | 0.9184 |
+ | 0.0301        | 9.0   | 4221 | 1.2848          | 0.83     | 0.8961 |
+ | 0.0281        | 10.0  | 4690 | 1.1600          | 0.8517   | 0.9113 |
+
+
+ ### Framework versions
+
+ - Transformers 4.30.2
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.13.1
+ - Tokenizers 0.13.3
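One thing worth noticing in the table above: validation loss is lowest after epoch 1 (0.3551) and climbs steadily afterwards while training loss keeps falling, a typical overfitting pattern, and the uploaded final checkpoint (loss 1.1600) is not the lowest-loss one. A small sketch that picks the best epoch from the tabulated results:

```python
# Rows of the training-results table above: (epoch, val_loss, accuracy, f1).
results = [
    (1, 0.3551, 0.86,   0.9188),
    (2, 0.4666, 0.8617, 0.9198),
    (3, 0.5818, 0.86,   0.9170),
    (4, 0.8291, 0.865,  0.9200),
    (5, 0.8843, 0.8567, 0.9155),
    (6, 1.1669, 0.8317, 0.8968),
    (7, 1.2803, 0.8283, 0.8952),
    (8, 1.0247, 0.8617, 0.9184),
    (9, 1.2848, 0.83,   0.8961),
    (10, 1.1600, 0.8517, 0.9113),
]

best_loss = min(results, key=lambda r: r[1])  # epoch with lowest validation loss
best_f1 = max(results, key=lambda r: r[3])    # epoch with highest F1
```

Had the run used `load_best_model_at_end` with an early-stopping callback, the epoch-1 or epoch-4 checkpoint would likely have been kept instead.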
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f2b0286aa5db231294e8811ea25cb398f527f761135cd1efd71cf3a34b7907a5
+ oid sha256:60e72d6db856a42de248c83f0c1220d7fca97452f766234e06c3cef9f1567bfd
  size 1421587189
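The diff above swaps only the `oid` line of the Git LFS pointer: the large binary itself lives in LFS storage, and the repo tracks a tiny pointer file recording its SHA-256 and byte size. A minimal sketch of building such a pointer for an arbitrary blob:

```python
import hashlib

def lfs_pointer(data: bytes) -> str:
    # Build a Git LFS pointer file (spec v1) for a blob:
    # the blob's SHA-256 digest as the oid, plus its size in bytes.
    oid = hashlib.sha256(data).hexdigest()
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{oid}\n"
        f"size {len(data)}\n"
    )
```

Because the new checkpoint has identical architecture and parameter count, its serialized size (1421587189 bytes) is unchanged and only the content hash differs.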
runs/Jul07_16-25-16_79a78a4bf7e8/events.out.tfevents.1688747193.79a78a4bf7e8.670.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:706a4f02a4d5c89f9344012ecb22e5803f0ab583e3324b0377541f3b051329c7
- size 8923
+ oid sha256:fbe2e183c21ba618eb8c510ba53844362ff72260a4c5e34964db4cb1ee2e70ba
+ size 9646