Jsevisal committed
Commit 9624a5b · 1 parent: 7204c38

update model card README.md
Files changed (1):
1. README.md (+17, −26)
README.md CHANGED

```diff
@@ -1,9 +1,5 @@
 ---
 license: apache-2.0
-widget:
-- text: I'm fine. Who is this?
-- text: You can't take anything seriously.
-- text: In the end he''s going to croak, isn''t he?
 tags:
 - generated_from_trainer
 metrics:
@@ -14,11 +10,6 @@ metrics:
 model-index:
 - name: bert-gest-pred-seqeval-partialmatch
   results: []
-datasets:
-- Jsevisal/gesture_pred
-language:
-- en
-pipeline_tag: token-classification
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.733480
-- Precision: 0.831599
-- Recall: 0.831599
-- F1: 0.831599
-- Accuracy: 0.817945
+- Loss: 0.8238
+- Precision: 0.7765
+- Recall: 0.7347
+- F1: 0.7289
+- Accuracy: 0.8355
 
 ## Model description
 
@@ -63,21 +54,21 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
-| 1.8591 | 1.0 | 147 | 1.1031 | 0.7523 | 0.7523 | 0.7523 | 0.7172 |
-| 0.8967 | 2.0 | 294 | 0.8237 | 0.8036 | 0.8036 | 0.8036 | 0.7822 |
-| 0.5801 | 3.0 | 441 | 0.7738 | 0.8251 | 0.8251 | 0.8251 | 0.8088 |
-| 0.3924 | 4.0 | 588 | 0.7335 | 0.8316 | 0.8316 | 0.8316 | 0.8179 |
-| 0.2704 | 5.0 | 735 | 0.7467 | 0.8459 | 0.8459 | 0.8459 | 0.8342 |
-| 0.1802 | 6.0 | 882 | 0.7634 | 0.8420 | 0.8420 | 0.8420 | 0.8316 |
-| 0.1299 | 7.0 | 1029 | 0.8104 | 0.8270 | 0.8270 | 0.8270 | 0.8147 |
-| 0.0968 | 8.0 | 1176 | 0.8489 | 0.8375 | 0.8375 | 0.8375 | 0.8277 |
-| 0.0761 | 9.0 | 1323 | 0.8539 | 0.8459 | 0.8459 | 0.8459 | 0.8362 |
-| 0.0663 | 10.0 | 1470 | 0.8644 | 0.8459 | 0.8459 | 0.8459 | 0.8349 |
+| 1.8976 | 1.0 | 147 | 1.1361 | 0.4802 | 0.4141 | 0.4034 | 0.7009 |
+| 0.916 | 2.0 | 294 | 0.8206 | 0.6045 | 0.5622 | 0.5493 | 0.7744 |
+| 0.5893 | 3.0 | 441 | 0.7711 | 0.7318 | 0.6613 | 0.6747 | 0.7952 |
+| 0.4019 | 4.0 | 588 | 0.7270 | 0.7713 | 0.7201 | 0.7277 | 0.8199 |
+| 0.2713 | 5.0 | 735 | 0.7353 | 0.8000 | 0.7512 | 0.7545 | 0.8349 |
+| 0.1831 | 6.0 | 882 | 0.7802 | 0.7958 | 0.7245 | 0.7375 | 0.8303 |
+| 0.1343 | 7.0 | 1029 | 0.7785 | 0.7652 | 0.7351 | 0.7204 | 0.8362 |
+| 0.0989 | 8.0 | 1176 | 0.8017 | 0.7753 | 0.7317 | 0.7313 | 0.8322 |
+| 0.079 | 9.0 | 1323 | 0.8281 | 0.7844 | 0.7297 | 0.7325 | 0.8349 |
+| 0.0673 | 10.0 | 1470 | 0.8238 | 0.7765 | 0.7347 | 0.7289 | 0.8355 |
 
 
 ### Framework versions
 
-- Transformers 4.26.1
+- Transformers 4.27.3
 - Pytorch 1.13.1+cu116
 - Datasets 2.10.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
```
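One thing the updated training table makes visible is that the headline metrics come from the last epoch, not the best one. A minimal sketch that picks out the best checkpoint programmatically, using `(epoch, validation_loss, f1)` values copied from the new table (the tuple layout is just for illustration):

```python
# (epoch, validation_loss, f1) rows copied from the updated training table.
history = [
    (1, 1.1361, 0.4034),
    (2, 0.8206, 0.5493),
    (3, 0.7711, 0.6747),
    (4, 0.7270, 0.7277),
    (5, 0.7353, 0.7545),
    (6, 0.7802, 0.7375),
    (7, 0.7785, 0.7204),
    (8, 0.8017, 0.7313),
    (9, 0.8281, 0.7325),
    (10, 0.8238, 0.7289),
]

# Validation loss bottoms out well before training ends: it rises again
# after epoch 4 even though training loss keeps falling (overfitting).
best_by_loss = min(history, key=lambda row: row[1])

# The best F1 also lands on an earlier epoch than the reported one.
best_by_f1 = max(history, key=lambda row: row[2])

print(best_by_loss)  # (4, 0.727, 0.7277)
print(best_by_f1)    # (5, 0.7353, 0.7545)
```

Since the reported evaluation results match the epoch-10 row, the final checkpoint appears to have been kept; had the Trainer been run with `load_best_model_at_end=True`, the headline numbers would instead correspond to the lowest-loss epoch.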