Jsevisal committed
Commit 5ffc87e · 1 Parent(s): 472b748

Update README.md
Files changed (1): README.md (+13 −6)
README.md CHANGED

@@ -1,6 +1,10 @@
 ---
 tags:
 - generated_from_trainer
+widget:
+- text: I'm fine. Who is this?
+- text: You can't take anything seriously.
+- text: In the end he's going to croak, isn't he?
 metrics:
 - precision
 - recall
@@ -9,6 +13,9 @@ metrics:
 model-index:
 - name: roberta-gest-pred-seqeval-partialmatch
   results: []
+datasets:
+- Jsevisal/gesture_pred
+pipeline_tag: token-classification
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,11 +25,11 @@ should probably proofread and complete it, then remove this comment. -->

 This model is a fine-tuned version of [xlm-roberta-large-finetuned-conll03-english](https://huggingface.co/xlm-roberta-large-finetuned-conll03-english) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.9160
-- Precision: 0.8093
-- Recall: 0.7469
-- F1: 0.7331
-- Accuracy: 0.8633
+- Loss: 0.6258
+- Precision: 0.7927
+- Recall: 0.7354
+- F1: 0.7381
+- Accuracy: 0.8323

 ## Model description

@@ -70,4 +77,4 @@ The following hyperparameters were used during training:
 - Transformers 4.27.3
 - Pytorch 1.13.1+cu116
 - Datasets 2.10.1
-- Tokenizers 0.13.2
+- Tokenizers 0.13.2
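The card reports precision, recall, F1, and accuracy for this token-classification model; per the model name, those numbers come from seqeval with partial matching, which scores at the entity level. As a rough illustration only (not the Trainer's actual seqeval evaluation), micro-averaged token-level versions of these scores can be computed by hand; the tag scheme and helper below are hypothetical:

```python
def token_scores(true_tags, pred_tags):
    """Toy micro-averaged precision/recall/F1 plus accuracy over token tags.

    Illustrative only: the card's metrics come from seqeval's
    entity-level partial-match evaluation, not this token-level sketch.
    """
    assert len(true_tags) == len(pred_tags)
    # A non-"O" prediction counts as correct only if it matches exactly.
    correct = sum(t == p and t != "O" for t, p in zip(true_tags, pred_tags))
    pred_pos = sum(p != "O" for p in pred_tags)   # predicted labeled tokens
    true_pos = sum(t != "O" for t in true_tags)   # gold labeled tokens
    precision = correct / pred_pos if pred_pos else 0.0
    recall = correct / true_pos if true_pos else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = sum(t == p for t, p in zip(true_tags, pred_tags)) / len(true_tags)
    return precision, recall, f1, accuracy

# Hypothetical BIO-style gesture tags, purely for illustration.
true = ["O", "B-GEST", "I-GEST", "O"]
pred = ["O", "B-GEST", "O", "O"]
print(token_scores(true, pred))
```

With these toy sequences, one of two gold labeled tokens is recovered, so recall is 0.5 while precision is 1.0; partial-match entity scoring, as used by the card, would credit the truncated span differently.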