Areepatw committed on
Commit
f744fc8
·
verified ·
1 Parent(s): 5b2c956

Initial model upload

Files changed (1)
  1. README.md +11 -11
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 library_name: transformers
-license: apache-2.0
-base_model: bert-base-uncased
+license: mit
+base_model: xlm-roberta-base
 tags:
 - generated_from_trainer
 datasets:
@@ -10,7 +10,7 @@ metrics:
 - accuracy
 - f1
 model-index:
-- name: bert-multirc
+- name: xlmroberta-multirc
   results:
   - task:
       name: Text Classification
@@ -24,22 +24,22 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.574463696369637
+      value: 0.5719884488448845
     - name: F1
       type: f1
-      value: 0.5000357077611722
+      value: 0.4162508774824471
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# bert-multirc
+# xlmroberta-multirc
 
-This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the super_glue dataset.
+This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the super_glue dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6812
-- Accuracy: 0.5745
-- F1: 0.5000
+- Loss: 0.6823
+- Accuracy: 0.5720
+- F1: 0.4163
 
 ## Model description
 
@@ -71,7 +71,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
-| 0.6862 | 1.0 | 1703 | 0.6812 | 0.5745 | 0.5000 |
+| 0.6873 | 1.0 | 1703 | 0.6823 | 0.5720 | 0.4163 |
 
 
 ### Framework versions
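The card's evaluation rows report accuracy together with binary F1 over the model's 0/1 answer predictions. As a self-contained sketch of how those two metrics relate (the helper `accuracy_and_f1` and the toy predictions below are illustrative, not the model's actual outputs):

```python
def accuracy_and_f1(preds, labels):
    """Accuracy and binary F1 over 0/1 labels, as paired in the card's metrics table."""
    assert len(preds) == len(labels) and len(preds) > 0
    correct = sum(p == y for p, y in zip(preds, labels))
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))  # true positives
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))  # false positives
    fn = sum(p == 0 and y == 1 for p, y in zip(preds, labels))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return correct / len(preds), f1

# Toy example: 4 predictions against 4 gold labels.
acc, f1 = accuracy_and_f1([1, 0, 1, 1], [1, 0, 0, 1])
# acc = 3/4 = 0.75; tp=2, fp=1, fn=0 → precision=2/3, recall=1.0, f1 = 0.8
```

Note how the two metrics can diverge sharply, as in the table above (accuracy ≈ 0.57 vs. F1 ≈ 0.42): a classifier that leans heavily toward one class can keep accuracy near the majority-class rate while recall, and hence F1, collapses.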