Areepatw committed on
Commit 64da771 · verified · 1 Parent(s): f75080b

Initial model upload

Files changed (1)
  1. README.md +33 -10
README.md CHANGED
@@ -1,22 +1,45 @@
  ---
  library_name: transformers
- license: mit
- base_model: roberta-base
+ license: apache-2.0
+ base_model: bert-base-multilingual-uncased
  tags:
  - generated_from_trainer
+ datasets:
+ - super_glue
+ metrics:
+ - accuracy
+ - f1
  model-index:
- - name: roberta-sst2
-   results: []
+ - name: mbert-multirc
+   results:
+   - task:
+       name: Text Classification
+       type: text-classification
+     dataset:
+       name: super_glue
+       type: super_glue
+       config: multirc
+       split: validation
+       args: multirc
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.5759075907590759
+     - name: F1
+       type: f1
+       value: 0.5048127206005825
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->
 
- # roberta-sst2
+ # mbert-multirc
 
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
+ This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the super_glue dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2146
+ - Loss: 0.6812
+ - Accuracy: 0.5759
+ - F1: 0.5048
 
  ## Model description
 
@@ -46,9 +69,9 @@ The following hyperparameters were used during training:
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:-----:|:----:|:---------------:|
- | 0.1975        | 1.0   | 4210 | 0.2146          |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
+ | 0.6862        | 1.0   | 1703 | 0.6812          | 0.5759   | 0.5048 |
 
 
  ### Framework versions
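As a usage sketch (not part of this commit): super_glue/multirc is a binary classification task in which a paragraph is paired with a question–answer candidate, and the model predicts whether the candidate answer is correct. The helper below is hypothetical; the field names (`paragraph`, `question`, `answer`, `label`) follow the Hugging Face `super_glue` dataset's `multirc` config.

```python
# Hypothetical helper (not from this commit): format a super_glue/multirc
# example as the (segment_a, segment_b) text pair a BERT-style sequence
# classifier is commonly fine-tuned on.

def build_pair(example):
    # Segment A: the supporting paragraph.
    # Segment B: question and candidate answer joined, so the classifier
    # judges whether the answer correctly answers the question.
    return example["paragraph"], example["question"] + " " + example["answer"]

sample = {
    "paragraph": "Susan threw a party and invited her whole class.",
    "question": "What did Susan do?",
    "answer": "She threw a party.",
    "label": 1,  # 1 = the candidate answer is correct
}

segment_a, segment_b = build_pair(sample)
print(segment_b)  # "What did Susan do? She threw a party."
```

Such pairs would then be encoded with the model's tokenizer as a text pair, e.g. `tokenizer(segment_a, segment_b, truncation=True)`, before being passed to the fine-tuned classifier.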