saattrupdan committed ca2b196 (1 parent: 333a54d)

Update README.md

Files changed (1): README.md (+81, −25)
---
license: mit
language:
- am
- ar
- hy
- eu
- bn
- bs
- bg
- my
- hr
- ca
- cs
- da
- nl
- en
- et
- fi
- fr
- ka
- de
- el
- gu
- ht
- iw
- hi
- hu
- is
- in
- it
- ja
- kn
- km
- ko
- lo
- lv
- lt
- ml
- mr
- ne
- no
- or
- pa
- ps
- fa
- pl
- pt
- ro
- ru
- sr
- zh
- sd
- si
- sk
- sl
- es
- sv
- tl
- ta
- te
- th
- tr
- uk
- ur
- ug
- vi
- cy
tags:
- generated_from_trainer
model-index:
- name: verdict-classifier-en
  results:
  - task:
      type: text-classification
      name: Verdict Classification
widget:
- "One might think that this is true, but it's taken out of context."
---

# English Verdict Classifier

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on 1,500 deduplicated verdicts from the [Google Fact Check Tools API](https://developers.google.com/fact-check/tools/api/reference/rest/v1alpha1/claims/search), translated into 66 languages with the [Google Cloud Translation API](https://cloud.google.com/translate/docs/reference/rest/).
It achieves the following results on the evaluation set, which consists of 1,000 such verdicts, here including duplicates to represent the true distribution:

- Loss: 0.1856
- F1 Macro: 0.8148
- F1 Misinformation: 0.9764
- F1 Factual: 0.9375
- F1 Other: 0.5306
- Precision Macro: 0.8117
- Precision Misinformation: 0.9775
- Precision Factual: 0.9375
- Precision Other: 0.52
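As a quick sanity check (this computation is ours, not part of the card), the macro scores above are the unweighted means of the three per-class scores:

```python
# Sanity check: macro F1 and macro precision are the unweighted means
# of the per-class scores reported in the model card above.
f1_per_class = {"misinformation": 0.9764, "factual": 0.9375, "other": 0.5306}
precision_per_class = {"misinformation": 0.9775, "factual": 0.9375, "other": 0.52}

f1_macro = sum(f1_per_class.values()) / len(f1_per_class)
precision_macro = sum(precision_per_class.values()) / len(precision_per_class)

print(round(f1_macro, 4))         # 0.8148, matching "F1 Macro"
print(round(precision_macro, 4))  # 0.8117, matching "Precision Macro"
```

The large gap between the "Other" class and the two main classes is what pulls the macro averages well below the misinformation scores.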
 
 
 
 
 
 
 
 
 
 
 
 
## Training procedure

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Misinformation | F1 Factual | F1 Other | Precision Macro | Precision Misinformation | Precision Factual | Precision Other |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------------:|:----------:|:--------:|:---------------:|:------------------------:|:-----------------:|:---------------:|
| 0.8707 | 1.0 | 3758 | 0.2414 | 0.7832 | 0.9639 | 0.7857 | 0.6 | 0.7950 | 0.9683 | 0.9167 | 0.5 |
| 0.3918 | 2.0 | 7516 | 0.1856 | 0.8148 | 0.9764 | 0.9375 | 0.5306 | 0.8117 | 0.9775 | 0.9375 | 0.52 |
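The card reports per-class F1 and precision but not recall. Since F1 is the harmonic mean of precision and recall, the implied recalls can be recovered from the epoch-2 scores; the sketch below (our derivation, not part of the card) does exactly that:

```python
# F1 = 2PR / (P + R)  =>  R = F1 * P / (2P - F1).
# Applying this to the final-epoch (epoch 2) per-class scores recovers
# the per-class recalls implied by the reported F1 and precision values.
def implied_recall(f1: float, precision: float) -> float:
    return f1 * precision / (2 * precision - f1)

scores = {  # class: (F1, precision), from the epoch-2 row above
    "misinformation": (0.9764, 0.9775),
    "factual": (0.9375, 0.9375),
    "other": (0.5306, 0.52),
}

for label, (f1, p) in scores.items():
    print(label, round(implied_recall(f1, p), 4))
```

For the "factual" class, where F1 equals precision, the implied recall is identical (0.9375); for "other", recall comes out slightly above the 0.52 precision, consistent with its F1 of 0.5306.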