Sami92 committed on
Commit 1380e35 · verified · 1 Parent(s): 49e9598

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
@@ -60,13 +60,14 @@ The training proceeded in two steps. First, the model was trained on a weakly an
 The weak annotation was performed using GPT-4o. The prompt for labeling the data can be found [here](https://huggingface.co/Sami92/XLM-R-Large-ClaimDetection/blob/main/FactualityPrompt_GPT.txt). The data was taken from Telegram, more specifically from a set of about 200 channels that have been subject to a fact-check from Correctiv, dpa, Faktenfuchs or AFP. The test data consists of 149 Telegram posts. The performance is as follows.
 
 | | precision | recall | f1-score | support |
-|----------------|-----------|--------|----------|---------|
-| **factual** | 0.88 | 0.92 | 0.90 | 71 |
-| **non-factual**| 0.92 | 0.88 | 0.90 | 78 |
+|----------------|:---------:|:------:|:--------:|:-------:|
+| **factual** | 0.88 | 0.92 | 0.90 | 71 |
+| **non-factual**| 0.92 | 0.88 | 0.90 | 78 |
 | | | | | |
-| **accuracy** | | | 0.90 | 149 |
-| **macro avg** | 0.90 | 0.90 | 0.90 | 149 |
-| **weighted avg** | 0.90 | 0.90 | 0.90 | 149 |
+| **accuracy** | | | 0.90 | 149 |
+| **macro avg** | 0.90 | 0.90 | 0.90 | 149 |
+| **weighted avg** | 0.90 | 0.90 | 0.90 | 149 |
+
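For readers checking the summary rows of the table, the per-class and averaged scores follow the standard precision/recall/F1 definitions. The sketch below is not the authors' evaluation code; it uses tiny made-up labels rather than the real 149-post test set, purely to show how each cell is derived:

```python
# Minimal sketch (assumed, not the authors' code): computing the
# precision/recall/F1 cells of a classification-report-style table
# for the two classes "factual" and "non-factual".

def prf(y_true, y_pred, positive):
    """Precision, recall, F1 for one class treated as the positive label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Synthetic example labels (hypothetical, not the actual test data)
y_true = ["factual", "factual", "non-factual", "non-factual", "factual"]
y_pred = ["factual", "non-factual", "non-factual", "non-factual", "factual"]

for cls in ("factual", "non-factual"):
    p, r, f = prf(y_true, y_pred, cls)
    print(f"{cls}: precision={p:.2f} recall={r:.2f} f1={f:.2f}")

# Summary rows: accuracy over all posts, and the unweighted (macro)
# average of the per-class F1 scores.
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
macro_f1 = sum(prf(y_true, y_pred, c)[2] for c in ("factual", "non-factual")) / 2
print(f"accuracy={accuracy:.2f} macro_f1={macro_f1:.2f}")
```

The "weighted avg" row is the same idea with each class's score weighted by its support (71 and 78) instead of averaged equally; with near-balanced classes, as here, macro and weighted averages nearly coincide.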