# PopBERT
initial commit
# Performance

This table presents the classification report for a 5-fold cross-validation of our model, with identical hyperparameters across all 5 runs. The final, published model was then trained on all data with the same hyperparameters. On average, the model performs best for anti-elitism and worst for the detection of right-wing host ideology. The relatively small standard deviations suggest that the split into training and test data has little impact on model performance, so the final model can be expected to perform comparably to what is reported here. Values are given as mean (standard deviation) over the 5 folds.

| Dimension           | Precision     | Recall        | F1            |
|---------------------|---------------|---------------|---------------|
| Anti-Elitism        | 0.812 (0.013) | 0.885 (0.006) | 0.847 (0.007) |
| People-Centrism     | 0.670 (0.011) | 0.725 (0.040) | 0.696 (0.019) |
| Left-Wing Ideology  | 0.664 (0.023) | 0.771 (0.024) | 0.713 (0.010) |
| Right-Wing Ideology | 0.654 (0.029) | 0.698 (0.050) | 0.674 (0.031) |
|                     |               |               |               |
| micro avg           | 0.732 (0.009) | 0.805 (0.006) | 0.767 (0.007) |
| macro avg           | 0.700 (0.011) | 0.770 (0.010) | 0.733 (0.010) |
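
The macro average in the last row is simply the unweighted mean over the four dimensions (unlike the micro average, which pools all label decisions and therefore cannot be recomputed from this table alone). A minimal sketch recomputing the macro row from the per-dimension means above (the dictionary keys are shorthand introduced here, not names used by the model):

```python
# Per-dimension mean scores from the table above: (precision, recall, F1).
scores = {
    "anti_elitism":    (0.812, 0.885, 0.847),
    "people_centrism": (0.670, 0.725, 0.696),
    "left_wing":       (0.664, 0.771, 0.713),
    "right_wing":      (0.654, 0.698, 0.674),
}

# Macro averaging: take the unweighted mean over dimensions, per metric.
macro_precision, macro_recall, macro_f1 = (
    sum(v[i] for v in scores.values()) / len(scores) for i in range(3)
)

# Compare with the macro avg row, up to rounding: 0.700 / 0.770 / 0.733.
print(f"{macro_precision:.3f} {macro_recall:.3f} {macro_f1:.3f}")
```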