bitwise31337 committed on
Commit 5a28f9f · verified · 1 Parent(s): 817f561

Update README.md

Files changed (1)
  1. README.md +45 -5
README.md CHANGED
@@ -1,10 +1,50 @@
  ---
  language:
- - en
  base_model:
- - FacebookAI/roberta-base
- license: cc-by-4.0
  ---

- Model trained on the MAGPIE dataset introduced in [MAGPIE: Multi-Task Media-Bias Analysis Generalization for Pre-Trained Identification of Expressions
- ](https://arxiv.org/abs/2403.07910) paper.
 
  ---
+ license: cc-by-nc-4.0
  language:
+ - multilingual
  base_model:
+ - FacebookAI/xlm-roberta-base
+ pipeline_tag: text-classification
  ---

+ This model was pre-trained via multi-task learning on LBM (Large Bias Mixture), a collection of 59 bias-related tasks, introduced in [MAGPIE: Multi-Task Media-Bias Analysis Generalization for Pre-Trained Identification of Expressions
+ ](https://arxiv.org/abs/2403.07910).
+ ---
+
+ ## Citation
+
+ **Code repository**: https://github.com/Media-Bias-Group/magpie-multi-task
+
+ **Paper**: https://aclanthology.org/2024.lrec-main.952/
+
+
+ If you use this model, please cite the following paper(s):
+
+ ```bibtex
+ @inproceedings{horych-etal-2024-magpie,
+     title = "{MAGPIE}: Multi-Task Analysis of Media-Bias Generalization with Pre-Trained Identification of Expressions",
+     author = "Horych, Tom{\'a}{\v{s}} and
+       Wessel, Martin Paul and
+       Wahle, Jan Philip and
+       Ruas, Terry and
+       Wa{\ss}muth, Jerome and
+       Greiner-Petter, Andr{\'e} and
+       Aizawa, Akiko and
+       Gipp, Bela and
+       Spinde, Timo",
+     editor = "Calzolari, Nicoletta and
+       Kan, Min-Yen and
+       Hoste, Veronique and
+       Lenci, Alessandro and
+       Sakti, Sakriani and
+       Xue, Nianwen",
+     booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
+     month = may,
+     year = "2024",
+     address = "Torino, Italia",
+     publisher = "ELRA and ICCL",
+     url = "https://aclanthology.org/2024.lrec-main.952",
+     pages = "10903--10920",
+     abstract = "Media bias detection poses a complex, multifaceted problem traditionally tackled using single-task models and small in-domain datasets, consequently lacking generalizability. To address this, we introduce MAGPIE, a large-scale multi-task pre-training approach explicitly tailored for media bias detection. To enable large-scale pre-training, we construct Large Bias Mixture (LBM), a compilation of 59 bias-related tasks. MAGPIE outperforms previous approaches in media bias detection on the Bias Annotation By Experts (BABE) dataset, with a relative improvement of 3.3{\%} F1-score. Furthermore, using a RoBERTa encoder, we show that MAGPIE needs only 15{\%} of fine-tuning steps compared to single-task approaches. We provide insight into task learning interference and show that sentiment analysis and emotion detection help learning of all other tasks, and scaling the number of tasks leads to the best results. MAGPIE confirms that MTL is a promising approach for addressing media bias detection, enhancing the accuracy and efficiency of existing models. Furthermore, LBM is the first available resource collection focused on media bias MTL.",
+ }
+ ```
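
The updated card tags the checkpoint as `text-classification` on top of `FacebookAI/xlm-roberta-base`. Below is a minimal sketch of loading it with the `transformers` pipeline, assuming the uploaded weights include a usable sequence-classification head as the `pipeline_tag` suggests; the repository id is a placeholder, since the commit does not show it.

```python
from transformers import pipeline

# Placeholder repository id; replace with the actual Hub id of this model.
MODEL_ID = "<namespace>/<magpie-model>"

# Build a text-classification pipeline on the XLM-RoBERTa-based checkpoint.
# Assumes the checkpoint ships a sequence-classification head, per the card's
# `pipeline_tag: text-classification`.
classifier = pipeline("text-classification", model=MODEL_ID)

# Score a sentence for media bias; label names depend on the fine-tuned head.
print(classifier("The senator's reckless plan would devastate the economy."))
```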