nayohan committed · verified
Commit 4e3b7fa · Parent(s): e8418d8

Update README.md

Files changed (1): README.md (+23 −0)
README.md CHANGED
@@ -26,4 +26,27 @@ configs:
   data_files:
   - split: train
     path: data/train-*
+license: cc-by-sa-4.0
+language:
+- ko
+tags:
+- safety
 ---
+
+reference: [https://github.com/jason9693/APEACH](https://github.com/jason9693/APEACH)
+```
+@inproceedings{yang-etal-2022-apeach,
+    title = "{APEACH}: Attacking Pejorative Expressions with Analysis on Crowd-Generated Hate Speech Evaluation Datasets",
+    author = "Yang, Kichang and
+      Jang, Wonjun and
+      Cho, Won Ik",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+    month = dec,
+    year = "2022",
+    address = "Abu Dhabi, United Arab Emirates",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.findings-emnlp.525",
+    pages = "7076--7086",
+    abstract = "In hate speech detection, developing training and evaluation datasets across various domains is the critical issue. Whereas, major approaches crawl social media texts and hire crowd-workers to annotate the data. Following this convention often restricts the scope of pejorative expressions to a single domain lacking generalization. Sometimes domain overlap between training corpus and evaluation set overestimate the prediction performance when pretraining language models on low-data language. To alleviate these problems in Korean, we propose APEACH that asks unspecified users to generate hate speech examples followed by minimal post-labeling. We find that APEACH can collect useful datasets that are less sensitive to the lexical overlaps between the pretraining corpus and the evaluation set, thereby properly measuring the model performance.",
+}
+```