rbroc committed
Commit ecad05e · Parent: 7d7039b

add dataset
README.md ADDED
---
license: cc-by-4.0
task_categories:
- text-classification
language:
- ar
- pt
- en
- fr
- it
- zh
- es
- nl
- hi
- de
size_categories:
- 10K<n<100K
---

#### Description
Combines the multilingual HateCheck datasets (10 languages, including English) by Paul Roettger and colleagues (2021, 2022).

The original English dataset can be found at https://huggingface.co/datasets/Paul/hatecheck.
Datasets for the other languages are available at:
- https://huggingface.co/datasets/Paul/hatecheck-arabic
- https://huggingface.co/datasets/Paul/hatecheck-mandarin
- https://huggingface.co/datasets/Paul/hatecheck-german
- https://huggingface.co/datasets/Paul/hatecheck-french
- https://huggingface.co/datasets/Paul/hatecheck-hindi
- https://huggingface.co/datasets/Paul/hatecheck-italian
- https://huggingface.co/datasets/Paul/hatecheck-dutch
- https://huggingface.co/datasets/Paul/hatecheck-portuguese
- https://huggingface.co/datasets/Paul/hatecheck-spanish

Make sure to credit the authors and cite the relevant papers (see citations below) if you use these datasets.

#### Bibtex citation
```
@inproceedings{rottger-etal-2021-hatecheck,
    title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
    author = {R{\"o}ttger, Paul and
      Vidgen, Bertie and
      Nguyen, Dong and
      Waseem, Zeerak and
      Margetts, Helen and
      Pierrehumbert, Janet},
    editor = "Zong, Chengqing and
      Xia, Fei and
      Li, Wenjie and
      Navigli, Roberto",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.4",
    doi = "10.18653/v1/2021.acl-long.4",
    pages = "41--58",
    abstract = "Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck{'}s utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.",
}

@inproceedings{rottger-etal-2022-multilingual,
    title = "Multilingual {H}ate{C}heck: Functional Tests for Multilingual Hate Speech Detection Models",
    author = {R{\"o}ttger, Paul and
      Seelawi, Haitham and
      Nozza, Debora and
      Talat, Zeerak and
      Vidgen, Bertie},
    editor = "Narang, Kanika and
      Mostafazadeh Davani, Aida and
      Mathias, Lambert and
      Vidgen, Bertie and
      Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.15",
    doi = "10.18653/v1/2022.woah-1.15",
    pages = "154--169",
    abstract = "Hate speech detection models are typically evaluated on held-out test sets. However, this risks painting an incomplete and potentially misleading picture of model performance because of increasingly well-documented systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, recent research has thus introduced functional tests for hate speech detection models. However, these tests currently only exist for English-language content, which means that they cannot support the development of more effective models in other languages spoken by billions across the world. To help address this issue, we introduce Multilingual HateCheck (MHC), a suite of functional tests for multilingual hate speech detection models. MHC covers 34 functionalities across ten languages, which is more languages than any other hate speech dataset. To illustrate MHC{'}s utility, we train and test a high-performing multilingual hate speech detection model, and reveal critical model weaknesses for monolingual and cross-lingual applications.",
}
```
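Each `{lang}/test.jsonl` shard stores one JSON object per line, with the fields defined by the loader script (`text`, `is_hateful`, `functionality`) plus the `lang` column added by `scripts/make_dataset.py`. A minimal sketch of parsing one such line; the field values here are illustrative, not drawn from the real data:

```python
import json

# One line of e.g. eng/test.jsonl. Field names match the loader script;
# field values below are made up for illustration.
line = (
    '{"text": "You are just a woman.", '
    '"is_hateful": "non-hateful", '
    '"functionality": "ident_neutral_nh", '
    '"lang": "eng"}'
)
record = json.loads(line)
print(record["lang"], record["is_hateful"])  # → eng non-hateful
```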
ara/test.jsonl ADDED
cmn/test.jsonl ADDED
deu/test.jsonl ADDED
eng/test.jsonl ADDED
fra/test.jsonl ADDED
hin/test.jsonl ADDED
ita/test.jsonl ADDED
multi-hatecheck.py ADDED
import json

import datasets


_DESCRIPTION = """\
Combines multilingual HateCheck datasets (10 languages, including English), by Paul Roettger.
The original English dataset can be found at https://huggingface.co/datasets/Paul/hatecheck.
Datasets for other languages are found at:
- https://huggingface.co/datasets/Paul/hatecheck-arabic
- https://huggingface.co/datasets/Paul/hatecheck-mandarin
- https://huggingface.co/datasets/Paul/hatecheck-german
- https://huggingface.co/datasets/Paul/hatecheck-french
- https://huggingface.co/datasets/Paul/hatecheck-hindi
- https://huggingface.co/datasets/Paul/hatecheck-italian
- https://huggingface.co/datasets/Paul/hatecheck-dutch
- https://huggingface.co/datasets/Paul/hatecheck-portuguese
- https://huggingface.co/datasets/Paul/hatecheck-spanish
Make sure to credit the authors and cite the relevant papers if you use these datasets.
"""

_CITATION = """\
@inproceedings{rottger-etal-2021-hatecheck,
    title = "{H}ate{C}heck: Functional Tests for Hate Speech Detection Models",
    author = {R{\"o}ttger, Paul and
      Vidgen, Bertie and
      Nguyen, Dong and
      Waseem, Zeerak and
      Margetts, Helen and
      Pierrehumbert, Janet},
    editor = "Zong, Chengqing and
      Xia, Fei and
      Li, Wenjie and
      Navigli, Roberto",
    booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.acl-long.4",
    doi = "10.18653/v1/2021.acl-long.4",
    pages = "41--58",
}

@inproceedings{rottger-etal-2022-multilingual,
    title = "Multilingual {H}ate{C}heck: Functional Tests for Multilingual Hate Speech Detection Models",
    author = {R{\"o}ttger, Paul and
      Seelawi, Haitham and
      Nozza, Debora and
      Talat, Zeerak and
      Vidgen, Bertie},
    editor = "Narang, Kanika and
      Mostafazadeh Davani, Aida and
      Mathias, Lambert and
      Vidgen, Bertie and
      Talat, Zeerak",
    booktitle = "Proceedings of the Sixth Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2022",
    address = "Seattle, Washington (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.woah-1.15",
    doi = "10.18653/v1/2022.woah-1.15",
    pages = "154--169",
}
"""

_LICENSE = "Original datasets are released under cc-by-4.0."

_LANGUAGES = {
    "ara": "Arabic",
    "cmn": "Mandarin",
    "eng": "English",
    "deu": "German",
    "fra": "French",
    "hin": "Hindi",
    "ita": "Italian",
    "nld": "Dutch",
    "por": "Portuguese",
    "spa": "Spanish",
}

_ALL_LANGUAGES = "all_languages"
_DOWNLOAD_URL = "{lang}/{split}.jsonl"
_VERSION = "2.18.0"


class MultiHatecheckConfig(datasets.BuilderConfig):
    """BuilderConfig for MultiHatecheck."""

    def __init__(self, languages=None, **kwargs):
        super().__init__(version=datasets.Version(_VERSION, ""), **kwargs)
        self.languages = languages


class MultiHatecheck(datasets.GeneratorBasedBuilder):
    """Multilingual HateCheck corpus, by Paul Roettger."""

    # One config per language, plus an "all_languages" config that loads every shard.
    BUILDER_CONFIGS = [
        MultiHatecheckConfig(
            name=_ALL_LANGUAGES,
            languages=_LANGUAGES,
            description="Hate speech detection dataset with binary (hateful vs non-hateful) labels. Includes 25+ distinct types of hate and challenging non-hate.",
        )
    ] + [
        MultiHatecheckConfig(
            name=lang,
            languages=[lang],
            description=f"{_LANGUAGES[lang]} examples of hate speech, with binary (hateful vs non-hateful) labels. Includes 25+ distinct types of hate and challenging non-hate.",
        )
        for lang in _LANGUAGES
    ]
    BUILDER_CONFIG_CLASS = MultiHatecheckConfig
    DEFAULT_CONFIG_NAME = _ALL_LANGUAGES

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "is_hateful": datasets.Value("string"),
                    "functionality": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        test_urls = [_DOWNLOAD_URL.format(split="test", lang=lang) for lang in self.config.languages]
        test_paths = dl_manager.download_and_extract(test_urls)

        return [
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"file_paths": test_paths}),
        ]

    def _generate_examples(self, file_paths):
        # One running integer key across all language files.
        row_count = 0
        for file_path in file_paths:
            with open(file_path, "r", encoding="utf-8") as f:
                for line in f:
                    yield row_count, json.loads(line)
                    row_count += 1
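The loader's `_generate_examples` streams every shard line by line and yields `(key, example)` pairs with a single running key, so keys stay unique across languages. A self-contained sketch of that logic over in-memory files; the two records are invented stand-ins for real shard contents:

```python
import io
import json

def generate_examples(files):
    # Mirror of the loader's _generate_examples: one running integer key
    # across all files, one JSON object per line.
    row_count = 0
    for f in files:
        for line in f:
            yield row_count, json.loads(line)
            row_count += 1

# Two tiny in-memory "files" standing in for {lang}/test.jsonl shards.
fake_files = [
    io.StringIO('{"text": "a", "is_hateful": "hateful", "functionality": "x", "lang": "eng"}\n'),
    io.StringIO('{"text": "b", "is_hateful": "non-hateful", "functionality": "y", "lang": "deu"}\n'),
]
examples = list(generate_examples(fake_files))
print(examples[0][0], examples[1][1]["lang"])  # → 0 deu
```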
nld/test.jsonl ADDED
por/test.jsonl ADDED
scripts/make_dataset.py ADDED
from __future__ import annotations

from pathlib import Path

import numpy as np

import datasets


# ISO 639-3 code -> suffix of the upstream HuggingFace dataset name.
_HF_AFFIX = {
    "ara": "arabic",
    "cmn": "mandarin",
    "eng": "",  # the English dataset has no language suffix
    "deu": "german",
    "fra": "french",
    "hin": "hindi",
    "ita": "italian",
    "nld": "dutch",
    "por": "portuguese",
    "spa": "spanish",
}

# Pinned revisions of the upstream datasets, for reproducibility.
_REVISION_DICT = {
    "ara": "65eb7455a05cb77b3ae0c69d444569a8eee54628",
    "cmn": "617d3e9fccd186277297cc305f6588af7384b008",
    "eng": "9d2ac89df04254e5c427bcc8d61b6d6c83a1f59b",
    "deu": "5229a5cc475f36c08d03ca52f0ccb005705e60d2",
    "fra": "5d3085f2129139abc10d2b58becd4d4f2978e5d5",
    "hin": "e9e68e1a4db04726b9278192377049d0f9693012",
    "ita": "21e3d5c827cb60619a89988b24979850a7af85a5",
    "nld": "d622427417d37a8d74e110e6289bc29af4ba4056",
    "por": "323bdf67e0fbd3d7f8086fad0971b5bd5a62524b",
    "spa": "a7ea759535bb9fad6361cca151cf94a46e88edf3",
}


def _transform(dataset):
    # Rename the upstream columns and keep only the fields we need.
    rename_dict = {"test_case": "text", "label_gold": "is_hateful"}
    dataset = dataset.rename_columns(rename_dict)
    keep_cols = ["text", "is_hateful", "functionality"]
    remove_cols = [col for col in dataset["test"].column_names if col not in keep_cols]
    dataset = dataset.remove_columns(remove_cols)
    return dataset


def make_dataset():
    """Load the per-language datasets from the HuggingFace hub and export them as JSONL."""
    ds = {}
    for lcode, affix in _HF_AFFIX.items():
        # rstrip handles English, whose path is just "Paul/hatecheck".
        path = f"Paul/hatecheck-{affix}".rstrip("-")
        dataset = datasets.load_dataset(path=path, revision=_REVISION_DICT[lcode])
        dataset = _transform(dataset)
        n_rows = dataset["test"].num_rows
        dataset["test"] = dataset["test"].add_column("lang", [lcode] * n_rows)
        out_path = Path("..") / lcode / "test.jsonl"
        dataset["test"].to_json(out_path)
        ds[lcode] = dataset
    return ds


if __name__ == "__main__":
    dataset = make_dataset()
    avg_char = 0
    for lang in _HF_AFFIX:
        avg_char += np.mean([len(x["text"]) for x in dataset[lang]["test"]])
    print(f"avg char: {avg_char / len(_HF_AFFIX)}")
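The rename-and-prune step in `_transform` can be illustrated on a plain dict, without the `datasets` dependency. The column names `test_case`, `label_gold`, and `functionality` come from the script above; `case_id` and `target_ident` are assumed extra upstream columns, used here only for illustration:

```python
# Plain-dict sketch of the rename-and-prune logic in _transform.
RENAME = {"test_case": "text", "label_gold": "is_hateful"}
KEEP = {"text", "is_hateful", "functionality"}

def transform_row(row):
    renamed = {RENAME.get(k, k): v for k, v in row.items()}
    return {k: v for k, v in renamed.items() if k in KEEP}

# Hypothetical upstream row; only three fields survive the transform.
row = {
    "case_id": 1,                         # dropped
    "test_case": "example sentence",      # renamed to "text"
    "label_gold": "non-hateful",          # renamed to "is_hateful"
    "functionality": "ident_neutral_nh",  # kept as-is
    "target_ident": "women",              # dropped
}
print(transform_row(row))
```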
spa/test.jsonl ADDED