jdrechsel committed 92e88c1 · verified · 1 parent: d35391b

Update README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -110,29 +110,29 @@ language:
 
 
 
-# GRADIEND Race Data
+# GRADIEND Religion Data
 
 <!-- Provide a quick summary of the dataset. -->
 
-This dataset consists of templated sentences with the masked word being sensitive to race, e.g., *African*.
+This dataset consists of templated sentences with the masked word being sensitive to religion, e.g., *Jewish*.
 ```
 
 ```
 
-See [GENTER](https://huggingface.co/datasets/aieng-lab/genter) and [GRADIEND Religion Data](https://huggingface.co/datasets/aieng-lab/gradiend_religion_data) for similar datasets.
+See [GENTER](https://huggingface.co/datasets/aieng-lab/genter) and [GRADIEND Race Data](https://huggingface.co/datasets/aieng-lab/gradiend_race_data) for similar datasets.
 
 ## Usage
 
-The dataset uses **one subset per class**. Subset names are class identifiers: `white`, `black`, `asian`. Each subset has columns `masked`, `split`, and one column per class (e.g. `white`, `black`, `asian`) giving the token for that class in that row.
+The dataset uses **one subset per class**. Subset names are class identifiers: `jewish`, `christian`, `muslim`. Each subset has columns `masked`, `split`, and one column per class (e.g. `christian`, `jewish`, `muslim`) giving the token for that class in that row.
 
 ```python
 from datasets import load_dataset
 
-# Load one subset (one class view), e.g. "white"
-ds = load_dataset("aieng-lab/gradiend_race_data", "white", split="train")
-# ds has columns: masked, split, white, black, asian
-label = ds['white']
-alternative_target = ds['black']  # or 'asian'
+# Load one subset (one class view), e.g. "christian"
+ds = load_dataset("aieng-lab/gradiend_religion_data", "christian", split="train")
+# ds has columns: masked, split, christian, jewish, muslim
+label = ds['christian']
+alternative_target = ds['jewish']  # or 'muslim'
 ```
 `split` can be either `train`, `val`, `test`, or `all`.
 
@@ -142,9 +142,9 @@ alternative_target = ds['black'] # or 'asian'
 ### Dataset Description
 
 <!-- Provide a longer summary of what this dataset is. -->
-This dataset is a filtered version of [Wikipedia-10](https://drive.google.com/file/d/1boQTn44RnHdxWeUKQAlRgQ7xrlQ_Glwo/view?usp=sharing) containing only sentences that contain a race bias-sensitive word of the `source_id` race. We used the same bias-sensitive words as defined by [Meade et al. (2021)](https://arxiv.org/abs/2110.08527) ([bias attribute words](https://github.com/McGill-NLP/bias-bench/blob/main/data/bias_attribute_words.json)).
+This dataset is a filtered version of [Wikipedia-10](https://drive.google.com/file/d/1boQTn44RnHdxWeUKQAlRgQ7xrlQ_Glwo/view?usp=sharing) containing only sentences that contain a religion bias-sensitive word of the `source_id` religion. We used the same bias-sensitive words as defined by [Meade et al. (2021)](https://arxiv.org/abs/2110.08527) ([bias attribute words](https://github.com/McGill-NLP/bias-bench/blob/main/data/bias_attribute_words.json)).
 
-It is stored in per-class form: each subset (e.g. `white`) corresponds to one source class. Rows are identified by (masked, split). For each other class, the corresponding column holds the target token when that class is the counterfactual target (e.g. column `black` in subset `white` is the token used when the target class is `black`).
+It is stored in per-class form: each subset (e.g. `christian`) corresponds to one source class. Rows are identified by (masked, split). For each other class, the corresponding column holds the target token when that class is the counterfactual target (e.g. column `jewish` in subset `christian` is the token used when the target class is `jewish`).
 
 ### Dataset Sources
 
@@ -161,7 +161,7 @@ It is stored in per-class form: each subset (e.g. `white`) corresponds to one so
 
 - `text`: the original entry of Wikipedia-10
 - `masked`: the masked version of `text` (i.e., contains a `[MASK]` at every occurrence of the subset column)
-- `white`/`asian`/`black`: the mask target words for the white/asian/black races. Note that the column equal to the subset id holds the original value of the `[MASK]` token.
+- `christian`/`jewish`/`muslim`: the mask target words for the christian/jewish/muslim religions. Note that the column equal to the subset id holds the original value of the `[MASK]` token.
 
 
 ## Dataset Creation
@@ -170,7 +170,7 @@ It is stored in per-class form: each subset (e.g. `white`) corresponds to one so
 
 <!-- Motivation for the creation of this dataset. -->
 
-For the training of race-bias [GRADIEND models](https://github.com/aieng-lab/gradiend), a diverse dataset is required to assess model gradients relevant to bias-sensitive information.
+For the training of religion-bias [GRADIEND models](https://github.com/aieng-lab/gradiend), a diverse dataset is required to assess model gradients relevant to bias-sensitive information.
 
 ### Source Data
 
@@ -181,7 +181,7 @@ The dataset is derived from [Wikipedia-10](https://drive.google.com/file/d/1boQT
 
 ### Limitations
 
-Note that the splitting is performed entirely at random. Thus, the same masked text might occur in other splits (in combination with other target words). The same limitation holds across different races.
+Note that the splitting is performed entirely at random. Thus, the same masked text might occur in other splits (in combination with other target words). The same limitation holds across different religions.
 
 
 ## Citation
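The per-class layout the README describes can be exercised without downloading anything: a counterfactual pair is obtained by filling the `[MASK]` in a row's `masked` template with the tokens from two different class columns. The sketch below uses an invented row matching the documented schema (the real rows come from `aieng-lab/gradiend_religion_data`); `fill` and `counterfactual_pair` are hypothetical helper names, not part of the dataset or the `datasets` API.

```python
# Illustrative row matching the documented schema: a masked template, a split,
# and one token column per class. This row is invented, not taken from the dataset.
rows = [
    {
        "masked": "The [MASK] community gathered for the festival.",
        "split": "train",
        "christian": "Christian",
        "jewish": "Jewish",
        "muslim": "Muslim",
    },
]

def fill(row, cls):
    # Replace every [MASK] occurrence with the token for the given class.
    return row["masked"].replace("[MASK]", row[cls])

def counterfactual_pair(row, source, target):
    # Sentence for the source class and its counterfactual for the target class.
    return fill(row, source), fill(row, target)

src, tgt = counterfactual_pair(rows[0], "christian", "jewish")
print(src)  # The Christian community gathered for the festival.
print(tgt)  # The Jewish community gathered for the festival.
```

With the real data, the same helpers would apply to each row of a loaded subset, e.g. `load_dataset("aieng-lab/gradiend_religion_data", "christian", split="train")`.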