bsakash committed
Commit e4384e6 · verified · 1 Parent(s): 469c6c2

Update README.md

Files changed (1)
  1. README.md +57 -51
README.md CHANGED
@@ -1,60 +1,66 @@
- license: cc-by-nc-sa-4.0
- ---
- task_categories:
- - text-classification
- language:
- - en
- tags:
- - legal
- - bias-detection
- - robustness
- size_categories:
- - 10K<n<100K
- ---
- pretty_name: RobustBiasBench
- description: >
-   RobustBiasBench consists of 18,404 U.S. policy text excerpts manually annotated
-   for bias type and normative framing. The dataset is designed to evaluate the
-   robustness of language models in fairness-critical tasks, with both clean and
-   perturbed versions of the data available.
  ---
- dataset_format:
-   file_type: csv
-   fields:
-     - id: "Unique numeric ID for each policy excerpt"
-     - date: "Year of the policy (extracted from the source document)"
-     - bias_type: "Original fine-grained label (e.g., age, gender, citizenship)"
-     - normative_framing: "Bias framing: implicit, explicit, or no_bias"
-     - source: "URL or citation of the original policy document"
-     - policy: "Text excerpt of the policy (typically 1–3 sentences)"
-     - bias_type_merged: "Mapped class: one of group_1, group_2, or no_bias"
  ---
- label_schema:
-   bias_type:
-     group_1: ["economic", "education", "political", "citizenship", "criminal_justice"]
-     group_2: ["age", "gender", "race/culture", "religion", "disability"]
-     no_bias: ["procedural", "neutral"]
-   bias_type_merged:
-     values: ["group_1", "group_2", "no_bias"]
-   normative_framing:
-     values: ["explicit", "implicit", "no_bias"]
  ---
- class_distribution:
-   no_bias: 6017
-   group_1: 6246
-   group_2: 6141
  ---
- license_details: >
-   The dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
  ---
- citation: >
-   @inproceedings{bonagiri2025robustbiasbench,
-     title={RobustBiasBench: Stress-Testing LLMs for Bias Detection with Textual Perturbations},
-     author={Bonagiri, Akash and [Co-authors]},
-     booktitle={NeurIPS Dataset Track},
-     year={2025}
-   }
+
+ # RobustBiasBench Dataset Description
+
+ This document provides an overview of the features and labels included in the **RobustBiasBench** dataset, which consists of 18,404 U.S. policy excerpts annotated for bias type and normative framing.
+
  ---
+
+ ## 📄 Dataset Format
+
+ The dataset is stored in CSV format and contains the following columns:
+
+ | Column Name | Description |
+ |---------------------|-------------|
+ | `id` | Unique numeric ID for each policy excerpt |
+ | `date` | Year of the policy (extracted from the source document) |
+ | `bias_type` | Fine-grained original bias label assigned by annotators (e.g., `age`, `gender`, `citizenship`) |
+ | `normative_framing` | Whether the bias is presented implicitly or explicitly (`implicit`, `explicit`, or `no_bias`) |
+ | `source` | URL or citation source for the original policy document |
+ | `policy` | Text excerpt of the policy (typically 1–3 sentences) |
+ | `bias_type_merged` | Mapped bias class used for evaluation: one of `group_1`, `group_2`, or `no_bias` |
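As a quick sanity check of the column layout above, the CSV can be parsed with Python's standard library. This is a minimal sketch: the sample rows and URLs are hypothetical; only the seven column names come from the dataset card.

```python
import csv
import io

# Hypothetical sample rows in the documented seven-column layout;
# only the column names are taken from the dataset card.
SAMPLE_CSV = """id,date,bias_type,normative_framing,source,policy,bias_type_merged
1,2019,age,explicit,https://example.gov/p1,"Applicants over 65 are not eligible.",group_2
2,2021,citizenship,implicit,https://example.gov/p2,"Must be a citizen to apply.",group_1
3,2020,procedural,no_bias,https://example.gov/p3,"Forms are processed within 30 days.",no_bias
"""

EXPECTED_COLUMNS = [
    "id", "date", "bias_type", "normative_framing",
    "source", "policy", "bias_type_merged",
]

reader = csv.DictReader(io.StringIO(SAMPLE_CSV))
rows = list(reader)

# Verify the header matches the documented schema.
assert reader.fieldnames == EXPECTED_COLUMNS
for row in rows:
    print(row["id"], row["bias_type"], "->", row["bias_type_merged"])
```

The same check against the real file only needs `open("path/to/file.csv")` in place of the `StringIO` wrapper.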
+
  ---
+
+ ## 🏷️ Label Description
+
+ ### `bias_type`
+ Original annotation capturing specific bias domains:
+ - `age`, `gender`, `race/culture`, `religion`, `disability` → Group 2 (Demographic Bias)
+ - `economic`, `education`, `political`, `citizenship`, `criminal_justice` → Group 1 (Systemic Bias)
+ - `no_bias` → Procedural or neutral statements
+
+ ### `bias_type_merged`
+ Mapped classes used for modeling:
+ - `group_1`: Systemic/institutional bias
+ - `group_2`: Demographic/identity-based bias
+ - `no_bias`: Factual or operational content
+
+ ### `normative_framing`
+ Captures how bias is framed:
+ - `explicit`: Bias is directly stated (e.g., "women are not eligible")
+ - `implicit`: Bias is implied through structure or condition (e.g., "must be a citizen to apply")
+ - `no_bias`: Used only when `bias_type = no_bias`
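The grouping above can be expressed as a small lookup. The function name `merge_bias_type` is illustrative, not part of the dataset's tooling; the label sets are taken directly from the schema.

```python
# Fine-grained labels mapped to merged classes, per the label schema above.
GROUP_1 = {"economic", "education", "political", "citizenship", "criminal_justice"}
GROUP_2 = {"age", "gender", "race/culture", "religion", "disability"}

def merge_bias_type(bias_type: str) -> str:
    """Return the bias_type_merged class for a fine-grained label."""
    if bias_type in GROUP_1:
        return "group_1"
    if bias_type in GROUP_2:
        return "group_2"
    return "no_bias"  # procedural / neutral labels

print(merge_bias_type("gender"))       # group_2
print(merge_bias_type("citizenship"))  # group_1
print(merge_bias_type("procedural"))   # no_bias
```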
+
  ---
+
+ ## 📊 Class Distribution
+
+ The dataset includes the following number of annotated examples:
+
+ - **No Bias**: 6,017 examples
+ - **Systemic Bias (group_1)**: 6,246 examples
+ - **Demographic Bias (group_2)**: 6,141 examples
+
+ This near-even balance across the three classes (18,404 examples in total) reduces the risk of a model overfitting to any single category and supports learning to differentiate nuanced cases of policy bias.
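The counts above sum to 18,404; their relative shares can be derived directly. The percentages below are computed here, not reported by the dataset authors:

```python
# Class counts as reported in the dataset card.
counts = {"no_bias": 6017, "group_1": 6246, "group_2": 6141}
total = sum(counts.values())
print("total:", total)  # total: 18404

for label, n in counts.items():
    print(f"{label}: {n} ({100 * n / total:.1f}%)")
# no_bias: 6017 (32.7%)
# group_1: 6246 (33.9%)
# group_2: 6141 (33.4%)
```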
+
  ---
+
+ ## 🔒 License
+
+ This dataset is released under the **CC BY-NC-SA 4.0 License** and is intended for academic use in fairness and robustness research.
+
  ---
+
+ ## 📬 Contact
+
+ For questions or contributions, please contact [yourname@yourinstitution.edu].