---
license: mit
tags:
- fairness
- classification
metrics:
- accuracy
---

# EIF Biased Classifiers (Multi-Dataset Benchmark)

## πŸ“Œ Overview

This repository contains a collection of neural network models trained on seven tabular datasets for the study:

**Exposing the Illusion of Fairness (EIF)**
https://arxiv.org/abs/2507.20708

Codebase:
https://github.com/ValentinLafargue/Inspection

Each model corresponds to a specific dataset and is designed to analyze fairness properties rather than to maximize predictive performance.

## 🧠 Model Description

All models are **multilayer perceptrons (MLPs)** trained on tabular data.

- Fully connected neural networks
- Hidden layers: configurable (`n_loop`, `n_nodes`)
- Activation: ReLU (optional)
- Output: sigmoid
- Prediction: $\hat{Y} \in [0,1]$

+ ## πŸ“Š Datasets, Sensitive Attributes, and Disparate Impact
35
+
36
+ | Dataset | Adult[1] | INC[2] | TRA[2] | MOB[2] | BAF[3] | EMP[2] | PUC[2] |
37
+ |--------|------|-----|-----|-----|-----|-----|-----|
38
+ | **Sensitive Attribute (S)** | Sex | Sex | Sex | Age | Age | Disability | Disability |
39
+ | **Disparate Impact (DI)** | 0.30 | 0.67 | 0.69 | 0.45 | 0.35 | 0.30 | 0.32 |
40
+ ```
41
+ [1]: Becker, B. and Kohavi, R. (1996). Adult. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5XW20.306,
42
+ https://www.kaggle.com/datasets/uciml/adult-census-income.
43
+
44
+ [2]: Ding, F., Hardt, M., Miller, J., and Schmidt, L. (2021). Retiring adult: New datasets for fair machine learning. In Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W., editors, Advances in Neural Information Processing Systems.313,
45
+ https://github.com/socialfoundations/folktables.
46
+
47
+ [3]: Jesus, S., Pombal, J., Alves, D., Cruz, A., Saleiro, P., Ribeiro, R. P., Gama, J., and Bizarro, P. (2022). Turning the tables: Biased, imbalanced, dynamic tabular datasets for ml evaluation. In Advances in Neural Information Processing Systems,
48
+ https://www.kaggle.com/datasets/sgpjesus/bank-account-fraud-dataset-neurips-2022.
49
+ ```
50
+
51
+ ### Notes
52
+ - Adult dataset: 5,000 test samples
53
+ - Other datasets: 20,000 test samples
54
+ - Sensitive attributes are used for fairness evaluation
55
+
56
+ ## πŸ“ˆ Predictive Performance (Accuracy)
57
+
58
+ | Dataset | Accuracy |
59
+ |--------|----------|
60
+ | Adult Census Income | 84% |
61
+ | Folktables Income (INC) | 88% |
62
+ | Folktables Mobility (MOB) | 84% |
63
+ | Folktables Employment (EMP) | 77% |
64
+ | Folktables Travel Time (TRA) | 72% |
65
+ | Folktables Public Coverage (PUC) | 73% |
66
+ | Bank Account Fraud (BAF) | 98% |
67
+
68
+ **Note:** High performance on BAF is due to strong class imbalance.
69
+ Accuracy was **not the main objective** of this study.
70
+
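The BAF caveat is easy to illustrate: on a heavily imbalanced test set, a degenerate classifier that always predicts the majority class already reaches very high accuracy. A toy demonstration (hypothetical 2% positive rate, not the actual BAF data):

```python
# Toy label set mimicking heavy class imbalance: 2% positives
labels = [1] * 20 + [0] * 980

# A "classifier" that always predicts the majority class
predictions = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"{accuracy:.0%}")  # 98% accuracy without learning anything
```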
## 🎯 Intended Use

These models are intended for:

- Fairness analysis
- Studying disparate impact and bias
- Reproducing the results of the EIF paper
- Benchmarking fairness-aware methods

## ⚠️ Limitations and Non-Intended Use

- Not designed for production use
- Not optimized for predictive performance
- Should not be used for real-world decision-making

These models intentionally expose biases present in standard ML pipelines.

## βš–οΈ Ethical Considerations

This work highlights:

- The presence of bias in machine learning models
- The limitations of fairness metrics

The models should be interpreted as **analytical tools**, not as fair systems.

## πŸ“¦ Repository Structure

Each dataset corresponds to a subfolder:

```
EIF-biased-classifier/
β”œβ”€β”€ ASC_ADULT_model/
β”œβ”€β”€ ASC_INC_model/
β”œβ”€β”€ ASC_MOB_model/
β”œβ”€β”€ ASC_EMP_model/
β”œβ”€β”€ ASC_TRA_model/
β”œβ”€β”€ ASC_PUC_model/
└── ASC_BAF_model/
```

Each folder contains:
- `config.json`
- `model.safetensors`

## πŸš€ Usage

```python
# `Network` is the model class from the project codebase
# (https://github.com/ValentinLafargue/Inspection).
model = Network.from_pretrained(
    "ValentinLAFARGUE/EIF-biased-classifier",
    subfolder="ASC_INC_model",
)
```

## πŸ“š Citation

```bibtex
@misc{lafargue2026exposingillusionfairnessauditing,
  title={Exposing the Illusion of Fairness: Auditing Vulnerabilities to Distributional Manipulation Attacks},
  author={Valentin Lafargue and Adriana Laurindo Monteiro and Emmanuelle Claeys and Laurent Risser and Jean-Michel Loubes},
  year={2026},
  eprint={2507.20708},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2507.20708},
}
```

## πŸ” Additional Notes

- Models are intentionally simple to isolate fairness behavior
- Results depend on preprocessing and sampling choices
- The focus is on reproducibility