Datasets preview (distributional cost KL and W per manipulation method at each target DI):

| Index (int64) | DI (float64) | KL (float64) | W (float64) | method (string) |
|---|---|---|---|---|
| 0 | 0.447046 | 0 | 0 | Grad_reg_num |
| 1 | 0.5 | null | 0.009512 | Grad_reg_num |
| 2 | 0.6 | null | 0.027283 | Grad_reg_num |
| 3 | 0.7 | null | 0.04488 | Grad_reg_num |
| 4 | 0.8 | null | 0.062406 | Grad_reg_num |
| 5 | 0.9 | null | 0.079546 | Grad_reg_num |
| 0 | 0.447046 | 0 | 0 | Replace_SF |
| 1 | 0.5 | null | 0.005 | Replace_SF |
| 2 | 0.6 | null | 0.0144 | Replace_SF |
| 3 | 0.7 | null | 0.0238 | Replace_SF |
| 4 | 0.8 | null | 0.0332 | Replace_SF |
| 5 | 0.9 | null | 0.0426 | Replace_SF |
| 0 | 0.447046 | 0 | 0 | Gems_num |
| 1 | 0.5 | 0.000663 | 0.027303 | Gems_num |
| 2 | 0.6 | 0.005311 | 0.078094 | Gems_num |
| 3 | 0.7 | 0.01408 | 0.127861 | Gems_num |
| 4 | 0.8 | 0.02681 | 0.176615 | Gems_num |
| 5 | 0.9 | 0.043645 | 0.224381 | Gems_num |
| 0 | 0.447046 | 0 | 0 | Gems_mean |
| 1 | 0.5 | 0.007751 | 0.114897 | Gems_mean |
| 2 | 0.6 | 0.04791 | 0.311217 | Gems_mean |
| 3 | 0.7 | 0.103794 | 0.484415 | Gems_mean |
| 4 | 0.8 | 0.16628 | 0.638368 | Gems_mean |
| 5 | 0.9 | 0.231104 | 0.776115 | Gems_mean |
| 0 | 0.447046 | 0 | 0 | Grad_reg_mean |
| 1 | 0.5 | null | 0.033928 | Grad_reg_mean |
| 2 | 0.6 | null | 0.09356 | Grad_reg_mean |
| 3 | 0.7 | null | 0.148606 | Grad_reg_mean |
| 4 | 0.8 | null | 0.205956 | Grad_reg_mean |
| 5 | 0.9 | null | 0.259781 | Grad_reg_mean |
| 0 | 0.447046 | 0 | 0 | Grad_la_mean |
| 1 | 0.5 | null | 0.03884 | Grad_la_mean |
| 2 | 0.6 | null | 0.104847 | Grad_la_mean |
| 3 | 0.7 | null | 0.17247 | Grad_la_mean |
| 4 | 0.8 | null | 0.234047 | Grad_la_mean |
| 5 | 0.9 | null | 0.301472 | Grad_la_mean |
| 0 | 0.447046 | 0 | 0 | Grad_la_num |
| 1 | 0.5 | null | 0.010503 | Grad_la_num |
| 2 | 0.6 | null | 0.031311 | Grad_la_num |
| 3 | 0.7 | null | 0.057952 | Grad_la_num |
| 4 | 0.8 | null | 0.07974 | Grad_la_num |
| 5 | 0.9 | null | 0.103329 | Grad_la_num |
| 0 | 0.447046 | 0 | 0 | Sampling_X_cost |
| 1 | 0.5 | 0.002102 | 0.005752 | Sampling_X_cost |
| 2 | 0.6 | 0.007093 | 0.018475 | Sampling_X_cost |
| 3 | 0.7 | 0.01253 | 0.032839 | Sampling_X_cost |
| 4 | 0.8 | 0.018268 | 0.047821 | Sampling_X_cost |
| 5 | 0.9 | 0.023218 | 0.063181 | Sampling_X_cost |
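Such a table can be loaded into pandas to compare the distributional cost W each method pays to reach a given DI target. A minimal sketch, using a few rows copied from the table above (column and method names as they appear there):

```python
import pandas as pd

# A few rows from the table above: Wasserstein cost (W) per method at DI target 0.8.
df = pd.DataFrame({
    "DI": [0.8, 0.8, 0.8, 0.8],
    "W": [0.062406, 0.0332, 0.176615, 0.638368],
    "method": ["Grad_reg_num", "Replace_SF", "Gems_num", "Gems_mean"],
})

# Cheapest manipulation (smallest distribution shift) reaching DI = 0.8:
cheapest = df.loc[df["W"].idxmin(), "method"]
print(cheapest)  # Replace_SF
```

A smaller W means the manipulated sample stays closer to the original distribution, making the manipulation harder to detect.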
# Exposing the Illusion of Fairness (EIF): Manipulation Results
We consider use cases where the auditee has developed a model with fairness issues. It tries to hide the problem by picking a subsample that optimizes the fairness metric the auditor will compute. From the supervisory authority's perspective, however, submitting a non-representative sample constitutes a deceptive attempt by the auditee to obstruct or distort the assessment.
We present the original empirical distribution (through the original dataset) together with the manipulated distributions. From these manipulated distributions, we can study how to effectively detect each manipulation paradigm.
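The subsampling manipulation can be illustrated with a toy sketch on synthetic data. The min/max form of Disparate Impact and the naive "drop negative-outcome rows from the disadvantaged group" strategy are illustrative assumptions, not the paper's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.integers(0, 2, 10_000)                             # sensitive attribute
Y_hat = rng.random(10_000) < np.where(S == 0, 0.2, 0.5)    # biased predictions

def di(y, s):
    """Disparate Impact as the min/max ratio of positive-outcome rates."""
    r0, r1 = y[s == 0].mean(), y[s == 1].mean()
    return min(r0, r1) / max(r0, r1)

di_before = di(Y_hat, S)

# Naive manipulation: drop about half of the negative-outcome rows
# from the disadvantaged group, inflating its positive rate.
keep = ~((S == 0) & (~Y_hat) & (rng.random(10_000) < 0.5))
di_after = di(Y_hat[keep], S[keep])

print(round(di_before, 2), "->", round(di_after, 2))
```

The subsample reports a substantially higher DI even though the model is unchanged, which is exactly the illusion the audit must detect.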
## Overview
This dataset accompanies the paper:
Exposing the Illusion of Fairness (EIF): Auditing Vulnerabilities to Distributional Manipulation Attacks
https://arxiv.org/abs/2507.20708
HF Models repository: https://huggingface.co/ValentinLAFARGUE/EIF-biased-classifiers
Code repository: https://github.com/ValentinLafargue/Inspection
It contains preprocessed tabular datasets, model outputs, and fairness-related quantities used to analyze bias and mitigation strategies. For each dataset, the original distribution is to be compared with the manipulated ones.
## Dataset Description
The data are organized by benchmark dataset, each folder containing multiple NumPy arrays:
- Original data
- Model predictions and thresholds
- Fairness metrics (Disparate Impact)
- Gradients and mitigation-related quantities
## Structure
```
EIF-dataset/
├── ADULT/
├── ASC_INC/
├── ASC_MOB/
├── ASC_EMP/
├── ASC_TRA/
├── ASC_PUC/
├── BAF/
└── CelebA/
```
Each folder contains .npy files such as:
- `original.npy` → original test dataset features (including S and Ŷ)
- `DI.npy` → Disparate Impact values
- `acc.npy` → model accuracy
- `threshold.npy` → logits decision threshold
- `Miti_mod_SF.npy` → distribution manipulation through the Replace (S, Ŷ) method
- `Miti_sampling_X.npy` → distribution manipulation through the Matching (X, S, Ŷ) method
- `Miti_Gems_mean.npy` → distribution manipulation through Gems (entropic projection), proportional variant
- `Miti_Gems_number.npy` → distribution manipulation through Gems (entropic projection), balanced variant
- `Grad_reg_me.npy` → distribution manipulation through the Wasserstein gradient method, proportional variant
- `Grad_reg_nu.npy` → distribution manipulation through the Wasserstein gradient method, balanced variant
- `Grad_la_me.npy` → distribution manipulation through the Wasserstein gradient method with 1D projection, proportional variant
- `Grad_la_nu.npy` → distribution manipulation through the Wasserstein gradient method with 1D projection, balanced variant
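A manipulated sample can be compared to the original one feature by feature with a 1-D Wasserstein distance, the cost reported in the W column above. A minimal numpy-only sketch with synthetic arrays standing in for `original.npy` and one of the `Miti_*.npy` files (shapes and the per-column comparison are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(size=(2_000, 3))                   # stand-in for original.npy
manipulated = original + rng.normal(0.2, 0.05, size=3)   # stand-in for a Miti_*.npy array

def w1(u, v):
    """1-D Wasserstein-1 distance between two equal-size empirical samples."""
    return np.abs(np.sort(u) - np.sort(v)).mean()

# Per-feature distance between the original and manipulated empirical distributions
dists = [w1(original[:, j], manipulated[:, j]) for j in range(original.shape[1])]
print([round(d, 2) for d in dists])
```

For equal-size samples the Wasserstein-1 distance reduces to the mean absolute difference of the sorted values, which is why no optimal-transport solver is needed here.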
## Datasets, Sensitive Attributes, and Disparate Impact
The Disparate Impact is the ratio of positive-outcome rates between groups. The groups are defined by a so-called sensitive attribute, referred to in legal texts as a protected attribute.
| Dataset | Adult[1] | INC[2] | TRA[2] | MOB[2] | BAF[3] | EMP[2] | PUC[2] |
|---|---|---|---|---|---|---|---|
| Sensitive Attribute (S) | Sex | Sex | Sex | Age | Age | Disability | Disability |
| Disparate Impact (DI) | 0.30 | 0.67 | 0.69 | 0.45 | 0.35 | 0.30 | 0.32 |
[1]: Becker, B. and Kohavi, R. (1996). Adult. UCI Machine Learning Repository. DOI: https://doi.org/10.24432/C5XW20. https://www.kaggle.com/datasets/uciml/adult-census-income
[2]: Ding, F., Hardt, M., Miller, J., and Schmidt, L. (2021). Retiring Adult: New datasets for fair machine learning. In Advances in Neural Information Processing Systems. https://github.com/socialfoundations/folktables
[3]: Jesus, S., Pombal, J., Alves, D., Cruz, A., Saleiro, P., Ribeiro, R. P., Gama, J., and Bizarro, P. (2022). Turning the tables: Biased, imbalanced, dynamic tabular datasets for ML evaluation. In Advances in Neural Information Processing Systems. https://www.kaggle.com/datasets/sgpjesus/bank-account-fraud-dataset-neurips-2022
## Intended Use
We propose a methodology that simulates how stakeholders could try to evade an audit of the Disparate Impact ratio without incurring any liability. The auditee aims to construct a dataset whose distribution is optimally close to that of the original data, while ensuring the fairness measure stays above the threshold required by regulation. We provide the results of the different manipulation strategies, which we later detect using distribution tests.
- Analyzing auditing frameworks and their vulnerabilities
- Studying distribution manipulation strategies
- Reproducing EIF experiments
- Benchmarking distribution manipulation strategies
- Benchmarking manipulation-proof strategies through statistical tests
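Detection via statistical tests can be sketched with a two-sample Kolmogorov-Smirnov comparison between the original and a manipulated sample. This is a minimal numpy-only illustration on synthetic data; the tests actually used in the EIF paper may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
original = rng.normal(0.0, 1.0, size=5_000)      # stand-in for a feature of the original sample
manipulated = rng.normal(0.3, 1.0, size=5_000)   # shifted copy, mimicking a manipulated sample

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: sup distance between empirical CDFs."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

stat = ks_statistic(original, manipulated)
# Asymptotic two-sample critical value at significance level alpha = 0.01
n, m = len(original), len(manipulated)
critical = np.sqrt(-np.log(0.01 / 2) / 2) * np.sqrt((n + m) / (n * m))
print(stat > critical)  # manipulation flagged when the statistic exceeds the threshold
```

In practice `scipy.stats.ks_2samp` provides the same test with an exact p-value; the manual version above only makes the empirical-CDF comparison explicit.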
## Limitations
- Not intended for production use
- Contains synthetic and manipulated data
- Original classification performance is not optimized
## Ethical Considerations
This work studies how malicious actors could manipulate audit datasets to appear compliant with fairness metrics such as Disparate Impact. Our objective is to expose these vulnerabilities in order to strengthen auditing procedures and regulatory oversight. By analyzing both manipulation strategies and statistical detection methods, we aim to support the development of more robust fairness auditing frameworks.
This dataset should be used for research and analysis only.
## Usage
For a single file:
```python
from huggingface_hub import hf_hub_download
import numpy as np

path = hf_hub_download(
    repo_id="ValentinLAFARGUE/EIF-Manipulated-distributions",
    filename="ASC_INC/DI.npy",
    repo_type="dataset",
)
arr = np.load(path)
```
For all files:
```python
from huggingface_hub import hf_hub_download, list_repo_files
import numpy as np

files = list_repo_files("ValentinLAFARGUE/EIF-Manipulated-distributions", repo_type="dataset")

dic_arr_results = {}
for file in files:
    if file.endswith(".npy"):
        folder, subfile_name = file.split("/")
        path = hf_hub_download(
            repo_id="ValentinLAFARGUE/EIF-Manipulated-distributions",
            filename=file,
            repo_type="dataset",
        )
        # Store arrays as dic_arr_results[folder][file stem without ".npy"]
        arr = np.load(path)
        dic_arr_results.setdefault(folder, {})[subfile_name[:-4]] = arr
        print(f"successfully loaded {file}")
```
## Citation
```bibtex
@misc{lafargue2026exposingillusionfairnessauditing,
  title={Exposing the Illusion of Fairness: Auditing Vulnerabilities to Distributional Manipulation Attacks},
  author={Valentin Lafargue and Adriana Laurindo Monteiro and Emmanuelle Claeys and Laurent Risser and Jean-Michel Loubes},
  year={2026},
  eprint={2507.20708},
  archivePrefix={arXiv},
  url={https://arxiv.org/abs/2507.20708},
}
```