xuzihao112 and mvladimirova committed
Commit ec1d1f2 · 0 Parent(s):

Duplicate from criteo/FairJob

Co-authored-by: Mariia Vladimirova <mvladimirova@users.noreply.huggingface.co>
Files changed (3):
  1. .gitattributes +55 -0
  2. README.md +117 -0
  3. fairjob.csv.gz +3 -0
.gitattributes ADDED
@@ -0,0 +1,55 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,117 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - tabular-classification
+ pretty_name: fairjob
+ size_categories:
+ - 1M<n<10M
+ ---
+ # FairJob: A Real-World Dataset for Fairness in Online Systems
+
+ ## Summary
+
+ This dataset is released by Criteo to foster research and innovation on Fairness in Advertising and AI systems in general.
+ See also [Criteo pledge for Fairness in Advertising](https://fr.linkedin.com/posts/diarmuid-gill_advertisingfairness-activity-6945003669964660736-_7Mu).
+
+ The dataset is intended for training click-prediction models and evaluating how much their predictions are biased between different gender groups.
+ The associated paper is available at [Vladimirova et al. 2024](https://arxiv.org/pdf/2407.03059).
+
+ ## License
+
+ The data is released under the [CC-BY-NC-SA](https://creativecommons.org/licenses/by-nc-sa/4.0/) 4.0 license.
+ You are free to Share and Adapt this data provided that you respect the Attribution, NonCommercial and ShareAlike conditions.
+ Please read the full license carefully before use.
+
+ ## Data description
+ The dataset contains pseudonymized user context and publisher features collected from a job-targeting campaign run for 5 months by the AdTech company Criteo. Each row represents a product that was shown to a user. Each user has an impression session in which they can see several products at the same time. Each product can be clicked or not clicked by the user. The dataset consists of 1,072,226 rows and 55 columns.
+
+ - features
+   - `user_id` is a unique identifier assigned to each user. This identifier has been anonymized and does not contain any information related to the real user.
+   - `product_id` is a unique identifier assigned to each product, i.e. job offer.
+   - `impression_id` is a unique identifier assigned to each impression, i.e. an online session that can show several products at the same time.
+   - `cat0` to `cat5` are anonymized categorical user features.
+   - `cat6` to `cat12` are anonymized categorical product features.
+   - `num13` to `num47` are anonymized numerical user features.
+ - labels
+   - `protected_attribute` is a binary feature that encodes a gender proxy for the user (0 for female, 1 for male). A detailed description of its meaning can be found below.
+   - `senior` is a binary feature that describes the seniority of the job position (0 for an assistant role, 1 for a managerial role). It was created during the data processing step from the product title: if the title contains words describing a managerial role (e.g. 'president', 'ceo', and others), it is set to 1, otherwise to 0.
+   - `rank` is a numerical feature that corresponds to the positional rank of the product on the display for a given `impression_id`. The position on the display usually biases clicks: a lower rank means a higher position of the product on the display.
+   - `displayrandom` is a binary feature that equals 1 if the display position on the banner of the products associated with the same `impression_id` was randomized. The click-rank metric should be computed on `displayrandom` = 1 to avoid positional bias.
+   - `click` is a binary feature that equals 1 if the product `product_id` in the impression `impression_id` was clicked by the user `user_id`.
+
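The column layout above can be sketched in Python. The column names follow the list in this README; the grouping into ID, feature, and label sets is our own reading of it, not an official schema:

```python
# Column groups as described in the FairJob README; the grouping itself is
# our interpretation of the list above, not an official schema.
ID_COLS = ["user_id", "product_id", "impression_id"]
CAT_USER_COLS = [f"cat{i}" for i in range(0, 6)]      # cat0..cat5: categorical user features
CAT_PRODUCT_COLS = [f"cat{i}" for i in range(6, 13)]  # cat6..cat12: categorical product features
NUM_USER_COLS = [f"num{i}" for i in range(13, 48)]    # num13..num47: numerical user features
LABEL_COLS = ["protected_attribute", "senior", "rank", "displayrandom", "click"]
```

Lists like these can be handed to a CSV reader's column-selection option to load only the subset of columns an experiment needs.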
+
+ ### Data statistics
+
+ | dimension | average |
+ |---------------------|---------|
+ | click | 0.077 |
+ | protected attribute | 0.500 |
+ | senior | 0.704 |
+
+ ### Protected attribute
+
+ As Criteo does not have access to user demographics, we report a gender proxy as the protected attribute.
+ This proxy is reported as binary for simplicity, yet we acknowledge that gender is not necessarily binary.
+
+ The value of the proxy is computed as the majority of the gender attributes of products seen in the user's timeline.
+ Products that have a gender attribute are typically fashion and clothing items.
+ We acknowledge that this proxy does not necessarily represent how users relate to a given gender, yet we believe it to be a realistic approximation for research purposes.
+
+ We encourage research on fairness defined with respect to other attributes as well.
+
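As a rough illustration of the majority rule described above (the function and its inputs are hypothetical stand-ins, not Criteo's actual pipeline):

```python
from collections import Counter

def majority_gender_proxy(product_genders):
    """Hypothetical sketch of the proxy: majority vote over the 0/1 gender
    attributes of products seen in a user's timeline. On a tie, max() keeps
    the value that reached the top count first in insertion order."""
    counts = Counter(product_genders)
    return max(counts, key=counts.get)
```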
+
+ ### Limitations and interpretations
+
+ We remark that the proposed gender proxy does not provide a definition of gender.
+ Since we do not have access to the sensitive information, this is the best solution we have identified at this stage to identify bias in pseudonymized data, and we encourage any discussion on better approximations.
+ Although our research focuses on gender, this should not diminish the importance of investigating other types of algorithmic discrimination.
+ While this dataset provides an important application of fairness-aware algorithms in a high-risk domain, there are several fundamental limitations that cannot be addressed easily through data collection or curation processes.
+ These limitations include historical biases that affect a positive outcome for a given user, as well as the impossibility of verifying how close the gender proxy is to the real gender value.
+ Additionally, there might be bias due to market unfairness.
+ Such limitations and possible ethical concerns about the task should be taken into account when drawing conclusions from research using this dataset.
+ Readers should not interpret summary statistics of this dataset as ground truth, but rather as characteristics of the dataset only.
+
+
+ ## Challenges
+
+ The first challenge comes from handling the mixed-type columns that are common in tabular data: there are both numerical and categorical features that have to be embedded.
+ In addition, some of the features exhibit a long-tail distribution and the products suffer from popularity bias. Our dataset contains more than 1,000,000 rows, while current high-performing tabular models are under-explored at this scale.
+ An additional challenge comes from strongly imbalanced data: the positive-class proportion in our data is less than 0.007, which leads to challenges in training robust and fair machine learning models.
+ In our dataset there are no significant imbalances across demographic groups with respect to the protected attribute (both genders are sub-sampled to a 0.5 proportion; users with female profiles were shown fewer job ads, a 0.4 proportion, and slightly fewer senior-position jobs, a 0.48 proportion); however, there could be a hidden selection-bias effect.
+ This poses a problem for accurately assessing model performance.
+ More detailed statistics and exploratory analysis can be found in the supplemental material of the associated paper linked below.
+
+
+ ## Metrics
+
+ We strongly recommend measuring prediction quality using the negative log-likelihood (lower is better).
+
+ We recommend measuring fairness of ads by demographic parity conditioned on senior job offers:
+
+ $$ E[f(x) | protected\_attribute=1, senior=1] - E[f(x) | protected\_attribute=0, senior=1] $$
+
+ This corresponds to the average difference in predictions for senior job opportunities between the two gender groups (lower is better).
+ Intuitively, when this metric is low, it means the model is not biased towards presenting more senior job opportunities (e.g. Director of XXX) to one gender versus the other.
+
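Both metrics are straightforward to compute. Below is a minimal numpy sketch on synthetic scores and labels; the array names are placeholders standing in for the corresponding dataset columns and a model's predictions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the dataset columns and model scores f(x).
n = 10_000
click = rng.integers(0, 2, n)        # click column
protected = rng.integers(0, 2, n)    # protected_attribute column
senior = rng.integers(0, 2, n)       # senior column
f_x = rng.uniform(0.01, 0.99, n)     # model predictions in (0, 1)

# Negative log-likelihood of the click labels (lower is better).
nll = -np.mean(click * np.log(f_x) + (1 - click) * np.log(1 - f_x))

# Demographic parity gap conditioned on senior job offers (lower is better):
# E[f(x) | protected=1, senior=1] - E[f(x) | protected=0, senior=1]
dp_gap = (f_x[(protected == 1) & (senior == 1)].mean()
          - f_x[(protected == 0) & (senior == 1)].mean())
```

On the real data, `f_x` would come from a trained classifier and the three label arrays from the corresponding CSV columns.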
+
+ ## Example
+
+ You can start by running the example in `example.py` (requires numpy + torch).
+ This implements:
+ - a dummy classifier (totally fair yet not very useful)
+ - a logistic regression with embeddings for categorical features (largely unfair and useful)
+ - a "fair" logistic regression (relatively fair and useful)
+
+ The "fair" logistic regression is based on the method proposed by [Bechavod et al. 2017](https://arxiv.org/abs/1707.00044).
+
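As a generic illustration of the penalized-loss idea (this is our simplified sketch, not the repository's `example.py` and not an implementation of Bechavod et al.'s exact penalties), one can add a squared demographic-parity penalty to a logistic regression trained by gradient descent:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic toy data: feature 0 leaks the protected attribute a.
n = 2_000
a = rng.integers(0, 2, n).astype(float)            # protected attribute
X = np.column_stack([a + rng.normal(0.0, 1.0, n),  # feature leaking a
                     rng.normal(0.0, 1.0, n)])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X[:, 1]))).astype(float)

w = np.zeros(2)
lam, lr = 1.0, 0.1  # penalty strength and step size (arbitrary choices)

for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_nll = X.T @ (p - y) / n
    # Demographic-parity gap of the predictions; we penalize its square.
    gap = p[a == 1].mean() - p[a == 0].mean()
    s = p * (1.0 - p)  # sigmoid derivative, used in the gap's gradient
    grad_gap = ((X[a == 1] * s[a == 1, None]).mean(axis=0)
                - (X[a == 0] * s[a == 0, None]).mean(axis=0))
    w -= lr * (grad_nll + lam * 2.0 * gap * grad_gap)
```

Larger `lam` trades prediction quality for a smaller parity gap; the same pattern carries over to the embedding-based torch model.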
+
+ ## Citation
+
+ If you use the dataset in your research, please cite it using the following BibTeX entry:
+
+ ```
+ @misc{vladimirova2024fairjob,
+   title={FairJob: A Real-World Dataset for Fairness in Online Systems},
+   author={Mariia Vladimirova and Federico Pavone and Eustache Diemert},
+   year={2024},
+   eprint={2407.03059},
+   archivePrefix={arXiv},
+   url={https://arxiv.org/abs/2407.03059},
+ }
+ ```
fairjob.csv.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:defeacb5233e26edd4679e2097d3582bc7858620ae301528675f78131a35c18c
+ size 190748253