Title: Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models

URL Source: https://arxiv.org/html/2312.10065

Published Time: Tue, 19 Dec 2023 15:43:16 GMT

Markdown Content:
Adhithya Saravanan 1,2, Rafal Kocielnik 2, Roy Jiang 2, Pengrui Han 3, Anima Anandkumar 2,4

1 University of Cambridge, 2 California Institute of Technology, 3 Carleton College, 4 Nvidia

{aps85@cam.ac.uk, rafalko@caltech.edu}

###### Abstract

Text-to-image diffusion models have been adopted into key commercial workflows, such as art generation and image editing. Characterising the implicit social biases they exhibit, such as gender and racial stereotypes, is a necessary first step in avoiding discriminatory outcomes. While existing studies on social bias focus on image generation, the biases exhibited in alternate applications of diffusion-based foundation models remain under-explored. We propose methods that use synthetic images to probe two applications of diffusion models, image editing and classification, for social bias. Using our methodology, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model. Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services.

![Image 1: Refer to caption](https://arxiv.org/html/2312.10065v1/extracted/5276204/figures/task_bias_preview.png)

Figure 1: Impact of social bias in diffusion-based foundation models on downstream tasks, uncovered using synthetic test images. Task 1: Diffusion-based editing of images for different intersectional groups results in stereotyped gender flips and skin-tone changes (we depict average faces using Facer [[1](https://arxiv.org/html/2312.10065v1/#bib.bib1)]; example individual images can be found in §[7.1.2](https://arxiv.org/html/2312.10065v1/#S7.SS1.SSS2)). Task 2: Zero-shot diffusion-based classification of intersectional images may result in hallucinated associations with professions and biased classification. Here we depict a small representative subset of this classification task; the full set of images can be found in §[7.2.4](https://arxiv.org/html/2312.10065v1/#S7.SS2.SSS4). Aggregate results are shown in Figure [3](https://arxiv.org/html/2312.10065v1/#S1.F3), with details in Tables [4.1](https://arxiv.org/html/2312.10065v1/#S4.SS1) and [7.2.2](https://arxiv.org/html/2312.10065v1/#S7.SS2.SSS2).

1 Introduction
--------------

Recent advances in generative text-to-image models have been fueled by the application of denoising diffusion probabilistic models [[2](https://arxiv.org/html/2312.10065v1/#bib.bib2)]. Notably, DALL-E [[3](https://arxiv.org/html/2312.10065v1/#bib.bib3), [4](https://arxiv.org/html/2312.10065v1/#bib.bib4)], Imagen [[5](https://arxiv.org/html/2312.10065v1/#bib.bib5)], and Stable Diffusion [[6](https://arxiv.org/html/2312.10065v1/#bib.bib6)] have emerged as prominent examples, showcasing their strong visio-linguistic understanding through the production of high-resolution images across diverse contexts.

Generative models tackle the challenging task of modeling the underlying data distribution, which often leads to an informative representation of the world that can be utilized for downstream tasks, such as classification. In natural language processing, many successful pre-trained models are generative (i.e., language models). Generative pre-training is also being increasingly adopted for downstream vision tasks [[7](https://arxiv.org/html/2312.10065v1/#bib.bib7), [8](https://arxiv.org/html/2312.10065v1/#bib.bib8)], with recent works achieving competitive results against CLIP on zero-shot image classification using text-to-image foundation models with no additional training. Other downstream tasks include segmentation [[9](https://arxiv.org/html/2312.10065v1/#bib.bib9)], dense correspondence [[10](https://arxiv.org/html/2312.10065v1/#bib.bib10)], and image retrieval [[11](https://arxiv.org/html/2312.10065v1/#bib.bib11)], as well as generative tasks such as text-guided image editing [[12](https://arxiv.org/html/2312.10065v1/#bib.bib12), [13](https://arxiv.org/html/2312.10065v1/#bib.bib13)] and in-painting.

Simultaneously, a growing concern has been raised by works such as [[14](https://arxiv.org/html/2312.10065v1/#bib.bib14)] and [[15](https://arxiv.org/html/2312.10065v1/#bib.bib15)], which underscore the presence of various social biases—ranging from racial and religious biases to those concerning sexual orientation—embedded within these models. These biases can be attributed to the contrastive pre-training of CLIP (the encoder of most text-to-image models) and the generative training of the text-to-image models themselves. This is because the internet-scale datasets used in both stages reflect and compound the biases in society [[16](https://arxiv.org/html/2312.10065v1/#bib.bib16)], though the tendency of models to amplify imbalances in their training data has also been audited [[17](https://arxiv.org/html/2312.10065v1/#bib.bib17)]. As the utilization of text-to-image foundation models extends beyond generative tasks, encompassing discriminative tasks like classification, the potential for these models to yield discriminatory or harmful outputs, thereby reinforcing stereotypes, demands careful consideration.

Our Approach: In this work, we probe social bias in two applications of text-to-image foundation models, image editing [[18](https://arxiv.org/html/2312.10065v1/#bib.bib18), [19](https://arxiv.org/html/2312.10065v1/#bib.bib19)] and zero-shot classification [[7](https://arxiv.org/html/2312.10065v1/#bib.bib7), [8](https://arxiv.org/html/2312.10065v1/#bib.bib8)], using bias testing methods designed to resemble downstream workflows. We also revisit the use of synthetic images in bias testing, which supports flexibility over static and expensive human-curated datasets.

![Image 2: Refer to caption](https://arxiv.org/html/2312.10065v1/extracted/5276204/figures/framework_figures/2.png)

Figure 2: Overview of our approach: Our method involves defining two sets of attribute concepts, X and Y, and using either a) synthetically generated images or b) images from curated datasets to represent these concepts. We also define target concept sets, A and B, to evaluate model behavior in image-based tasks. We use text prompts created by filling in predefined text templates in two downstream tasks: diffusion-based image editing and zero-shot classification. Our main goal is to analyze the biases of the foundation model across tested concepts and understand their implications on downstream tasks through the analysis of classification and image-editing results.

Prior work: Recent works predominantly assess bias in text-to-image models using two methods: 1) comparisons in CLIP embedding space [[20](https://arxiv.org/html/2312.10065v1/#bib.bib20), [15](https://arxiv.org/html/2312.10065v1/#bib.bib15)], and 2) attribute (e.g., race, gender) classifiers [[14](https://arxiv.org/html/2312.10065v1/#bib.bib14)]. These approaches are confined to image generation and don’t extend to discriminative tasks or text-guided image editing. Krojer et al. [[11](https://arxiv.org/html/2312.10065v1/#bib.bib11)] present biases in image retrieval but rely on human-curated datasets. Perera et al. [[21](https://arxiv.org/html/2312.10065v1/#bib.bib21)] investigate the impact of training data on social bias in diffusion-based face generation models. The utilization of synthetic image data as supplementary training data to address fairness discrepancies across social groups in recognition tasks has been explored in previous studies [[22](https://arxiv.org/html/2312.10065v1/#bib.bib22), [23](https://arxiv.org/html/2312.10065v1/#bib.bib23), [24](https://arxiv.org/html/2312.10065v1/#bib.bib24)]. There have also been efforts to benchmark recognition models using synthetic data by perturbing attributes, using GANs, to assess accuracy [[25](https://arxiv.org/html/2312.10065v1/#bib.bib25)]. Our work develops flexible and scalable bias testing workflows for two downstream applications, image editing and classification.

Findings: In our experiments, we use a neutral photo (one without any aspects revealing the tested attributes, e.g., clothing indicative of a particular profession) representing a social identity and prompt the model to edit it into a specific profession, mimicking real-world applications such as professional head-shot generation [[26](https://arxiv.org/html/2312.10065v1/#bib.bib26)]. We observe higher rates of unintended gender alteration when editing images of women into high-paid roles (78%), compared to men (6%) (Fig. [3](https://arxiv.org/html/2312.10065v1/#S1.F3)-Left). We further observe a trend towards skin lightening when editing images of Black individuals to the same high-paid roles (Fig. [3](https://arxiv.org/html/2312.10065v1/#S1.F3)-Middle), and to a lesser extent when editing to low-paid roles.

We also analyzed the use of Stable Diffusion as a classifier, following [[8](https://arxiv.org/html/2312.10065v1/#bib.bib8)]. Our results reveal gender-biased associations in classifying professions across profession-neutral images of different social groups. For instance, in binary classification between a male- and a female-dominated profession, the male-dominated profession was selected for synthetic images of Males 64% of the time, compared to 28% for images of Females (Fig. [3](https://arxiv.org/html/2312.10065v1/#S1.F3)-Right). This indicates a strong learned relationship between visual cues concerning attributes, such as gender, and target concepts, such as professions. The bias towards stereotyped professions also amplifies when the number of noise samples used to calculate the classification objective is increased, a hyper-parameter linked to higher classification accuracy [[8](https://arxiv.org/html/2312.10065v1/#bib.bib8), [7](https://arxiv.org/html/2312.10065v1/#bib.bib7)] (Fig. [3](https://arxiv.org/html/2312.10065v1/#S1.F3)-Right). We therefore demonstrate that optimizing for accuracy can inadvertently increase association bias. These learned correlations pose a potential harm to performance and fairness in classification tasks that confront learned stereotypes.

![Image 3: Refer to caption](https://arxiv.org/html/2312.10065v1/extracted/5276204/figures/DownstreamBias2-combined.png)

Figure 3: Left: Percentage of gender flips (CLIP) from editing Male and Female images to high-paid roles in diffusion-based image editing. Middle: Skin-color changes (↑ denotes a change towards lighter skin color, using an established methodology described in §[3](https://arxiv.org/html/2312.10065v1/#S3)) from editing images of White and Black individuals using high-paid prompts in diffusion-based image editing. Right: Percentage of diffusion-based classifier choices towards male-dominated professions in binary classification tasks between a male- and female-dominated profession pair (at different numbers of noise samples in the estimation of the classification objective).

Contributions: In this work we offer the following contributions:

*   To the best of our knowledge, we are the first to define bias testing methods for two downstream applications of text-to-image foundation models: image editing and zero-shot classification. We leverage synthetic images to support flexibility and scalability.
*   We run experiments on Stable Diffusion with these downstream tasks and show the presence of severe social biases across professions for various intersectional groups.
*   We show that increasing hyper-parameters that improve performance in downstream tasks, such as the number of noise samples (classification), also inadvertently amplifies social bias.

2 Preliminaries
---------------

Social Bias in ML: Intersectional social bias refers to the overlapping and inter-dependent forms of discrimination that individuals face due to any combination of their race, gender, class, sexuality, or other identity factors. Several works have studied how intersectionality affects the manifestation of bias in ML, including in word-embedding [[27](https://arxiv.org/html/2312.10065v1/#bib.bib27), [28](https://arxiv.org/html/2312.10065v1/#bib.bib28)], language [[29](https://arxiv.org/html/2312.10065v1/#bib.bib29), [30](https://arxiv.org/html/2312.10065v1/#bib.bib30), [31](https://arxiv.org/html/2312.10065v1/#bib.bib31)], and image-generation [[15](https://arxiv.org/html/2312.10065v1/#bib.bib15), [14](https://arxiv.org/html/2312.10065v1/#bib.bib14)] models. Another consideration is the distinction between extrinsic and intrinsic bias, described in [[32](https://arxiv.org/html/2312.10065v1/#bib.bib32)] as the biases that originate from pre-training and fine-tuning, respectively. As there is no fine-tuning on task-specific data when re-purposing text-to-image models for the downstream tasks presented, we refer to any biases present here as intrinsic.

Diffusion models: Details regarding denoising diffusion models are found in [[33](https://arxiv.org/html/2312.10065v1/#bib.bib33), [2](https://arxiv.org/html/2312.10065v1/#bib.bib2), [34](https://arxiv.org/html/2312.10065v1/#bib.bib34), [35](https://arxiv.org/html/2312.10065v1/#bib.bib35)].

Diffusion-based Image Editing: In CLIP latent-space models (e.g., [[4](https://arxiv.org/html/2312.10065v1/#bib.bib4), [6](https://arxiv.org/html/2312.10065v1/#bib.bib6)]), image generation initializes diffusion from a random latent vector, whereas image editing initializes it from an embedding of the image to be edited [[12](https://arxiv.org/html/2312.10065v1/#bib.bib12), [13](https://arxiv.org/html/2312.10065v1/#bib.bib13)]. Often, the model is shared between image generation and editing tasks, the differences being the starting point (the latent embedding) and the hyper-parameters.

A crucial hyper-parameter, “strength”, defaulting to 0.8 (max: 1.0), controls noise addition to the reference image. Higher values result in more noise and denoising iterations, yielding edits that better match the edit prompt but are less consistent with the original image.
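
To make the role of “strength” concrete, here is a minimal NumPy sketch (not the diffusers implementation) that maps strength to a diffusion timestep under a standard DDPM linear beta schedule, an assumption for illustration, and applies the forward noising process to a stand-in latent:

```python
import numpy as np

def partially_noise(x0, strength, num_train_timesteps=1000, seed=0):
    # Illustrative only: map edit strength to a diffusion timestep, then apply
    # the forward process x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps.
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, num_train_timesteps)  # DDPM linear schedule
    alpha_bar = np.cumprod(1.0 - betas)
    t = min(int(strength * num_train_timesteps), num_train_timesteps - 1)
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones((4, 4))  # stand-in for an image latent
mild = partially_noise(x0, strength=0.6)
strong = partially_noise(x0, strength=1.0)
# The stronger edit starts denoising from a latent farther from the original,
# so the edit is freer to diverge from the reference image:
assert np.linalg.norm(strong - x0) > np.linalg.norm(mild - x0)
```

This mirrors why strength 1.0 edits match the prompt well but lose consistency with the original: at strength 1.0 almost no information from the reference latent survives the noising step.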

Diffusion-based Classification: The key idea in diffusion-based classification is that a diffusion model’s ability to denoise a noised image, given a text description of a label, is a proxy for that label’s likelihood [[7](https://arxiv.org/html/2312.10065v1/#bib.bib7), [8](https://arxiv.org/html/2312.10065v1/#bib.bib8)]. The classification objective, the evidence lower bound (ELBO), is defined as a Monte-Carlo estimate of the expected noise reconstruction losses (the $\epsilon$-prediction loss), obtained by repeatedly adding Gaussian random noise $\epsilon$ to the image.

Among the various hyper-parameters in the design of the classifier, the most critical one is the number of noise samples employed to compute the classification objective. For an input image $x$ and class set $C$, the text prompt $c_i$ that minimizes the noise reconstruction loss is chosen:

$$\arg\min_{c_{i}\in C}\; E_{t,\epsilon}\left\|\epsilon-\epsilon_{\theta}(x_{t},c_{i})\right\|_{2}^{2} \qquad (1)$$
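
As a concrete illustration of Eq. (1), the sketch below implements the Monte-Carlo arg-min over class prompts with a toy forward process and a stand-in denoiser; `templates` and `eps_theta` here are hypothetical constructions, not the paper’s model:

```python
import numpy as np

def classify(x, classes, eps_theta, n_samples=10, T=1000, seed=0):
    # Eq. (1): pick the class prompt whose conditional noise prediction best
    # reconstructs the added noise, averaged over Monte-Carlo noise samples.
    rng = np.random.default_rng(seed)
    losses = {c: 0.0 for c in classes}
    for _ in range(n_samples):
        t = int(rng.integers(1, T))              # random timestep
        eps = rng.standard_normal(x.shape)       # Gaussian noise to add
        x_t = x + (0.1 * t / T) * eps            # toy forward process
        for c in classes:
            losses[c] += np.sum((eps - eps_theta(x_t, t, c)) ** 2)
    return min(losses, key=losses.get)           # arg min over class prompts

# Toy stand-in denoiser: each class has a known clean template, and the
# "denoiser" inverts the toy forward process using its class template, so the
# matching class reconstructs the noise exactly (zero loss).
templates = {"nurse": np.zeros(4), "mechanic": np.ones(4)}
def eps_theta(x_t, t, c):
    return (x_t - templates[c]) / (0.1 * t / 1000)

x = np.ones(4)  # this image matches the "mechanic" template
assert classify(x, list(templates), eps_theta, n_samples=5) == "mechanic"
```

Increasing `n_samples` lowers the variance of the Monte-Carlo estimate, which is the hyper-parameter the paper later links to both higher accuracy and amplified bias.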

3 Methodology
-------------

Synthetic data generation: We use stable-diffusion-2-1 (SD v2.1) for synthetic dataset creation. The number of denoising steps during image generation was set to 100 to produce high-fidelity images, and the guidance scale was set to 8.5. We fixed the dataset generation prompt template as “A color photograph of a _____, headshot, high-quality.”, based on [[36](https://arxiv.org/html/2312.10065v1/#bib.bib36)].
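
Instantiating the fixed template is straightforward; the identity list below is an illustrative subset of the intersectional groups studied (§4.1), not the paper’s exact term list:

```python
# Fixed dataset-generation prompt template from the text; the identity list is
# an illustrative subset of the intersectional groups studied.
TEMPLATE = "A color photograph of a {}, headshot, high-quality."
identities = ["Caucasian man", "Caucasian woman",
              "African-American man", "African-American woman"]
prompts = [TEMPLATE.format(identity) for identity in identities]
# Each prompt would then be passed to SD v2.1 with 100 denoising steps and a
# guidance scale of 8.5 to generate the 256 images per social group.
assert prompts[0] == "A color photograph of a Caucasian man, headshot, high-quality."
```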

Downstream tasks setup: We demonstrate our bias testing methods on downstream applications of the Stable Diffusion model stable-diffusion-2-1. For classification, we use the default setup [[8](https://arxiv.org/html/2312.10065v1/#bib.bib8)]. We vary the number of noise samples in the calculation of the classification objective, the ELBO (1, 10, and 100 samples). Zero-shot classification prompts follow the template “A portrait of a _____.”

We use StableDiffusionImg2ImgPipeline from Hugging Face, which uses the diffusion-denoising mechanism proposed in [[37](https://arxiv.org/html/2312.10065v1/#bib.bib37)], for text-guided image editing. We vary the edit strength and report results for 0.6, 0.8 (default), and 1.0. We use default values for the number of inference steps (50) and the guidance scale (7.5). We construct edit prompts using the template “A color photograph of a _____, headshot, high-quality.”, in line with the synthetic dataset generation prompts.

Bias in Diffusion-based Image Editing: Bias testing in image generation focuses on determining attributes of the images generated for a target concept prompt, while bias testing in editing must examine changes in pre-existing visual attributes. We quantify changes during editing through zero-shot gender classification using CLIP, between ‘man’ and ‘woman’, as in [[38](https://arxiv.org/html/2312.10065v1/#bib.bib38)]. While this binary classification oversimplifies gender, a complex, non-binary construct, it provides an initial framework for bias analysis. We employ Facer [[1](https://arxiv.org/html/2312.10065v1/#bib.bib1)], an open-source Python package, to compute the average face of sets of original and edited images. Predicting race based on visual cues is challenging, especially through CLIP [[39](https://arxiv.org/html/2312.10065v1/#bib.bib39)]. Instead, we focus on skin color as a quantifiable metric, employing the Individual Typology Angle (ITA) [[40](https://arxiv.org/html/2312.10065v1/#bib.bib40)] as a proxy. We use the YCbCr algorithm [[41](https://arxiv.org/html/2312.10065v1/#bib.bib41)] to determine skin pixels in the average faces and calculate the ITA, a statistical dermatology value, from their RGB values, through the implementation used in [[42](https://arxiv.org/html/2312.10065v1/#bib.bib42), [43](https://arxiv.org/html/2312.10065v1/#bib.bib43)]. ITA is versatile, as it is also commonly mapped to discrete skin-tone classes, such as the Fitzpatrick Scale [[44](https://arxiv.org/html/2312.10065v1/#bib.bib44)].
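
A self-contained sketch of these two measurements: a YCbCr chrominance rule for skin pixels and the ITA computed from CIELAB values (sRGB, D65 white point). The conversion constants and thresholds are the commonly published ones, but this is an illustrative reimplementation, not the exact pipeline of [42, 43]:

```python
import math

def srgb_to_lab(rgb):
    # sRGB (0-255) -> linear RGB -> XYZ (D65) -> CIELAB.
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (v / 255.0 for v in rgb)]
    x = 0.4124564 * lin[0] + 0.3575761 * lin[1] + 0.1804375 * lin[2]
    y = 0.2126729 * lin[0] + 0.7151522 * lin[1] + 0.0721750 * lin[2]
    z = 0.0193339 * lin[0] + 0.1191920 * lin[1] + 0.9503041 * lin[2]
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)  # L*, a*, b*

def ita(rgb):
    # Individual Typology Angle in degrees: larger values = lighter skin.
    L, _, b = srgb_to_lab(rgb)
    return math.degrees(math.atan2(L - 50.0, b))

def is_skin_ycbcr(rgb):
    # Classic YCbCr chrominance thresholds for skin-pixel detection.
    r, g, b = rgb
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return 77 <= cb <= 127 and 133 <= cr <= 173

light, dark = (231, 180, 143), (96, 65, 48)  # illustrative skin tones
assert ita(light) > ita(dark)                # lighter tone -> larger ITA
assert is_skin_ycbcr(light) and not is_skin_ycbcr((0, 255, 0))
```

In the paper’s workflow, the skin mask would first select pixels of the average face, and the ITA of the edited set minus that of the original set would give the reported change.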

Bias in Diffusion-based Classification: We introduce attribute sets $X$ and $Y$ (e.g., terms for male and female) and target sets $A$ and $B$ (e.g., professions dominated by each gender). We consider image datasets $\mathcal{D}_{X}$ and $\mathcal{D}_{Y}$, which can be synthetic (generated by a generator $G$) or human-curated, and assume neutrality concerning the concepts in $A$ and $B$. By classifying images into profession pairs from $A$ and $B$ and averaging the results, we gauge the attribute-to-target concept association. We introduce an association measure, and a differential variant, to quantify the differences in the associations of $X$ and $Y$. Note that $c$ is the decision of the classifier.

$$S(\mathcal{D},A,B)=\underset{x\in\mathcal{D}}{\mathrm{avg}}\;\underset{(a,b)\in A\times B}{\mathrm{avg}}\;p(c=a\mid\{a,b\},x) \qquad (2)$$

$$S(\mathcal{D}_{X},\mathcal{D}_{Y},A,B)=S(\mathcal{D}_{X},A,B)-S(\mathcal{D}_{Y},A,B)\in[-1,1] \qquad (3)$$
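
Equations (2) and (3) reduce to simple averaging once the classifier’s binary-choice probabilities are available; a minimal sketch with toy probabilities and hypothetical names:

```python
def association(probs, D, A, B):
    # Eq. (2): average, over images x in D and pairs (a, b) in A x B, of the
    # probability that the classifier picks the A-profession from {a, b}.
    vals = [probs[(a, b, x)] for x in D for a in A for b in B]
    return sum(vals) / len(vals)

def differential_association(probs, D_X, D_Y, A, B):
    # Eq. (3): difference of associations for the two attribute datasets, in [-1, 1].
    return association(probs, D_X, A, B) - association(probs, D_Y, A, B)

# Toy example, one profession pair and one image per attribute set; the
# probabilities mirror the male/female rates reported in the Findings (64% vs 28%).
A, B = ["mechanic"], ["nurse"]
D_X, D_Y = ["img_male"], ["img_female"]
probs = {("mechanic", "nurse", "img_male"): 0.64,
         ("mechanic", "nurse", "img_female"): 0.28}
assert abs(differential_association(probs, D_X, D_Y, A, B) - 0.36) < 1e-9
```

A differential association near 0 would indicate that both attribute groups are classified into the profession pairs at similar rates; positive values indicate the stereotyped direction.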

4 Datasets and Results
----------------------

### 4.1 Datasets

Human-Curated Dataset: We run our analyses on the human-curated Chicago Face Dataset (CFD) [[45](https://arxiv.org/html/2312.10065v1/#bib.bib45)]. We conduct experiments on the images of the self-identified White and Black Males and Females. We use the whole dataset for classification, and randomly sample 25 neutral facial-expression images per social group for image editing.

Synthetic Data: We also generate synthetic datasets containing 256 images for a range of intersectional social identities (Caucasian and African-American men and women). We use the whole dataset for classification, and randomly sample 25 images per social group for image editing.

Biases: We focus on professions, which are common for testing social biases in generative models [[46](https://arxiv.org/html/2312.10065v1/#bib.bib46)]. For image editing, we focus on the two highest-paid professions, ‘doctors’ and ‘CEOs’, and the two lowest-paid professions, ‘dishwashers’ (‘dishwasher-worker’ is used to avoid generations of the appliance) and ‘fast-food workers’, as per US Labor Statistics [[47](https://arxiv.org/html/2312.10065v1/#bib.bib47)]. For classification, we pick the top five male- and female-dominated professions, according to US Labor Statistics [[46](https://arxiv.org/html/2312.10065v1/#bib.bib46)]. Male-dominated roles include ‘carpenters’, ‘plumbers’, ‘truck drivers’, ‘mechanics’, and ‘construction workers’; female-dominated roles include ‘babysitters’, ‘secretaries’, ‘housekeepers’, ‘nurses’, and ‘receptionists’.

| Dataset | Social Identity (X) | Edit concepts | Δ Gender (CLIP), strength 0.6 | strength 0.8 | strength 1.0 |
| --- | --- | --- | --- | --- | --- |
| CFD | White Female | High-paid professions | 0.18 | 0.48 | **0.76** |
| CFD | White Male | High-paid professions | 0.20 | 0.20 | 0.08 |
| CFD | Black Female | High-paid professions | 0.20 | 0.42 | **0.72** |
| CFD | Black Male | High-paid professions | 0.10 | 0.04 | 0.08 |
| SD v2.1 | Caucasian Woman | High-paid professions | 0.04 | 0.24 | **0.84** |
| SD v2.1 | Caucasian Man | High-paid professions | 0 | 0.02 | 0.06 |
| SD v2.1 | African-Amer. Woman | High-paid professions | 0.02 | 0.36 | **0.78** |
| SD v2.1 | African-Amer. Man | High-paid professions | 0 | 0 | 0.02 |
| CFD | White Female | Low-paid professions | 0.02 | 0.08 | 0.30 |
| CFD | White Male | Low-paid professions | 0.38 | **0.62** | **0.56** |
| CFD | Black Female | Low-paid professions | 0.02 | 0.16 | 0.28 |
| CFD | Black Male | Low-paid professions | 0.22 | 0.42 | **0.58** |
| SD v2.1 | Caucasian Woman | Low-paid professions | 0.06 | 0.20 | 0.48 |
| SD v2.1 | Caucasian Man | Low-paid professions | 0.02 | 0.20 | 0.36 |
| SD v2.1 | African-Amer. Woman | Low-paid professions | 0.06 | 0.38 | 0.50 |
| SD v2.1 | African-Amer. Man | Low-paid professions | 0.22 | 0.32 | 0.44 |

Table 1: For each row, we edit 25 original images into two professions. The high-paid professions are ‘doctor’ and ‘CEO’; the low-paid professions are ‘dishwasher’ and ‘fastfood-worker’. This results in 50 edited images, per edit strength, in each row. ‘Δ Gender (CLIP)’ columns: percentage of edited images that differ in gender from the original image; we embolden results where more than half the edits alter the gender. ‘Δ Skin-Color (ITA)’ columns: change in ITA between the average face of the edited set and that of the original set of images (↓ skin becomes darker, ↑ skin becomes lighter); we embolden changes over ±15 points. Absolute ITA values are found in Appendix [7.1.4](https://arxiv.org/html/2312.10065v1/#S7.SS1.SSS4).