---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
dataset: NYUAD-ComNets/Indian_Male_Profession
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
    

# Model description

This model is part of a project targeting the debiasing of generative Stable Diffusion models.

LoRA text-to-image fine-tuning - NYUAD-ComNets/Indian_Male_Profession_Model

These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0, fine-tuned on the NYUAD-ComNets/Indian_Male_Profession dataset. Example images are shown below.

Prompt template: `a photo of a {profession}, looking at the camera, closeup headshot facing forward, ultra quality, sharp focus`

# How to use this model

```python
import random

import torch
from compel import Compel, ReturnedEmbeddingsType
from diffusers import DiffusionPipeline

negative_prompt = "cartoon, anime, 3d, painting, b&w, low quality"

# The twelve LoRA adapters, one per gender/ethnicity group.
models = ["NYUAD-ComNets/Asian_Female_Profession_Model", "NYUAD-ComNets/Black_Female_Profession_Model",
          "NYUAD-ComNets/White_Female_Profession_Model", "NYUAD-ComNets/Indian_Female_Profession_Model",
          "NYUAD-ComNets/Latino_Hispanic_Female_Profession_Model", "NYUAD-ComNets/Middle_Eastern_Female_Profession_Model",
          "NYUAD-ComNets/Asian_Male_Profession_Model", "NYUAD-ComNets/Black_Male_Profession_Model",
          "NYUAD-ComNets/White_Male_Profession_Model", "NYUAD-ComNets/Indian_Male_Profession_Model",
          "NYUAD-ComNets/Latino_Hispanic_Male_Profession_Model", "NYUAD-ComNets/Middle_Eastern_Male_Profession_Model"]

adapters = ["asian_female", "black_female", "white_female", "indian_female", "latino_female", "middle_east_female",
            "asian_male", "black_male", "white_male", "indian_male", "latino_male", "middle_east_male"]

# Load the SDXL base model.
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0",
                                             variant="fp16", use_safetensors=True,
                                             torch_dtype=torch.float16).to("cuda")

# Attach all twelve LoRA adapters to the pipeline.
for model, adapter in zip(models, adapters):
    pipeline.load_lora_weights(model, weight_name="pytorch_lora_weights.safetensors", adapter_name=adapter)

prof = 'doctor'

# Activate one adapter chosen uniformly at random, so generated faces are
# balanced across the twelve groups in expectation.
pipeline.set_adapters(random.choice(adapters))

# Compel builds the SDXL prompt embeddings from both text encoders and
# supports prompts longer than the 77-token limit.
compel = Compel(tokenizer=[pipeline.tokenizer, pipeline.tokenizer_2],
                text_encoder=[pipeline.text_encoder, pipeline.text_encoder_2],
                returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
                requires_pooled=[False, True], truncate_long_prompts=False)

conditioning, pooled = compel("a photo of a {}, looking at the camera, closeup headshot facing forward, ultra quality, sharp focus".format(prof))
negative_conditioning, negative_pooled = compel(negative_prompt)
[conditioning, negative_conditioning] = compel.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])

image = pipeline(prompt_embeds=conditioning, negative_prompt_embeds=negative_conditioning,
                 pooled_prompt_embeds=pooled, negative_pooled_prompt_embeds=negative_pooled,
                 num_inference_steps=40).images[0]

image.save('x.jpg')
```
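
Instead of sampling one adapter at random, you can cycle through all twelve adapters to produce a demographically balanced set of images for a given profession. Below is a minimal sketch, assuming the `pipeline`, `adapters`, `compel`, and `negative_prompt` objects from the snippet above are already set up; the output filenames are illustrative.

```python
# A minimal sketch: generate one image per adapter so the resulting set is
# balanced across all twelve gender/ethnicity groups.
# Assumes `pipeline`, `adapters`, `compel`, and `negative_prompt` from above.
prof = 'doctor'
conditioning, pooled = compel("a photo of a {}, looking at the camera, closeup headshot facing forward, ultra quality, sharp focus".format(prof))
negative_conditioning, negative_pooled = compel(negative_prompt)
[conditioning, negative_conditioning] = compel.pad_conditioning_tensors_to_same_length([conditioning, negative_conditioning])

for adapter in adapters:
    pipeline.set_adapters(adapter)  # activate exactly one LoRA at a time
    image = pipeline(prompt_embeds=conditioning, negative_prompt_embeds=negative_conditioning,
                     pooled_prompt_embeds=pooled, negative_pooled_prompt_embeds=negative_pooled,
                     num_inference_steps=40).images[0]
    image.save('{}_{}.jpg'.format(prof, adapter))  # illustrative filename
```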


# Examples

| | | |
|:-------------------------:|:-------------------------:|:-------------------------:|
|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./285.jpg"> |  <img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./362.jpg">|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./212.jpg">|
|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./23.jpg"> |  <img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./89.jpg">|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./43.jpg">|
|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_6.png"> |  <img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_7.png">|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_8.png">|
|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_9.png"> |  <img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_10.png">|<img width="500" alt="screen shot 2017-08-07 at 12 18 15 pm" src="./image_11.png">|




# Training data

The NYUAD-ComNets/Indian_Male_Profession dataset was used to fine-tune stabilityai/stable-diffusion-xl-base-1.0.
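
To inspect the training data, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset card for the actual splits and column names.

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub.
# The "train" split name is an assumption; check the dataset card.
ds = load_dataset("NYUAD-ComNets/Indian_Male_Profession", split="train")
print(ds)            # features and number of rows
print(ds[0].keys())  # column names of the first example
```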



# Configurations

LoRA for the text encoder was not enabled; the adapter weights apply to the UNet only.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
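
Since training used the fp16-fix VAE, you may want to pair inference with the same VAE. A minimal sketch follows; swapping the VAE at inference time is a suggestion, not something the released weights require.

```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Load the numerically stable fp16 VAE that was used during training.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# Pass it to the pipeline so latent decoding runs cleanly in fp16.
pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0",
                                             vae=vae, variant="fp16", use_safetensors=True,
                                             torch_dtype=torch.float16).to("cuda")
```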



# BibTeX entry and citation info

```bibtex
@article{aldahoul2025ai,
  title={AI-generated faces influence gender stereotypes and racial homogenization},
  author={AlDahoul, Nouar and Rahwan, Talal and Zaki, Yasir},
  journal={Scientific Reports},
  volume={15},
  number={1},
  pages={14449},
  year={2025},
  publisher={Nature Publishing Group UK London}
}


@article{aldahoul2024ai,
  title={AI-generated faces free from racial and gender stereotypes},
  author={AlDahoul, Nouar and Rahwan, Talal and Zaki, Yasir},
  journal={arXiv preprint arXiv:2402.01002},
  year={2024}
}


@misc{ComNets,
      url={https://huggingface.co/NYUAD-ComNets/Indian_Male_Profession_Model},
      title={Indian_Male_Profession_Model},
      author={AlDahoul, Nouar and Rahwan, Talal and Zaki, Yasir}
}
```