Title: Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis

URL Source: https://arxiv.org/html/2209.08891

Published Time: Thu, 11 Jan 2024 01:58:27 GMT

Lukas Struppek (struppek@cs.tu-darmstadt.de)
Dominik Hintersdorf (hintersdorf@cs.tu-darmstadt.de)
Technical University of Darmstadt

Felix Friedrich (friedrich@cs.tu-darmstadt.de)
Technical University of Darmstadt, Hessian Center for AI (hessian.AI)

Manuel Brack (brack@cs.tu-darmstadt.de)
German Center for Artificial Intelligence (DFKI), Technical University of Darmstadt

Patrick Schramowski (schramowski@cs.tu-darmstadt.de)
German Center for Artificial Intelligence (DFKI), Technical University of Darmstadt, Hessian Center for AI (hessian.AI), LAION

Kristian Kersting (kersting@cs.tu-darmstadt.de)
Technical University of Darmstadt, Centre for Cognitive Science of Darmstadt, Hessian Center for AI (hessian.AI), German Center for Artificial Intelligence (DFKI)

Abstract

Models for text-to-image synthesis, such as DALL-E 2 and Stable Diffusion, have recently drawn a lot of interest from academia and the general public. These models are capable of producing high-quality images that depict a variety of concepts and styles when conditioned on textual descriptions. However, these models adopt cultural characteristics associated with specific Unicode scripts from their vast amount of training data, which may not be immediately apparent. We show that by simply inserting single non-Latin characters in the textual description, common models reflect cultural biases in their generated images. We analyze this behavior both qualitatively and quantitatively and identify a model’s text encoder as the root cause of the phenomenon. Such behavior can be interpreted as a model feature, offering users a simple way to customize the image generation and reflect their own cultural background. Yet, malicious users or service providers may also try to intentionally bias the image generation. One goal might be to create racist stereotypes by replacing Latin characters with similarly-looking characters from non-Latin scripts, so-called homoglyphs. To mitigate such unnoticed script attacks, we propose a novel homoglyph unlearning method to fine-tune a text encoder, making it robust against homoglyph manipulations.

1 Introduction


Figure 1: Example of homoglyph manipulations and the resulting cultural biases in the DALL-E 2 pipeline. The model has been queried with the prompt "A photo of an actress". Using only Latin characters in the text, the model generates pictures of people with female appearances and different cultural backgrounds. However, replacing the o in the text with visually barely distinguishable characters, so-called homoglyphs, from the Korean (Hangul), Indian (Oriya), or Arabic script leads to the generation of images that clearly reflect cultural stereotypes and influences, including facial features, clothing, and jewelry. Underlining is used only to indicate the manipulation, which otherwise could barely be seen with the naked eye.

In recent months, text-driven image-generation models have received a lot of attention from researchers and the public. Provided with a simple textual description, the so-called prompt, they are able to generate high-quality images from different domains and styles. These models are trained on large collections of public data from the internet, yet little is known about their learned representation and behavior. Previous research on text-guided image generation mainly focused on improving the generated images' quality and the models' understanding of complex textual descriptions (Song & Ermon, 2020; Nichol & Dhariwal, 2021; Hong et al., 2023; Saharia et al., 2022; Nichol et al., 2022).

Our research takes another direction and showcases the models' surprising behavior on prompts containing single non-Latin characters. Common text-to-image synthesis models are already known to be biased towards various societal representations, such as gender and ethnicity (Bianchi et al., 2023; Schramowski et al., 2023; Luccioni et al., 2023; Friedrich et al., 2023), if prompted with standard Latin characters. We go one step further and show that cultural biases and stereotypes can explicitly be triggered by inserting single non-Latin characters into a prompt. For example, DALL-E 2 (Ramesh et al., 2022) generates facial images with Asian or Indian appearance and stereotypes when provided with a generic description of a person and a single character replaced with a Korean or Indian character, as illustrated in Fig. 1. We identified similar behavior across different models, domains, and Unicode scripts, where the insertion of a single non-Latin character is sufficient to induce cultural biases in the generated images.

Algorithmic fairness and discriminatory behavior are well-known, extensively researched (Pastaltzidis et al., 2022; Buyl et al., 2022; Kasy & Abebe, 2021; Kallus & Zhou, 2021; Mehrabi et al., 2022), and of great interest even outside the academic community (Mac, 2021). In contrast, we show that Stable Diffusion (Rombach et al., 2022) and DALL-E 2 are very character-sensitive, and biased behavior can also be triggered explicitly on a character level. By adding non-Latin characters from local language scripts, users can move the image generation closer to their individual culture and break away from existing Western biases. It enables a simple way to express certain cultures in the image generation without requiring major changes to the prompt.

However, malicious parties might also misuse this model behavior to intentionally add specific cultural stereotypes to cause harm. For example, a malicious prompt engineering tool could be used to force the generation of offensive or discriminatory images from benign text descriptions, harming users or a model's reputation. Imagine a user generating images of "an evil person", but instead of resulting in images depicting people of various groups of society, the model only generates faces reflecting a specific group. While this is clearly an undesired bias, users might not be aware of this fact due to their own (implicit) biases. Still, the results may affirm human stereotypes and be perceived as racist or discriminatory.

In this work, we present the first study of text-guided image generation models when conditioned on descriptions that contain non-Latin Unicode characters. Our research demonstrates that replacing standard Latin characters with visually similar ones, so-called homoglyphs, allows any party to disrupt the image generation while making the manipulations hard to detect with the naked eye. More importantly, we show that homoglyphs from non-Latin scripts not only influence the image generation in general but also induce stereotypes and biases from the cultural circle of the corresponding scripts. We emphasize that such model behavior is not necessarily bad and may even be desirable, as it allows the models to reflect subtle input nuances and can help address Western bias. However, a deeper understanding of this behavior is necessary for responsible model usage.

Throughout our work, we generally refer to the cultural and ethnic characteristics associated with certain language scripts as cultural biases. While bias is usually negatively connoted, it is important to clarify that we utilize this term in its neutral interpretation. Our intention is to portray how models' behaviors and outcomes can change when faced with non-Latin characters. More precisely, our understanding follows the definition of the American Psychological Association, which defines a bias as an inclination or predisposition for or against something (American Psychological Association, 2023).

For our analysis, we further distinguish between sensitive and non-sensitive biases. Sensitive biases encompass concepts and representations that, if subject to manipulations, could be construed as offensive and discriminatory within specific contexts. For example, altering the appearance of people towards a certain culture could promote racial stereotypes. The manipulation of such sensitive concepts has the potential to be exploited for harm, thus necessitating cautious handling. On the other hand, non-sensitive biases predominantly apply to broad concepts like food or architectural style. Although these concepts may be undeniably shaped by cultural influences, their nature is less inherently discriminatory. Nevertheless, establishing a clear distinction between sensitive and non-sensitive biases proves challenging due to the subjective nature of biases and their consequences, which are shaped by an individual’s societal background and personal experience.

In summary, we make the following contributions:

  • We demonstrate that text-guided image generation models are sensitive to character encodings and implicitly learn cultural biases related to different scripts during their training on large-scale public data.
  • We qualitatively and quantitatively show that by injecting as little as a single homoglyph at a random position, a user can skew the image generation and introduce cultural influences and stereotypes into the generated images.
  • We develop a novel homoglyph unlearning procedure to make already trained text encoders robust to homoglyph manipulations and remove their biased behavior.

The paper is organized as follows. We first provide an introduction to text-to-image synthesis, together with related work on fairness, biases, and security concerns for generative models in Section 2. We further devise our methodology on character manipulations in Section 3, along with metrics for assessing their influence, and introduce a novel homoglyph unlearning approach for mitigating homoglyph manipulations. Our empirical findings, which we present in Section 4 and expand on in Section 5, raise concerns about how much we actually understand about the internal function of multimodal models trained on public data, and how minor variations in the textual description by inserting a single non-Latin character at a random position may already affect the generation of images. Such insights are crucial for an informed and secure use, as text-to-image synthesis models become widely accessible and offer a vast range of applications.

Disclaimer: This paper depicts images of various cultural biases and stereotypes that some readers may find offensive. We emphasize that the goal of this work is to investigate how homoglyph manipulations can be exploited to trigger such biases, which are already present in text-guided image generation models, and, more importantly, how we could mitigate them. We do not intend to discriminate against identity groups or cultures in any way.

2 Background and Related Work

We first provide an overview of text-guided image generation models in Section 2.1, and present related research on biases and fairness of generative models in Section 2.2. We then formally introduce homoglyphs and describe related security attacks, including ones against multimodal machine learning models, in Section 2.3.

2.1 Text-To-Image Synthesis

In the last few years, training models on multimodal data has received much attention. Recent approaches for contrastive learning on image-text pairs are powered by a large number of images and their corresponding descriptions collected from the internet. One of the most prominent representatives is CLIP (Contrastive Language-Image Pre-training) (Radford et al., 2021), which combines a text and image encoding network. In a contrastive learning fashion, both components are jointly trained to match corresponding image-text pairings. After being trained on 400M internet-sourced samples, CLIP provides meaningful representations of images and their textual descriptions and is able to successfully complete a variety of tasks with zero-shot transfer and no additional training required (Radford et al., 2021). The learned representations can further facilitate other applications by incorporating CLIP into new models.

One such example is the recently introduced text-conditioned image generation model DALL-E 2 (Ramesh et al., 2022). In order to generate images from textual descriptions, the model first computes the CLIP text embeddings and then uses a prior to produce corresponding image embeddings. Finally, an image decoder (Song & Ermon, 2020; Ho et al., 2020) is applied to generate images conditioned on the computed image embeddings. Fig. 1 gives an overview of the DALL-E 2 pipeline for text-to-image synthesis. Besides DALL-E 2, various other text-guided image generation models have been introduced over the last couple of months. These include its direct predecessors GLIDE (Nichol et al., 2022) and DALL-E (Ramesh et al., 2021), Google's Imagen (Saharia et al., 2022) and Parti (Yu et al., 2022), Meta's Make-A-Scene (Gafni et al., 2022), and Midjourney (Midjourney, 2022).

Stable Diffusion (Rombach et al., 2022) is another text-to-image synthesis model that received a lot of attention since it is the first entirely open-sourced model, which makes it particularly relevant for research. All listed models rely heavily on large web-crawled datasets. While machine learning models continue to achieve astonishing new accomplishments, their reliability and fairness become a point of concern. We introduce existing research on biases and fairness in this context in the following section.

2.2 Biases and Fairness in Image Generation Models

A general overview of common problems and pitfalls associated with the collection and use of machine learning datasets is provided by Paullada et al. (2020). Birhane et al. (2021) further examined the multimodal LAION-400M dataset (Schuhmann et al., 2021), commonly used to train text-guided image generation models such as Stable Diffusion. The authors found a range of problematic and explicit samples depicting violent, pornographic, and racist motifs. Training large generative models on such datasets leads to the incorporation of stereotypes in the generated content. Bianchi et al. (2023) investigated this fact for Stable Diffusion and showed that for words like "terrorist" and "thug", the model generates images depicting stereotypical features of Muslim or African-American people. Even carefully selecting the prompt, so-called prompt engineering, seems insufficient to fully overcome these stereotypes. Our work extends these insights and demonstrates that not only words but also single characters are sufficient to introduce biases and stereotypes into generated images. As a mitigation strategy, our homoglyph unlearning approach also offers a technical solution to remove the biasing behavior of specific characters.

Since many text-to-image synthesis models are built around CLIP, it is worth noting that previous research also found CLIP itself to be biased in various ways. Wolfe & Caliskan (2022) demonstrated gender, age, and ethnicity biases in the CLIP embedding space. Wang et al. (2021) further illustrated that image retrieval based on CLIP is gender-imbalanced for gender-neutral queries. The AI Index Report (Zhang et al., 2022) also highlighted CLIP's various biases, including gender and historical biases. Throughout our analysis, we also identified CLIP as the main driving force behind character-induced biases.

2.3 Homoglyphs and Related Attacks in the Context of Machine Learning

In contrast to earlier research that focused on the biasing behavior of models for standard inputs, we analyze the impact of individual characters in multimodal text-to-image systems. For the first time, we demonstrate that these models capture cultural biases that can be easily triggered by non-Latin characters. We pay special attention to non-Latin homoglyphs as they are challenging to detect with the naked eye. Homoglyphs are letters and numbers that are difficult for people and optical character recognition systems to differentiate because they appear identical or very similar. For example, the written small letter l and the digit 1 are easy to confuse. The visual similarity of homoglyphs also depends a lot on the font used. Fig.2 depicts some examples of homoglyphs from various Unicode scripts, where minor differences are visible in direct comparison. However, a direct comparison is usually not possible for a user. Especially when characters from different scripts are inserted unexpectedly, it is almost impossible to recognize them.


Figure 2: Examples of Unicode homoglyphs from different scripts with their Unicode identifier and description. Whereas the visual differences between some characters as part of a sentence might be spotted by an attentive user or character recognition system, several characters look almost identical, especially in some fonts used by common command line interfaces and APIs. Corresponding homoglyph attacks are, therefore, difficult to spot by visual inspection.

Unicode (Unicode Consortium, 2022) homoglyphs play a special role in computer science and digital text processing. Unicode is a universal character encoding that is the standard for text processing, storage, and exchange in modern computer systems. The standard does not directly encode characters for specific languages but the underlying modern and historic scripts used by those languages. In a technical sense, Unicode establishes a code space and gives every character or symbol a unique identification number. Unicode homoglyphs formally describe characters from different scripts with separate hexadecimal identifiers but similar visual appearances. For instance, the Latin character A (U+0041), the Greek character Α (U+0391), and the Cyrillic character А (U+0410) appear identical but belong to different scripts. Hence, completely different Unicode identifiers are assigned to each character. While the three characters are visually the same for humans, information systems interpret each character differently, which has already led to Unicode security considerations (Davis & Suignard, 2014).
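The distinct code points behind visually identical glyphs can be inspected directly with the Python standard library; a minimal sketch using the three characters above:

```python
# Visually near-identical characters carry distinct Unicode code points,
# which is why a text encoder treats them as entirely different tokens.
import unicodedata

for ch in ["A", "\u0391", "\u0410"]:  # Latin A, Greek Alpha, Cyrillic A
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")
```

Running this prints three different code points and official character names for glyphs that render identically in most fonts.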

In the context of natural language, homographs are words that contain one or more homoglyphs from a separate Unicode script. A URL homograph attack, often referred to as script spoofing, involves an attacker registering domain names that appear to be legitimate domains but with some characters replaced by non-Latin homoglyphs. In order to install malware or conduct phishing attacks, a user may be tricked into opening the altered domain (Gabrilovich & Gontmakher, 2002; Simpson et al., 2020). Boucher et al. (2022) recently described character-based attacks on natural language processing (NLP) systems. The authors introduced imperceptible adversarial perturbations to texts by utilizing homoglyphs and invisible Unicode symbols, as well as by reordering and deleting control characters. The attack could successfully trick various NLP systems. In contrast, we demonstrate that text-to-image synthesis models are similarly vulnerable to homograph attacks and exhibit the intriguing and possibly undesired behavior of reflecting cultural biases in the generated images if non-Latin characters are present.
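A common first-line defense against such homograph attacks is mixed-script detection. The sketch below is purely illustrative (the `scripts_used` helper is our own; real systems use the Unicode confusables data rather than parsing character names):

```python
# Flag strings whose alphabetic characters come from more than one Unicode
# script. The first word of a character's official Unicode name usually
# names its script, e.g. "LATIN SMALL LETTER A" or "CYRILLIC SMALL LETTER A".
import unicodedata

def scripts_used(text):
    scripts = set()
    for ch in text:
        if ch.isalpha():
            scripts.add(unicodedata.name(ch).split()[0])
    return scripts

print(scripts_used("paypal"))       # a pure-Latin string
print(scripts_used("p\u0430ypal"))  # Cyrillic 'а' inserted: mixed scripts
```

A string mixing Latin and Cyrillic letters, as in the classic spoofed-domain example, is immediately flagged, while legitimate single-script text passes.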

Although homoglyph insertion into the inputs of a model does not necessarily constitute an attack, it might nevertheless be perceived or misused as such. When homoglyphs are used to trigger sensitive cultural biases in certain contexts, people may feel discriminated against by the cultural and ethnic stereotypes portrayed in the generated images. We, therefore, provide a brief overview of related attacks to place such use cases in the perspective of machine learning research. While various attacks (Szegedy et al., 2014; Goodfellow et al., 2015; Shokri et al., 2017; Struppek et al., 2022a; Gao et al., 2018; Struppek et al., 2022b) have been studied on traditional machine learning models, only a few have been proposed so far in the context of multimodal systems. Carlini & Terzis (2022) demonstrated that models contrastively trained on image-text pairs are equally susceptible to poisoning and backdoor attacks as conventional models. Hintersdorf et al. (2022) further showed that CLIP models memorize sensitive information about entities and leak private information about their training data. Millière (2022) developed the first approaches for crafting adversarial examples on text-guided image generation models by constructing fictitious words. However, unlike our homoglyph manipulations, all crafted words and text prompts are written in standard Latin, and humans probably recognize such adversarial examples quickly. Struppek et al. (2023) further emphasized that text-guided image generation models based on pre-trained text encoders are highly susceptible to backdoor attacks that take over the image generation process.

3 Methodology for Investigating Character Manipulation

We now introduce the basic methodology behind the investigated settings in Section 3.1 and the metrics used to quantify the cultural biases in Section 3.2. To remove the sensitivity to homoglyphs from an already trained model, we further propose a novel homoglyph unlearning approach in Section 3.3 by fine-tuning a model’s text encoder. Additional details to reproduce our experiments are stated in Section B.

3.1 Experimental Setting

In our investigations, we assumed the user or potential adversary to have black-box access to a text-guided image generation system, such as a text query API. In this way, the user can control a model's input and observe its output in the form of generated images. Fig. 1 illustrates the basic approach behind our analysis. We examined two popular models, namely DALL-E 2 and Stable Diffusion. For DALL-E 2, the official API generates four image variations for every single prompt. Throughout this paper, we always show all four DALL-E 2 generated images from a single query to avoid cherry-picking. Currently, it is not possible to specify a generation seed for DALL-E 2 and thus make it deterministic, which limits the possibility of reliably quantifying the effects of non-Latin characters. For Stable Diffusion, we relied on version 1.5 with fixed seeds. We note that during the work on this paper, updated Stable Diffusion versions have been released. We have continued working on version 1.5, but note that our findings generally also apply to the updated versions.

We experimented with various Unicode scripts for different languages. Whereas some scripts and their associated cultures might be more commonly known, such as Greek or Cyrillic, others might not. We, therefore, provide an overview of the different scripts we used throughout this work and their associated cultural background in Section A. We emphasize that we focused most of our analyses on homoglyphs, i.e., non-Latin characters that look similar to Latin characters, to investigate their effects in settings where a user is unlikely to spot the manipulations. Except for computing the Relative Bias and VQA Score, which we introduce in the next section, all images were generated with prompts in which we replaced a Latin character with a corresponding homoglyph. The caption of each figure in the paper states the prompt and which characters have been replaced. Whereas most experiments use homoglyphs of the Latin o as an example, we stress that the demonstrated effects also hold for other homoglyphs and non-Latin characters. We selected the Latin o since it offers the most homoglyphs, i.e., visually similar characters in other scripts.
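The manipulation itself amounts to a single-character substitution. The following sketch replaces the first Latin o in a prompt with a homoglyph (the mapping is a small, hand-picked illustration; the code points are standard Unicode):

```python
# A tiny sample of homoglyphs for the Latin small letter "o".
HOMOGLYPHS_OF_O = {
    "greek": "\u03BF",     # GREEK SMALL LETTER OMICRON
    "cyrillic": "\u043E",  # CYRILLIC SMALL LETTER O
}

def insert_homoglyph(prompt, script):
    # Replace only the first occurrence, mirroring a single-character attack.
    return prompt.replace("o", HOMOGLYPHS_OF_O[script], 1)

manipulated = insert_homoglyph("A photo of an actress", "greek")
print(manipulated)  # renders indistinguishably from the original prompt
print(manipulated == "A photo of an actress")  # False: different code points
```

Although the manipulated prompt is visually identical to the original, the text encoder receives a different token sequence, which is what triggers the biased generations studied here.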

In order to avoid failures and additional biases in the image generation due to unnecessarily complex prompts, we decided to keep the image descriptions simple throughout our experiments. Additionally, we verified that the models could generate meaningful images for all the corresponding Latin-only prompts. This design choice is motivated by the work of Marcus et al. (2022) and Conwell & Ullman (2022), who conducted qualitative analyses of DALL-E 2's generative capabilities on challenging text prompts. The authors empirically demonstrated that DALL-E 2 produces high-quality images for simple prompts but is often unable to understand entity relations, numbers, negations, and common sense in complex settings.

3.2 Quantifying the Influence of Homoglyphs and non-Latin Characters


Figure 3: The computation of our Relative Bias metric is done in four steps: 1.) An example prompt is taken from the dataset, and two variations of it are formed: one with only Latin characters, the other with one non-Latin character added. 2.) Images are generated for both prompts. 3.) The cosine similarity between each image and the input prompt, which explicitly states the expected cultural association of the inserted homoglyph, is computed. 4.) The Relative Bias is calculated as the percentage increase in cosine similarity.

We rely on three metrics to measure the cultural biases induced by homoglyphs and other non-Latin characters: the Relative Bias and VQA Score, two novel metrics to quantify how biased the generated images are on average, and the Word Embedding Association Test (WEAT) (Caliskan et al., 2017) for biases in text embeddings. For the first two metrics, we created three prompt datasets describing general concepts that are usually influenced by local cultures, namely People, Buildings, and Misc. The People dataset contains generic prompts that describe images of people and aims to check the effects on their appearance. The Buildings dataset provides textual descriptions of landmarks and architectural styles. The Misc dataset comprises prompts of various concepts that might reflect local culture, including clothing, food, and religion. Each dataset consists of ten different prompts, each containing a placeholder, e.g., "A small town"; see Section B.1 for an overview of the various prompts. We generated multiple images for each prompt $z$, once with the placeholder removed and once with it replaced by the character for which we want to measure its bias. In this setting, the non-Latin characters can be interpreted as adjectives adding implicit cultural features. Unlike the setting in Fig. 1, we did not replace any other parts or characters of the prompts to avoid additional influences on the metrics by removing or replacing parts of a sentence. We denote the generated images based on Latin-only prompts as $x$ and the ones with the non-Latin character inserted as $\tilde{x}$. For Stable Diffusion, the images with and without homoglyphs are generated with the same seed.

To measure the Relative Bias, we used a pre-trained CLIP model, namely OpenCLIP ViT-H/14 (Ilharco et al., 2021), and computed the similarity of each generated image with its corresponding prompt $z$. Here, we replaced the placeholder in the prompts with the adjective of the culture we expect to be associated with the non-Latin character's underlying script, e.g., Greek in the case of an omicron. We chose the OpenCLIP model trained on the LAION-2B English dataset (Schuhmann et al., 2022) to avoid interdependent effects with the text encoders based on OpenAI's CLIP ViT-L/14, which was trained on a non-public, smaller dataset with 400M samples (Radford et al., 2021). Let $S_c(x,z)=\frac{E(x)\cdot E(z)}{\|E(x)\|\,\|E(z)\|}$ be the cosine similarity between the CLIP embeddings $E$ of image $x$ and text prompt $z$. To quantify how a single character biases the generation toward its associated culture for $N$ prompts, we compute its Relative Bias as

$$\mathit{Relative\,Bias}=\frac{1}{N}\sum_{i=1}^{N}\frac{S_c(\tilde{x}_i,z_i)-S_c(x_i,z_i)}{S_c(x_i,z_i)}\,. \qquad (1)$$

Fig. 3 illustrates the concept behind the Relative Bias for a single example. The Relative Bias quantifies the relative increase in similarity between the given prompt $z_i$ that explicitly states the culture and the generated images $x_i$ and $\tilde{x}_i$ with and without the non-Latin character included in the text prompt. A higher Relative Bias indicates a stronger connection between this character and the associated culture. For example, a Relative Bias of 50% means that the cosine similarity between the prompt implying the culture and the $N$ images generated based on prompts with the associated character is 50% higher on average than for images generated with Latin-only prompts. We generated a hundred images for each of the prompt-character combinations on Stable Diffusion and four images on DALL-E 2 and computed the mean Relative Bias for all image-text pairs.
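Given precomputed embeddings, Eq. 1 reduces to a few lines of NumPy. The sketch below assumes embeddings have already been produced by some CLIP encoder (`relative_bias` and its arguments are our own illustrative names, not the paper's code):

```python
# Illustrative computation of the Relative Bias (Eq. 1) on embeddings.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def relative_bias(latin_imgs, homoglyph_imgs, prompts):
    """Mean relative increase in image-text similarity caused by a homoglyph.

    latin_imgs     : image embeddings x_i (Latin-only prompts)
    homoglyph_imgs : image embeddings x~_i (homoglyph inserted)
    prompts        : text embeddings z_i explicitly stating the culture
    """
    ratios = []
    for x, x_tilde, z in zip(latin_imgs, homoglyph_imgs, prompts):
        s_latin = cosine_similarity(x, z)
        s_homoglyph = cosine_similarity(x_tilde, z)
        ratios.append((s_homoglyph - s_latin) / s_latin)
    return float(np.mean(ratios))
```

A positive value means the homoglyph-manipulated images sit closer to the culture-stating prompt than the Latin-only images do, exactly the effect the metric is designed to capture.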

Building upon the Relative Bias is our VQA Score, which uses BLIP-2 (Li et al., 2023) for visual question answering. We feed the same images generated for the Relative Bias into BLIP-2 and ask if the model recognizes specific cultural characteristics in the images. For example, to check if an African homoglyph influences the appearance of people, we ask the model: Do the depicted people have African appearance? We then compute the VQA Score as the ratio in which the model answers yes to this question:

$$\mathit{VQA\,Score}=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\left[C(x_i,q)=\text{yes}\right]. \qquad (2)$$

Here, $C(x,q)$ denotes the answer of the BLIP-2 model for input image $x$ and question $q$, and $\mathbb{1}$ is the indicator function, which returns 1 if the model answers the question with yes. By comparing this ratio to the VQA Score for images generated without homoglyphs, we can measure to which extent homoglyphs bias the generation towards a certain culture. For example, a VQA Score of 75% for images generated with an African homoglyph means that the model recognized people with African appearance in 75% of the cases. The specific questions used to query BLIP-2 are stated in Section B.2.
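Eq. 2 is a simple "yes"-ratio. A minimal sketch, with `vqa_answer` as a hypothetical stand-in for querying BLIP-2 on one image:

```python
# Minimal sketch of the VQA Score (Eq. 2): the fraction of generated images
# for which the VQA model answers "yes" to the culture-related question.
def vqa_score(images, question, vqa_answer):
    answers = [vqa_answer(x, question) for x in images]
    return sum(a.strip().lower() == "yes" for a in answers) / len(answers)
```

Computing this score once for the Latin-only images and once for the homoglyph-manipulated images, and comparing the two, quantifies the shift a single character induces.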

To further quantify the biasing effects of single characters in the text embeddings, we adapted the Word Embedding Association Test (WEAT) proposed by Caliskan et al. (2017). WEAT is a statistical permutation test based on the Implicit Association Test from psychology research (Greenwald et al., 1998). The test is built around two sets of attribute words, denoted as $A, B$, and two sets of target words, denoted as $X, Y$. In its traditional application, attribute words might be, for example, gender-related terms like (man, male) and (woman, female). For our purposes, we interpret the attribute words as sets of characters from two different Unicode scripts, e.g., the Latin and Greek scripts. Target words in the gender example might be (programmer, astronaut) and (nurse, teacher). For our case, we used target words associated with specific cultures, e.g., (Western, American) and (Greek, Greece). See Section B.3 for a complete overview of all characters and keywords used to perform the tests. We note that there are not enough homoglyphs in the various scripts, so not all characters used in the attribute sets have a similarly looking Latin counterpart. However, since text encoders work with the character encodings and not their visual appearance, this fact does not limit the informative value of the test.

The WEAT test statistic is then computed as follows:

$$s(X, Y, A, B) = \sum_{x \in X} s(x, A, B) - \sum_{y \in Y} s(y, A, B), \qquad (3)$$

where $s(w, A, B)$ measures the association of a word $w$ with the attributes of $A$ and $B$ by computing

$$s(w, A, B) = \mathit{mean}_{a \in A}\, S_c(w, a) - \mathit{mean}_{b \in B}\, S_c(w, b). \qquad (4)$$

Here, $S_c$ describes the cosine similarity between the text embeddings of two words. WEAT tests the null hypothesis that there is no difference between the two target sets regarding their cosine similarity to the two attribute sets. The effect size $d$ is measured as the number of standard deviations that separate the target words in $X, Y$ with respect to their association with the attribute words $A, B$. A higher positive effect size indicates a stronger connection between characters and words in $A$ and $X$ and in $B$ and $Y$, respectively, and therefore a larger bias. It is computed as follows:

$$d = \frac{\mathit{mean}_{x \in X}\, s(x, A, B) - \mathit{mean}_{y \in Y}\, s(y, A, B)}{\mathit{std}_{w \in X \cup Y}\, s(w, A, B)}. \qquad (5)$$
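Equations (3) to (5) translate directly into numpy. In the sketch below, the character and word embeddings passed in would come from the text encoder under test:

```python
import numpy as np

def _cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def assoc(w, A, B):
    # s(w, A, B), Eq. (4): mean cosine similarity to attribute set A
    # minus mean cosine similarity to attribute set B.
    return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])

def weat_statistic(X, Y, A, B):
    # s(X, Y, A, B), Eq. (3): difference of summed target associations.
    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)

def effect_size(X, Y, A, B):
    # d, Eq. (5): difference of mean associations, normalized by the
    # standard deviation over all target words.
    s_x = [assoc(x, A, B) for x in X]
    s_y = [assoc(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y)
```

A positive effect size indicates that the targets in $X$ are closer to the attributes in $A$ (and $Y$ to $B$) than expected under the null hypothesis.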

3.3 Fine-Tuning Text Encoders with Homoglyph Unlearning


Figure 4: Visualization of our proposed homoglyph unlearning procedure. An already trained text encoder $E_{\mathit{inv}}$ is fine-tuned to minimize the embedding distance between prompts containing homoglyphs and their Latin-only counterparts. A copy of the initial model with frozen weights is used as a teacher model to guide the optimization.

Before presenting our empirical findings, let us explore how to eliminate the biasing behavior of specific homoglyphs. The biasing behavior of non-Latin characters can be seen as a feature but might also be misused to create harmful stereotypical images. We discuss this polarity of homoglyph-induced biases in further detail in Sec. 5.1. If users want to remove the influence of homoglyphs in their application, whether to reduce the risk of harmful impacts or for any other reason, we offer a fast and computationally cheap homoglyph unlearning approach. Text-to-image synthesis models usually rely on separately trained text encoders to preprocess the input prompts and guide the generation process with these encodings. We expect these text encoders to react sensitively to character encodings, which biases the image generations when non-Latin characters are present. To address this issue, it is reasonable and more cost-effective to modify only the text encoder rather than the entire generative model. To eliminate the biasing behavior of specific homoglyphs, we propose a novel approach that updates the weights of an already trained encoder. Although robust model behavior against different character encodings could also be included in the encoder's initial training, such approaches have two drawbacks. First, robust model training complicates the training procedure, may hurt the model's performance, and could make the convergence process unstable. Second, a freshly trained text encoder almost certainly computes different embeddings than the current encoder used to guide the image generation. As a result, the generative model would also need to be re-trained or at least adapted to the new embeddings. We note that simply restricting model inputs to Latin characters can avoid character-induced biases in interfaces like DALL-E 2. However, limiting the input characters to a single script prevents users from describing concepts from their local scripts that cannot be expressed analogously with purely Latin characters, and therefore, such an approach excludes some cultural concepts. Also, for models deployed locally without an API layer, this restriction could easily be circumvented.

Inspired by backdoor attacks on pre-trained text encoders (Struppek et al., 2023), we propose a novel fine-tuning strategy that enables an already trained text encoder to learn to map a set of homoglyphs $H$ to their Latin counterparts, making the model invariant to these characters. Our method, which is illustrated in Fig. 4, starts with two text encoder models, $E$ and $E_{\mathit{inv}}$, both initialized with the same pre-trained encoder weights used by the generative model. We then only update the weights of $E_{\mathit{inv}}$ to make it invariant against certain homoglyphs and keep the weights of $E$ fixed. To do this, we employ a teacher-student approach and minimize the following loss function:

$$\mathcal{L}_{\mathit{unlearning}} = \frac{1}{|B|} \sum_{z \in B} -S_c\left(E(z), E_{\mathit{inv}}(z)\right) + \sum_{h \in H} \frac{1}{|B_h|} \sum_{z' \in B_h} -S_c\left(E(z'), E_{\mathit{inv}}(z' \oplus h)\right). \qquad (6)$$

Here, $S_c$ denotes the cosine similarity between the text embeddings computed by the two encoders. In each step, prompt batches $B$ and $B_h$ are sampled from a suitable English text dataset. The first term ensures that for prompts $z \in B$, the computed embeddings of $E_{\mathit{inv}}$ stay close to the embeddings of $E$ and that the general utility of the encoder is preserved. The second term updates $E_{\mathit{inv}}$ to map embeddings for prompts containing a homoglyph $h \in H$ to the corresponding embedding of their Latin counterpart. The operator $\oplus$ denotes the replacement of a single pre-defined Latin character in a prompt $z' \in B_h$ by its corresponding homoglyph $h$, e.g., a randomly selected Latin o in $z'$ is replaced by a Greek ο. Therefore, the encoder learns to interpret homoglyphs the same way as their Latin counterparts and maps a prompt containing homoglyphs to the same embedding as if the prompt had been written using only Latin characters.
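The loss in Eq. (6) can be sketched with placeholder encoders. In the actual setup, `E` and `E_inv` are the frozen teacher and trainable student CLIP text encoders, and the cosine similarities would be computed on differentiable tensors rather than numpy arrays:

```python
import numpy as np

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def insert_homoglyph(prompt, latin, homoglyph):
    # The ⊕ operator: replace one occurrence of a pre-defined Latin
    # character with its homoglyph (here simply the first occurrence).
    return prompt.replace(latin, homoglyph, 1)

def unlearning_loss(E, E_inv, batch, homoglyph_batches):
    """Eq. (6): E (teacher, frozen) and E_inv (student) should produce
    similar embeddings for clean prompts, and E_inv should map homoglyph
    prompts to the embedding of their Latin-only counterpart.
    `homoglyph_batches` maps (latin_char, homoglyph) pairs to prompt batches."""
    loss = np.mean([-cos_sim(E(z), E_inv(z)) for z in batch])
    for (latin, h), B_h in homoglyph_batches.items():
        loss += np.mean([
            -cos_sim(E(z), E_inv(insert_homoglyph(z, latin, h))) for z in B_h
        ])
    return loss
```

With a student that is perfectly invariant to the homoglyphs, every cosine term equals one and the loss reaches its minimum of $-(1 + |H|)$.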

4 Manipulating the Image Generation with Homoglyphs

We now empirically explore the effects of homoglyphs and non-Latin characters in general on text-to-image synthesis. We start our investigation of cultural biases with a qualitative and quantitative evaluation in Section 4.1. We then identify in Section 4.2 a generative model’s text encoder as the main source for this biased behavior. Eventually, we demonstrate in Section 4.3 that our proposed homoglyph unlearning procedure successfully improves a model’s robustness against homoglyph manipulations without hurting its overall capabilities. While we focus in this section on the general biasing effects of characters, we provide a more nuanced discussion of the social impact and ethical considerations in Section 5.

4.1 Inducing Cultural Biases into the Image Generation Process

[Figure 5 image grid. Top row (DALL-E 2): (a) Latin A (U+0041), (b) Greek Α (U+0391), (c) Scandinavian Å (U+00C5). Middle row (DALL-E 2): (d) Latin o (U+006F), (e) Greek ο (U+03BF), (f) Korean ㅇ (U+3147). Bottom row (Stable Diffusion): (g) Latin o (U+006F), (h) Korean ㅇ (U+3147), (i) African ọ (U+1ECD).]
Figure 5: Examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with "A city in bright sunshine" (top row) and "Delicious food on a table" (middle row), and Stable Diffusion with "A photo of an actress" (bottom row). Each query differs only by the underlined characters A and o, respectively. Most inserted homoglyphs are visually barely distinguishable and are rendered very similarly to their Latin counterparts in APIs.

We first qualitatively demonstrate the effects of homoglyphs injected into subordinate words for image generations with DALL-E 2 and Stable Diffusion v1.5. We focus on single characters within words that are not crucial to the overall image content, such as articles or prepositions. By this, we demonstrate the intriguing effect that homoglyphs induce cultural biases and implicitly guide the image generation accordingly without changing the meaning of the prompt or explicitly defining any additional cultural attributes in the query.

For a qualitative evaluation, the top row of Fig. 5 illustrates the biases induced into DALL-E 2 by replacing an article in the generic description of a city with a Greek and a Scandinavian homoglyph, respectively. Whereas the unmodified prompt with Latin-only characters generates city images of various architectural styles, inserting the Greek Α (U+0391) generates images of cities with traditional Greek architectural features. Two of the results even look like Athens, with its Mount Lycabettus visible. For the Scandinavian character Å (U+00C5), the images depict small and colorful houses located by the water, a characteristic appearance of Scandinavian cities like Trondheim or Bergen. The middle row of Fig. 5 depicts results of DALL-E 2 for the non-sensitive domain of food, and the bottom row outputs of Stable Diffusion for the arguably more sensitive domain of female-looking faces. Again, inserting only a single homoglyph already strongly biases the image generation, and nearly all generated images depict cultural characteristics. Biasing the models with single homoglyph replacements can be used in various contexts, as additional examples in Section D and Section E demonstrate.

Overall, we found that both models behave similarly in the face of homoglyph replacements and integrate cultural biases into their generated images. However, the induced biases are sometimes less clearly depicted in images generated by Stable Diffusion compared to the results on DALL-E 2. We further quantified the biasing effects on Stable Diffusion v1.5 in Fig. 6 and Fig. 7 with our Relative Bias and VQA Score metrics, respectively, using five homoglyphs. We inserted the non-Latin characters between words of the prompts and did not replace existing characters to avoid additional confounding factors in the metric computation. The results show that different homoglyphs trigger biases in different domains. For example, the Greek homoglyph mainly influences the generation of buildings, which is to be expected since the Greek architectural style is strongly influenced by Ancient Greece. Similarly, the Korean and African homoglyphs have a strong impact on the visual appearance of people but also markedly influence other domains. Whereas the Arabic homoglyph induces biases in all three domains, the Cyrillic homoglyph induces overall lower but still noticeable biases.


Figure 6: Relative Bias measured for five homoglyphs from different scripts on Stable Diffusion v1.5. The light bars state the results for the standard text encoder. The dark bars indicate the results after performing our homoglyph unlearning procedure on a single encoder for the five homoglyphs. As is evident, homoglyph unlearning successfully removes nearly all of the biasing behavior.


Figure 7: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v1.5 without homoglyph unlearning. The score is stated for images generated with Latin-only prompts (dark bars) and prompts that contain a single homoglyph (light bars). Overall, the results are consistent with the patterns indicated by Relative Bias.

In most cases, the homoglyphs noticeably influence the generated images. However, the occurring biases cannot always be clearly described and assigned to a specific culture and are sometimes more subtle, such as effects on color schemes or environments, and therefore hard to quantify. We repeated the Relative Bias and VQA Score computation using DALL-E 2, other versions of Stable Diffusion, namely v1.4 and v2.1, and the multilingual AltDiffusion-m18, and present the results in Section C.1 and Section C.2, respectively. To verify that the choice of adjectives describing the individual cultures is flexible, we repeated the experiments for the African homoglyph and used adjectives corresponding to the largest African countries instead of the general African adjective. The resulting values, which we also depict in Section C.1, confirm that the adjective choice usually does not change the general bias patterns.

A reliable measurement of the metrics on DALL-E 2 is currently not possible since the API does not support deterministic image generation with seeds and, therefore, might generate images of significantly different styles and content for the same prompt. Mitigating influences due to the randomness of the process would require generating numerous images for each prompt, which is, in turn, cost-intensive. However, the results for DALL-E 2 still paint a picture similar to our experiments on Stable Diffusion, but the range and variance of the values are much higher.

We further found the biases to be stronger and clearer for those homoglyphs that relate to a more narrowly defined culture, such as characters from the Greek script, which are limited to the Greek language spoken in Greece and Cyprus. In contrast, the character ọ (U+1ECD) is part of the Vietnamese language as well as the International African Alphabet used by various African languages. Therefore, this homoglyph induces Vietnamese biases into DALL-E 2, but images generated by Stable Diffusion reflect African culture. Thus, the same characters of a script can affect the computed text embeddings and the corresponding images quite differently when the characters are used in several cultural settings. However, the biasing effects are still present in the models. We refer to Appendices D and E for a larger collection of visual examples generated with DALL-E 2 and Stable Diffusion, respectively.

4.2 Text Encoders Are the Driving Force behind Homoglyph-Induced Biases


(a) t-SNE visualization of character embeddings computed by CLIP.


(b) Embedding differences between characters point in a script's cultural direction and enable cultural guidance in the embedding space.

Figure 8: The CLIP text encoder recognizes different scripts and projects their characters into separate areas of the embedding space, as the t-SNE plot in Fig. 8(a) illustrates. To further demonstrate the biasing effects, we can add the embedding differences between Latin and non-Latin characters to the text embedding of Stable Diffusion to induce cultural biases without changing the textual description. We illustrate this in Fig. 8(b) and provide additional results in Section E.4.

Next, we explore the reasons behind the biasing behavior of homoglyphs and non-Latin characters in general. We expect the models’ text encoders to be the main biasing factor, since their interpretations of distinct non-Latin characters in the embedding space might be linked to specific cultures. To verify this assumption, we analyzed the embedding space of the CLIP text encoder, which is used by both DALL-E 2 and Stable Diffusion.

As a first step, we computed the text embeddings for various Latin and non-Latin characters and visualized them in a t-SNE (van der Maaten & Hinton, 2008) plot in Fig. 8(a). Characters from different scripts are clustered together, which means that the text encoder is able to distinguish characters from specific scripts and reflects these differences in its computed embeddings. We exploited this fact and computed cultural directions as the difference between embeddings for Latin and non-Latin characters. We then added these embedding directions to the embedding of a standard English text prompt. Fig. 8(b) demonstrates the general principle and some results for inducing Korean and Arabic biases. The added embedding shift induces cultural biases similar to our previous experiments with homoglyphs included in the text prompts. We conclude that the added directions based on the non-Latin characters point towards the cultures associated with the scripts, confirming our assumption that the text encoder is indeed the driving force behind the biasing behavior. Section E.4 provides more samples generated with embedding manipulations. To statistically evaluate the hypothesis that characters from distinct scripts are associated with specific cultures, we further conducted the WEAT association test for word embeddings, as described in Section 3.2.
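The embedding manipulation described here is simple vector arithmetic on the prompt embedding. A minimal sketch with placeholder embeddings (the real ones would come from the CLIP text encoder):

```python
import numpy as np

def cultural_direction(latin_char_embs, script_char_embs):
    """Direction in embedding space pointing from Latin characters toward a
    non-Latin script, computed as the difference of mean character embeddings."""
    return np.mean(script_char_embs, axis=0) - np.mean(latin_char_embs, axis=0)

def bias_prompt_embedding(prompt_emb, direction, strength=1.0):
    # Shift a standard English prompt embedding along the cultural direction
    # to induce the associated bias without changing the prompt text itself.
    return prompt_emb + strength * direction
```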

Table 1: WEAT hypothesis test $p$-values and effect sizes $d$ for characters from five non-Latin scripts. The results for the standard CLIP encoder (CLIP ViT-L/14) indicate strong and significant biasing effects with all $p$-values $p < 0.025$ and, except for African characters, even $p < 0.01$. For the multilingual CLIP (M-CLIP) encoder, WEAT states no significant biasing behavior.

The WEAT results for the CLIP ViT-L/14 text encoder of Stable Diffusion v1.5 and characters from five scripts (Greek, Cyrillic, Arabic, Korean, and African) are presented in Table 1. In all five cases, a strong biasing effect, as measured by effect size $d$, is evident and statistically significant, as supported by the low $p$-values. The Greek, Cyrillic, and Arabic scripts exhibit the strongest biasing effects, while characters from the African script show a lower but still significant effect size. We assume that this is due to the fact that the characters investigated are not used exclusively by African languages, and thus other biasing influences may be present.

We further wanted to assess whether the same biasing effects are still present for text encoders explicitly trained on multilingual data. For this case, we repeated the WEAT computation on a multilingual CLIP encoder (M-CLIP) (Carlsson et al., 2022) trained on data from a hundred different languages. As the results in Table 1 demonstrate, the multilingual encoder shows no significant biasing behavior. We therefore conclude that explicitly training on multilingual data might mitigate biasing behaviors of homoglyphs compared to training on primarily English texts that occasionally contain non-English characters or words.

While using a multilingual text encoder like M-CLIP in combination with Stable Diffusion is a promising avenue to overcome undesired character biases, the text encoder cannot simply be replaced by the M-CLIP encoder due to mismatching embedding spaces. Training diffusion models around multilingual encoders offers an interesting avenue for future research but requires vast amounts of computing capacity and cannot be realized offhand. To still show that multilingual data indeed mitigates the influence of specific character encodings, we repeated the Relative Bias computation on the recent AltDiffusion-m18 (Ye et al., 2023), a diffusion model conceptually identical to Stable Diffusion but supporting 18 different languages, including Korean, Arabic, and Russian. The results, which we state in Section C, demonstrate that the model indeed exhibits significantly lower Relative Bias scores and support our assumption that training on multilingual data successfully mitigates the character-induced biases.

Overall, transformer-based language models are well-known for their ability to learn the intricacies of language when provided with ample capacity and a sufficient amount of training data (Radford et al., 2018). Therefore, it is plausible that text encoders in multimodal systems are able to learn the nuances of various cultural influences from a relatively small number of training samples. Diffusion models, on the other hand, provide strong mode coverage and sample diversity (Nichol & Dhariwal, 2021), which allows for the generation of images that reflect the various cultural biases encoded in text embeddings. The interaction of both components plays a crucial role in explaining the culturally influenced behavior of the investigated models in the presence of homoglyphs.

4.3 Increasing the Robustness of Text Encoders with Homoglyph Unlearning

After identifying the text encoder as the main reason for the biasing effects, we next demonstrate the effectiveness of our homoglyph unlearning procedure to mitigate biases induced by homoglyphs. Homoglyph unlearning allows a user to remove the biasing effects of a set of homoglyphs and updates the encoder to interpret these characters like their Latin counterparts. We evaluated its effectiveness on the CLIP ViT-L/14 text encoder as part of Stable Diffusion v1.5. As a dataset with English prompts, we took the text samples from the LAION-Aesthetics v2 6.5+ dataset (Schuhmann et al., 2022) and skipped samples containing the homoglyphs we want to unlearn. We then fine-tuned the pre-trained CLIP encoder for 500 steps. During each step, we sampled 128 Latin-only prompts $B$ to compute the first term of the loss function and maintain general usability. We further sampled an additional set $B_h$ of 128 prompts for each of the five homoglyphs $h \in H$ stated in Fig. 6 and replaced a single Latin o in each prompt with its homoglyph counterpart $h$. See Section B for more detailed training hyperparameters. It is important to note that our focus is on unlearning homoglyphs, which are characters that have a similar appearance to their Latin counterparts, rather than non-Latin characters in general. The approach is quite fast and takes only about 25 minutes on a single NVIDIA A100-80GB.

To quantify the success of the approach, we again computed the Relative Bias with the updated text encoder after the homoglyph unlearning process. The results are depicted in Fig. 6 by the dark bars and demonstrate that the homoglyph unlearning procedure successfully removes almost all of the biasing behavior without hurting the general image quality. Only in some cases, e.g., for the African ọ, some small effects remain present. However, compared to the standard text encoder, the overall relative distortion has been drastically reduced.

Table 2: FID scores and zero-shot ImageNet accuracies for the standard encoder and the encoder after the homoglyph unlearning procedure was performed. Both metrics underline that the homoglyph unlearning does not hurt the model’s utility, e.g., the ImageNet top-1 accuracy only decreases by about 1 percentage point (pp).

To ensure that the homoglyph unlearning procedure does not hurt the encoder's utility, we computed the FID score (Heusel et al., 2017) on MS-COCO 2014 (Lin et al., 2014) to measure the fidelity of the generated images. We follow the standard evaluation protocol for text-to-image models. We further computed the encoder's zero-shot prediction performance on the common ImageNet benchmark (Deng et al., 2009). For this, we coupled the updated encoder with the corresponding CLIP image encoder and followed the standard evaluation procedure from the literature (Radford et al., 2021). More details are provided in Section B. The results in Table 2 demonstrate that the unlearning approach only marginally influences the encoder's behavior. The FID score increased by 0.17, and the top-1 ImageNet accuracy decreased by only 1.16 percentage points. To provide a qualitative check, we randomly sampled images generated for the FID computation and compared the results with the encoder before and after the homoglyph unlearning. Images are depicted in Section E.6. Overall, the updated model retains the same image quality after the unlearning procedure, and only small feature variations are apparent in the generated images.
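The zero-shot evaluation follows the standard CLIP protocol: embed one prompt per class (e.g., "a photo of a {class}") and pick the class with the highest cosine similarity to the image embedding. A minimal sketch with placeholder embeddings:

```python
import numpy as np

def zero_shot_predict(image_emb, class_prompt_embs):
    """Predict the class whose text-prompt embedding is most similar to the
    image embedding, as in the standard CLIP zero-shot evaluation protocol."""
    sims = [
        np.dot(image_emb, t) / (np.linalg.norm(image_emb) * np.linalg.norm(t))
        for t in class_prompt_embs
    ]
    return int(np.argmax(sims))
```

Running this with the text embeddings of the fine-tuned encoder and comparing accuracy against the original encoder gives the utility drop reported in Table 2.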

Although some disparities may exist between images generated with and without homoglyphs, it is important to note that these disparities do not reflect cultural biases anymore but rather slight variations in the representation of the same image content. In summary, homoglyph unlearning is able to mitigate the sensitivity of pre-trained text encoders to homoglyphs while maintaining the model utility and image quality and without requiring full re-training.

We expect our approach to be directly applicable to other models relying on CLIP, including DALL-E 2. Moreover, a revised text encoder can simply be plugged into any application built around the same encoder model before the weight updates, since the computed embeddings stay close to the initial ones. This allows one to use the fine-tuned encoder, e.g., for an updated version of Stable Diffusion or other applications such as image retrieval (Beaumont, 2021) without any further adjustments required.


Figure 9: Images generated by Stable Diffusion v1.5 before and after applying homoglyph unlearning. The following prompts were used: "A photo of a child" (left) and "A painting of a historical site" (right). Each query differs only by the underlined o. The results demonstrate that homoglyph unlearning successfully removes the biasing effects of the homoglyphs, and the homoglyph images look like their Latin-only counterparts without any degradation of image quality.

In addition to our homoglyph unlearning approach, we envision two basic approaches to avoid model biases induced by homoglyph injections. The first, simple solution is a technical Unicode script detector built into the model API. For example, the API could scan each text input for any non-Latin characters or non-Arabic numerals and either block the queries or inform the user about the presence of such symbols. In addition, queries with detected homoglyphs could be purified by simple character mappings to valid characters. However, such approaches would generally prevent non-Latin inputs and make it impossible for people to describe concepts from their own languages, such as names or places, if no Latin-written counterpart exists.
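Such a detector can be approximated with the Python standard library. Since `unicodedata` exposes no script property directly, this sketch inspects the official character name, which starts with the script name (e.g., GREEK SMALL LETTER OMICRON):

```python
import unicodedata

def non_latin_characters(prompt):
    """Flag characters from non-Latin scripts in a prompt. ASCII characters
    (letters, digits, punctuation, whitespace) pass; non-ASCII characters
    pass only if their Unicode name marks them as Latin, so that accented
    letters like é are still allowed."""
    flagged = []
    for ch in prompt:
        if ch.isascii():
            continue
        name = unicodedata.name(ch, "UNKNOWN")
        if not name.startswith("LATIN"):
            flagged.append((ch, name))
    return flagged
```

An API could block or purify a query whenever this function returns a non-empty list, at the cost of also rejecting legitimate non-Latin input, as discussed above.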

As a second solution, we propose training text encoders on multilingual data to make them more robust to different character encodings. As we demonstrated in Section 4.2, the multilingual M-CLIP model shows no statistically significant biasing behavior in the presence of homoglyphs, and our results on AltDiffusion-m18 in Section C underline this assumption. We, therefore, assume that text encoders trained on multilingual data compute more stable embeddings for non-Latin characters, leading to more robust generations.

5 Discussion, Challenges, and Conclusion

We now further discuss the social impact of our findings, including possible malicious applications. We also raise the question of whether this model property is necessarily harmful, and point out some limitations of our research.

Figure 10 panels (one homoglyph per column): (a) Latin o (U+006F), (b) Korean ㅇ (U+3147), (c) African ọ (U+1ECD).

Figure 10: Examples of potential misuse of homoglyph manipulations to change the depicted appearance of people. The images in the top row are generated with the prompt "A police mugshot of a man", the images in the bottom row with "A photo of a construction worker". We then replaced only the underlined character with a homoglyph from the Korean and African scripts, respectively. Such manipulations could lead to the generation of harmful content; for example, construction workers might consistently be depicted as people with dark skin tones.

5.1 Social Impact and Ethical Considerations

Building upon our initial definition of sensitive and non-sensitive biases, we will delve into the positive and negative implications arising from the models’ susceptibility to character encodings. It is important to underline that drawing a definitive line between harmful and benign applications is challenging, given that outcomes generated by the model can be interpreted in various manners based on individuals’ diverse backgrounds. In the subsequent sections, we will discuss both perspectives, detailing potential impacts, and addressing the dual use of homoglyph manipulations.

Homoglyph Manipulations Can Reinforce Stereotypes.

Our results from the previous section demonstrate that subtle character substitutions are sufficient to alter the presentation of sensitive image attributes, notably in the context of human appearances. This section shifts our attention to the sensitive biases that can arise through the exploitation of homoglyphs in text-to-image systems. Homoglyph manipulations may build and reinforce stereotypes, which describe a widely held but fixed and oversimplified image or idea of a particular type of person or thing (Bordalo et al., 2023). For instance, consider the generation of images depicting construction workers, a profession considered to carry low occupational prestige (Goyder & Frank, 2007; Han et al., 2023). In this case, a consistent portrayal of individuals with darker skin tones might be induced by surreptitiously injected homoglyphs.

To illustrate the practicability of such misuse, Fig. 10 showcases generated images of both police mugshots and construction workers. Notably, these depictions have been manipulated through single homoglyph substitutions to alter the people’s appearances. Such biased portrayals have the potential to create distorted perspectives on the world, fortify viewers’ implicit biases, and reinforce misguided beliefs that a single cultural background serves as the norm or representative standard. Stereotypical representations of this kind can lead to a distorted global perspective that hinders the promotion of cross-cultural understanding.

From an alternative perspective, homoglyph manipulations can also deliberately suppress cultural diversity by forcing the generation to represent only certain cultures. By excluding other cultural contexts, the generative model inadvertently fosters sentiments of exclusion and marginalization among individuals not aligned with the showcased culture. This exclusionary practice contributes to a sense of inequality and inadequate representation, significantly affecting individuals belonging to underrepresented cultural groups. When defining model fairness as the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics (Mehrabi et al., 2022), both of the aforementioned circumstances hold the potential to promote model unfairness.

In this sense, we argue that using homoglyphs to manipulate text prompts creates, to some extent, a potential security breach in the realm of text-to-image synthesis. This vulnerability arises from the possibility that a malicious prompt tool or prompt database could deliberately infuse generated images with sensitive and generally undesired cultural stereotypes. It might be subtly achieved by strategically inserting homoglyphs within subordinate words or as supplementary inputs, all while remaining imperceptible to end users. With the widespread distribution of text-to-image models and their generated images over social media and other communication channels, stereotypical images could be introduced to a broad audience with manageable effort. As generative AI models become more prevalent across various domains, the inclusion of stereotypes in such models could significantly impact both users and model providers. One critical domain where character-induced biases can exert serious effects is multi-modal chatbots like GPT+DALL-E 3 (Betker et al., 2023) and LLaVA-Interactive (Chen et al., 2023). Furthermore, entire industries, such as the film (Heaven, 2023) and video game sectors (Liao, 2023), are increasingly incorporating text-guided generative AI tools.

Given that many of the mentioned applications are built around pre-trained text encoders, e.g., image retrieval systems (Beaumont, 2021), we anticipate that these systems are similarly susceptible to homoglyph manipulations. In the context of image retrieval, adversarial prompt manipulations might influence the retrieved image contents and skew them toward a certain culture. In all discussed scenarios, our homoglyph unlearning procedure stands as a pragmatic remedy, effectively counteracting undesirable bias effects introduced by the presence of homoglyphs. Yet, we also want to stress that the influences of non-Latin characters are not strictly negative – they can also serve as a way to represent local cultures within the generated images, an important point we explore in the next section.

It’s Not a Bug, It’s a Feature?

Models trained on data that lacks diversity and covers only a narrow spectrum of representations are known to inherit the resulting biases. Within the context of text-to-image models, the vast datasets primarily consist of samples with English captions, restricting the inclusion of non-Western cultural depictions in the data. That is why text-to-image models like DALL-E 2 and Stable Diffusion favor the generation of images reflecting Western culture, especially that of the United States (Bianchi et al., 2023).

Nevertheless, our demonstrations clearly show that including individual characters from non-Latin Unicode scripts can steer pre-existing Western biases toward alternative cultural spheres. This strategic integration of non-Latin characters facilitates the incorporation of features characteristic of different cultural backgrounds. It is highly questionable whether general-purpose models like DALL-E 2 should provide users with Western biases by default, regardless of the user’s individual cultural background. Inserting characters from their native language script into a prompt offers users a simple technique to guide and customize the image generation process. Through this uncomplicated technique, users can effectively tailor the generated images to reflect their own cultural background. Such personalized adaptations encompass a wide spectrum of cultural elements, ranging from the appearances of individuals to architectural styles, religious symbolism, culinary dishes, clothing preferences, and many more.
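The character substitution itself is trivial to script. The following sketch (the helper name is hypothetical) replaces the n-th occurrence of a Latin letter with a homoglyph used in the paper, here the Hangul ㅇ (U+3147) from Fig. 10:

```python
def inject_homoglyph(prompt, target, homoglyph, occurrence=0):
    """Replace the `occurrence`-th match of `target` with `homoglyph`;
    return the prompt unchanged if there are not enough matches."""
    pos = -1
    for _ in range(occurrence + 1):
        pos = prompt.find(target, pos + 1)
        if pos == -1:
            return prompt
    return prompt[:pos] + homoglyph + prompt[pos + len(target):]

# Replace the 'o' in "of" (the third 'o') with HANGUL LETTER IEUNG (U+3147).
latin_prompt = "A photo of a child"
biased_prompt = inject_homoglyph(latin_prompt, "o", "\u3147", occurrence=2)
print(biased_prompt)  # A photo ㅇf a child
```

Depending on intent, the same one-line substitution serves either as a customization feature or, as discussed above, as an imperceptible prompt manipulation.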

Most biases introduced by non-Latin characters affect usually non-sensitive aspects of culture, such as food or architectural styles. In effect, these biases exert only a minor influence over the portrayal of these concepts and are unlikely to be inherently harmful. From this vantage point, the biasing effects induced by non-Latin characters might even be deemed advantageous, particularly within the context of models that retain a pronounced Western bias. This feature could prove desirable as long as the underlying models continue to exhibit these imbalances in favor of Western cultural norms. Nonetheless, the negative potential of script injection should still be kept in mind.

5.2 Challenges and Future Research

In this work, we focused our investigation on short prompt descriptions to ensure that the models are generally able to reflect the described concepts in the generated images. We note that with increasing prompt complexity, the biasing effects of non-Latin characters can decrease and might no longer be perceivable. However, the insertion of multiple non-Latin characters can still partially increase the biasing effects. Also, the induced biases can be suppressed by strong, explicitly stated concepts, such as names of celebrities or attributes like hair color that conflict with certain cultural backgrounds. We show some examples of these interdependent effects in Section E.5.

While we examined DALL-E 2 and Stable Diffusion as well-known representatives of text-to-image generation models, it remains to be empirically investigated whether other text-conditional image generation models, such as Google’s Parti (Yu et al., 2022) and Imagen (Saharia et al., 2022), or Meta’s Make-A-Scene (Gafni et al., 2022), exhibit similar behavior for non-Latin characters. Unfortunately, these models were not publicly available at the time of writing. We, therefore, leave the investigation of a wider variety of models to future research. However, the fact that all these models were trained to extract image semantics from large collections of written descriptions obtained from the internet, which almost certainly contain non-Latin letters if not rigorously filtered, suggests that they behave similarly.

5.3 Conclusion

We demonstrated that multimodal models implicitly pick up cultural characteristics and biases linked to various Unicode scripts when trained on huge datasets of image-text pairs from the internet. A single non-Latin character in the input prompt can already cause the generated images to reflect biases associated with the character’s script. Although this surprising model behavior provides valuable insights into the nuanced information learned from a model’s training data and offers an intriguing feature that allows users to incorporate cultural influences, it may also be exploited by malicious actors to unnoticeably reinforce stereotypes in generated images. To address this issue, we proposed homoglyph unlearning, which enables users to make text encoders of generative models invariant to homoglyphs without requiring full retraining. We believe that our research will contribute to a better understanding of multimodal models and promote the creation of more robust and fair systems.

Reproducibility Statement

Acknowledgments

The authors thank Daniel Neider for fruitful discussions. This research has benefited from the Federal Ministry of Education and Research (BMBF) project KISTRA (reference no. 13N15343), the Hessian Ministry of Higher Education, Research, Science and the Arts (HMWK) cluster projects “The Third Wave of AI” and hessian.AI, from the German Center for Artificial Intelligence (DFKI) project “SAINT”, as well as from the joint ATHENE project of the HMWK and the BMBF “AVSV”.

References

  • American Psychological Association, (2023) American Psychological Association (2023). APA Dictionary of Psychology. https://dictionary.apa.org. Accessed: 17-August-2023, Keywords: bias, stereotype.
  • Beaumont, (2021) Beaumont, R. (2021). Clip retrieval. https://github.com/rom1504/clip-retrieval. Accessed: 12-January-2023, version 2.34.2.
  • Betker et al., (2023) Betker, J., Goh, G., Jing, L., Brooks, T., Wang, J., Li, L., Ouyang, L., Zhuang, J., Lee, J., Guo, Y., Manassra, W., Dhariwal, P., Chu, C., Jiao, Y., & Ramesh, A. (2023). Improving image generation with better captions. Accessed: 20-November-2023.
  • Bianchi et al., (2023) Bianchi, F., Kalluri, P., Durmus, E., Ladhak, F., Cheng, M., Nozza, D., Hashimoto, T., Jurafsky, D., Zou, J., & Caliskan, A. (2023). Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.1493–1504).
  • Birhane et al., (2021) Birhane, A., Prabhu, V.U., & Kahembwe, E. (2021). Multimodal datasets: misogyny, pornography, and malignant stereotypes. arXiv preprint, arXiv:2110.01963.
  • Bordalo et al., (2023) Bordalo, P., Coffman, K., Gennaioli, N., & Shleifer, A. (2023). Stereotypes. https://scholar.harvard.edu/files/shleifer/files/stereotypes_june_6.pdf. Accessed: 18-August-2023, Keywords: bias, stereotype.
  • Boucher et al., (2022) Boucher, N., Shumailov, I., Anderson, R., & Papernot, N. (2022). Bad characters: Imperceptible NLP attacks. In Symposium on Security and Privacy (S&P) (pp.1987–2004).
  • Buyl et al., (2022) Buyl, M., Cociancig, C., Frattone, C., & Roekens, N. (2022). Tackling algorithmic disability discrimination in the hiring process: An ethical, legal and technical analysis. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.1071–1082).
  • Caliskan et al., (2017) Caliskan, A., Bryson, J.J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
  • Carlini & Terzis, (2022) Carlini, N. & Terzis, A. (2022). Poisoning and backdooring contrastive learning. In International Conference on Learning Representations (ICLR).
  • Carlsson et al., (2022) Carlsson, F., Eisen, P., Rekathati, F., & Sahlgren, M. (2022). Cross-lingual and multilingual clip. In Language Resources and Evaluation Conference (LREC) (pp.6848–6854).
  • Chen et al., (2023) Chen, W., Spiridonova, I., Yang, J., Gao, J., & Li, C. (2023). Llava-interactive: An all-in-one demo for image chat, segmentation, generation and editing. arXiv preprint, arXiv:2311.00571.
  • Conwell & Ullman, (2022) Conwell, C. & Ullman, T.D. (2022). Testing relational understanding in text-guided image generation. arXiv preprint, arXiv:2208.00005.
  • Davis & Suignard, (2014) Davis, M. & Suignard, M. (2014). Unicode technical report #36, unicode security considerations. https://unicode.org/reports/tr36/. Accessed: 18-August-2022.
  • Deng et al., (2009) Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR) (pp.248–255).
  • Friedrich et al., (2023) Friedrich, F., Schramowski, P., Brack, M., Struppek, L., Hintersdorf, D., Luccioni, S., & Kersting, K. (2023). Fair diffusion: Instructing text-to-image generation models on fairness. arXiv, arXiv: 2302.10893.
  • Gabrilovich & Gontmakher, (2002) Gabrilovich, E. & Gontmakher, A. (2002). The homograph attack. Communications of the ACM, 45(2), 128.
  • Gafni et al., (2022) Gafni, O., Polyak, A., Ashual, O., Sheynin, S., Parikh, D., & Taigman, Y. (2022). Make-a-scene: Scene-based text-to-image generation with human priors. In European Conference on Computer Vision (ECCV), volume 13675 (pp.89–106).
  • Gao et al., (2018) Gao, J., Lanchantin, J., Soffa, M.L., & Qi, Y. (2018). Black-box generation of adversarial text sequences to evade deep learning classifiers. In IEEE Security and Privacy Workshops (pp.50–56).
  • Goodfellow et al., (2015) Goodfellow, I.J., Shlens, J., & Szegedy, C. (2015). Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR).
  • Goyder & Frank, (2007) Goyder, J. & Frank, K. (2007). A scale of occupational prestige in Canada, based on NOC major groups. The Canadian Journal of Sociology, 32, 63–83.
  • Greenwald et al., (1998) Greenwald, A., McGhee, D., & Schwartz, J. (1998). Measuring individual differences in implicit cognition: The implicit association test. Journal of personality and social psychology, 74(6), 1464–1480.
  • Han et al., (2023) Han, S., Kim, H., & Lee, H.-S. (2023). A multilevel analysis of social capital and self-reported health: evidence from Seoul, South Korea. International Journal for Equity in Health, 11.
  • Heaven, (2023) Heaven, W.D. (2023). Welcome to the new surreal: How AI-generated video is changing film. MIT Technology Review. Accessed: 20-November-2023.
  • Heusel et al., (2017) Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S. (2017). Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Conference on Neural Information Processing Systems (NeurIPS), volume 30 (pp.6626–6637).
  • Hintersdorf et al., (2022) Hintersdorf, D., Struppek, L., Brack, M., Friedrich, F., Schramowski, P., & Kersting, K. (2022). Does clip know my face? arXiv preprint, arXiv:2209.07341.
  • Ho et al., (2020) Ho, J., Jain, A., & Abbeel, P. (2020). Denoising diffusion probabilistic models. In Conference on Neural Information Processing Systems (NeurIPS) (pp.6840–6851).
  • Hong et al., (2023) Hong, S., Lee, G., Jang, W., & Kim, S. (2023). Improving sample quality of diffusion models using self-attention guidance. In International Conference on Computer Vision and Pattern Recognition (CVPR) (pp.7462–7471).
  • Ilharco et al., (2021) Ilharco, G., Wortsman, M., Wightman, R., Gordon, C., Carlini, N., Taori, R., Dave, A., Shankar, V., Namkoong, H., Miller, J., Hajishirzi, H., Farhadi, A., & Schmidt, L. (2021). Openclip. https://github.com/mlfoundations/open_clip.
  • Kallus & Zhou, (2021) Kallus, N. & Zhou, A. (2021). Fairness, welfare, and equity in personalized pricing. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.296–314).
  • Kasy & Abebe, (2021) Kasy, M. & Abebe, R. (2021). Fairness, equality, and power in algorithmic decision-making. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.576–586).
  • Li et al., (2023) Li, J., Li, D., Savarese, S., & Hoi, S. C.H. (2023). BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. In International Conference on Machine Learning (ICML) (pp.19730–19742).
  • Liao, (2023) Liao, S. (2023). A.i. may help design your favorite video game character. New York Times. Accessed: 20-November-2023.
  • Lin et al., (2014) Lin, T.-Y., Maire, M., Belongie, S.J., Hays, J., Perona, P., Ramanan, D., Dollár, P., & Zitnick, C.L. (2014). Microsoft coco: Common objects in context. In European Conference on Computer Vision (ECCV) (pp.740–755).
  • Loshchilov & Hutter, (2019) Loshchilov, I. & Hutter, F. (2019). Decoupled weight decay regularization. In International Conference on Learning Representations (ICLR).
  • Luccioni et al., (2023) Luccioni, A.S., Akiki, C., Mitchell, M., & Jernite, Y. (2023). Stable bias: Analyzing societal representations in diffusion models. arXiv preprint, arXiv:2303.11408.
  • Mac, (2021) Mac, R. (2021). Facebook apologizes after a.i. puts ‘primates’ label on video of black men. https://www.nytimes.com/2021/09/03/technology/facebook-ai-race-primates.html. Accessed: 14-December-2022.
  • Marcus et al., (2022) Marcus, G., Davis, E., & Aaronson, S. (2022). A very preliminary analysis of DALL-E 2. arXiv preprint, arXiv:2204.13807.
  • Mehrabi et al., (2022) Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2022). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), 115:1–115:35.
  • Midjourney, (2022) Midjourney (2022). Midjourney. https://www.midjourney.com. Accessed: 10-October-2022.
  • Millière, (2022) Millière, R. (2022). Adversarial attacks on image generation with made-up words. arXiv preprint, arXiv:2208.04135.
  • Nichol & Dhariwal, (2021) Nichol, A. & Dhariwal, P. (2021). Improved denoising diffusion probabilistic models. In International Conference on Machine Learning (ICML) (pp.8162–8171).
  • Nichol et al., (2022) Nichol, A.Q., Dhariwal, P., Ramesh, A., Shyam, P., Mishkin, P., McGrew, B., Sutskever, I., & Chen, M. (2022). GLIDE: towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning (ICML) (pp.16784–16804).
  • Parmar et al., (2022) Parmar, G., Zhang, R., & Zhu, J.-Y. (2022). On aliased resizing and surprising subtleties in gan evaluation. In Conference on Computer Vision and Pattern Recognition (CVPR) (pp.11400–11410).
  • Pastaltzidis et al., (2022) Pastaltzidis, I., Dimitriou, N., Quezada-Tavarez, K., Aidinlis, S., Marquenie, T., Gurzawska, A., & Tzovaras, D. (2022). Data augmentation for fairness-aware machine learning: Preventing algorithmic bias in law enforcement systems. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.2302–2314).
  • Paullada et al., (2020) Paullada, A., Raji, I.D., Bender, E.M., Denton, E., & Hanna, A. (2020). Data and its (dis)contents: A survey of dataset development and use in machine learning research. Conference on Neural Information Processing Systems (NeurIPS), ML Retrospectives, Surveys & Meta-analyses (ML-RSA) Workshop.
  • Radford et al., (2021) Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., & Sutskever, I. (2021). Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML) (pp.8748–8763).
  • Radford et al., (2018) Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2018). Language models are unsupervised multitask learners. https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf. Accessed: 28-August-2022.
  • Ramesh et al., (2022) Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with CLIP latents. arXiv preprint, arXiv:2204.06125.
  • Ramesh et al., (2021) Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., & Sutskever, I. (2021). Zero-shot text-to-image generation. In International Conference on Machine Learning (ICML) (pp.8821–8831).
  • Recht et al., (2019) Recht, B., Roelofs, R., Schmidt, L., & Shankar, V. (2019). Do imagenet classifiers generalize to imagenet? In International Conference on Machine Learning (ICML) (pp.5389–5400).
  • Rombach et al., (2022) Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Conference on Computer Vision and Pattern Recognition (CVPR) (pp.10684–10695).
  • Saharia et al., (2022) Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E.L., Ghasemipour, S. K.S., Lopes, R.G., Ayan, B.K., Salimans, T., Ho, J., Fleet, D.J., & Norouzi, M. (2022). Photorealistic text-to-image diffusion models with deep language understanding. In Conference on Neural Information Processing Systems (NeurIPS).
  • Schramowski et al., (2023) Schramowski, P., Brack, M., Deiseroth, B., & Kersting, K. (2023). Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. In Conference on Computer Vision and Pattern Recognition (CVPR).
  • Schuhmann et al., (2022) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., Schramowski, P., Kundurthy, S., Crowson, K., Schmidt, L., Kaczmarczyk, R., & Jitsev, J. (2022). LAION-5B: an open large-scale dataset for training next generation image-text models. In Conference on Neural Information Processing Systems (NeurIPS).
  • Schuhmann et al., (2021) Schuhmann, C., Vencu, R., Beaumont, R., Kaczmarczyk, R., Mullis, C., Katta, A., Coombes, T., Jitsev, J., & Komatsuzaki, A. (2021). LAION-400M: open dataset of clip-filtered 400 million image-text pairs. arXiv preprint, arXiv:2111.02114.
  • Shokri et al., (2017) Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017). Membership inference attacks against machine learning models. In Symposium on Security and Privacy (S&P) (pp.3–18).
  • Simpson et al., (2020) Simpson, G., Moore, T., & Clayton, R. (2020). Ten years of attacks on companies using visual impersonation of domain names. In APWG Symposium on Electronic Crime Research (eCrime) (pp.1–12).
  • Song & Ermon, (2020) Song, Y. & Ermon, S. (2020). Improved techniques for training score-based generative models. In Conference on Neural Information Processing Systems (NeurIPS) (pp.12438–12448).
  • (60) Struppek, L., Hintersdorf, D., De Almeida Correia, A., Adler, A., & Kersting, K. (2022a). Plug & play attacks: Towards robust and flexible model inversion attacks. In International Conference on Machine Learning (ICML) (pp.20522–20545).
  • Struppek et al., (2023) Struppek, L., Hintersdorf, D., & Kersting, K. (2023). Rickrolling the artist: Injecting backdoors into text encoders for text-to-image synthesis. In International Conference on Computer Vision (ICCV).
  • (62) Struppek, L., Hintersdorf, D., Neider, D., & Kersting, K. (2022b). Learning to break deep perceptual hashing: The use case neuralhash. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.58–69).
  • Szegedy et al., (2014) Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., & Fergus, R. (2014). Intriguing properties of neural networks. In International Conference on Learning Representations (ICLR).
  • Unicode Consortium, (2022) Unicode Consortium (2022). The unicode standard 15.0.0. https://unicode.org/versions/Unicode15.0.0/. Accessed: 16-September-2022.
  • van der Maaten & Hinton, (2008) van der Maaten, L. & Hinton, G.E. (2008). Visualizing data using t-sne. Journal of Machine Learning Research, 9, 2579–2605.
  • Wang et al., (2021) Wang, J., Liu, Y., & Wang, X.E. (2021). Are gender-neutral queries really gender-neutral? mitigating gender bias in image search. In Conference on Empirical Methods in Natural Language Processing (EMNLP) (pp.1995–2008).
  • Wolfe & Caliskan, (2022) Wolfe, R. & Caliskan, A. (2022). Markedness in visual semantic AI. In Conference on Fairness, Accountability, and Transparency (FAccT) (pp.1269–1279).
  • Ye et al., (2023) Ye, F., Liu, G., Wu, X., & Wu, L. (2023). Altdiffusion: A multilingual text-to-image diffusion model. arXiv preprint, arXiv:2308.09991.
  • Yu et al., (2022) Yu, J., Xu, Y., Koh, J.Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B.K., Hutchinson, B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, J., & Wu, Y. (2022). Scaling autoregressive models for content-rich text-to-image generation. Transactions on Machine Learning Research (TMLR), 2022.
  • Zhang et al., (2022) Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J.C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J., & Perrault, C.R. (2022). The AI index 2022 annual report. arXiv preprint, arXiv:2205.03468.

A Unicode Scripts

Unicode supports a wide range of different scripts. We refer to https://www.unicode.org/standard/supported.html for an overview of all supported scripts. The current Unicode Standard 15.0.0 (Unicode Consortium, 2022) supports 149,186 characters from 161 scripts. Each script contains a set of characters and written signs of one or more writing systems. We now provide a short and non-exhaustive overview of the Unicode scripts used throughout this work.
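For reference, the code-point ranges listed in this appendix can be collected into a small lookup table. This is a sketch: the block names and boundaries follow the descriptions below, not the full Unicode script database, and the function name is our own:

```python
# Code-point ranges of the Unicode blocks described in this appendix.
SCRIPT_BLOCKS = {
    "Basic Latin": (0x0000, 0x007F),
    "Latin-1 Supplement": (0x0080, 0x00FF),
    "Greek and Coptic": (0x0370, 0x03FF),
    "Cyrillic": (0x0400, 0x04FF),
    "Armenian": (0x0530, 0x058F),
    "Arabic": (0x0600, 0x06FF),
    "N'Ko": (0x07C0, 0x07FF),
    "Devanagari": (0x0900, 0x097F),
    "Bengali": (0x0980, 0x09FF),
    "Oriya": (0x0B00, 0x0B7F),
    "Hangul Jamo": (0x1100, 0x11FF),
    "Cherokee": (0x13A0, 0x13FF),
    "Unified Canadian Aboriginal Syllabics": (0x1400, 0x167F),
    "Latin Extended Additional": (0x1E00, 0x1EFF),
    "Lisu": (0xA4D0, 0xA4FF),
    "Osmanya": (0x10480, 0x104AF),
}

def block_of(ch):
    """Return the name of the block containing `ch`, or 'other'."""
    cp = ord(ch)
    for name, (lo, hi) in SCRIPT_BLOCKS.items():
        if lo <= cp <= hi:
            return name
    return "other"

print(block_of("o"))       # Basic Latin
print(block_of("\u0412"))  # Cyrillic
print(block_of("\u1ecd"))  # Latin Extended Additional
```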

Basic Latin: Ranges from U+0000 to U+007F and contains 128 standard letters and digits used by Western languages, such as English, as well as basic punctuation and symbols. This paper, for example, is mostly encoded in characters from this script. Together with 18 additional blocks comprising supplements and extensions, the Latin script currently contains 1,475 characters.

Latin Supplements and Extensions: This group comprises multiple additional character variations of the basic Latin script. The Latin-1 Supplement ranges from U+0080 to U+00FF and offers characters for the French, German, and Scandinavian alphabets, amongst others. The Latin Extended-A (U+0100 to U+017F) and Extended-B (U+0180 to U+024F) scripts contain further Latin character variations for, e.g., the Afrikaans, Hungarian, Turkish, and Romanian writing systems. The Latin Extended Additional script (U+1E00 to U+1EFF) primarily contains characters used in the Vietnamese alphabet. Some letters are also shared with other languages, e.g., ọ (U+1ECD) is not only used in Vietnamese but also in the International African alphabet. Further examples of the extended Latin script from the paper are the characters á (U+00E1) and Å (U+00C5).

Arabic Script: Ranges from U+0600 to U+06FF and contains 256 characters of the Arabic script. The script is used for the Arabic, Kurdish, and Persian languages, amongst others. In the paper, we used the characters ه (U+0647) and ا (U+0627).

Armenian Script: Ranges from U+0530 to U+058F and contains 91 characters for the Armenian language, spoken in Armenia. In the paper, we used the character օ (U+0585).

Bengali Script: Ranges from U+0980 to U+09FF and contains 96 characters for the Bengali, Santali, and other Indo-Aryan languages, mainly spoken in South Asia. Bengali is spoken in Bengal, a geopolitical and cultural region in South Asia, covering Bangladesh and West India. In the paper, we used the character ০ (U+09E6).

Unified Canadian Aboriginal Syllabics: Ranges from U+1400 to U+167F and contains 640 syllabic characters used in various Indigenous Canadian languages. These comprise the Algonquian, Inuit, and Athabaskan languages. In the paper, we used the character U+15C5.

Cherokee Script: Ranges from U+13A0 to U+13FF and contains 92 syllabic characters used for the Cherokee language. Cherokee is an Iroquoian language spoken by the Cherokee tribes, which are indigenous people in the Southeastern Woodlands of the United States. In the paper, we used the character Ꭺ (U+13AA).

Cyrillic Script: Ranges from U+0400 to U+04FF and contains 256 characters from the Cyrillic writing system, also known as Slavonic or Slavic script, and offers various national variations of the standard Cyrillic script. It is used in different countries and languages, such as Russian, Bulgarian, Serbian, or Ukrainian. Throughout this work, we only used letters from the standard Russian alphabet. Examples from the paper are the characters В (U+0412) and е (U+0435).

Devanagari Script: Ranges from U+0900 to U+097F and contains 128 characters for Hindi, which is spoken in India, and other Indo-Aryan languages. In the paper, we used the character । (U+0964).

Greek and Coptic Script: Ranges from U+0370 to U+03FF and contains 135 standard letters and letter variants, digits and other symbols of the Greek language. It also contains glyphs of the Coptic language, which belongs to the family of the Egyptian language. In this work, we only used standard Greek letters used in the modern Greek language. Examples from the paper are the characters Image 93: Refer to caption (U+0391) and Image 94: [Uncaptioned image] (U+03BF).

Hangul Jamo Script: Ranges from U+1100 to U+11FF and contains 256 positional forms of the Hangul consonant and vowel clusters. It is the official writing system for the Korean language, spoken in South and North Korea. In the paper, we used the character Image 95: Refer to caption (U+3147).

Lisu Script: Ranges from U+A4D0 to U+A4FF and contains 48 characters used to write the Lisu language. Lisu is spoken in Southwestern China, Myanmar, and Thailand, as well as a small part of India. In the paper, we used the character Image 96: Refer to caption (U+A4F2) and Image 97: Refer to caption (U+A4EE).

N’Ko script: Ranges from U+07C0 to U+07FF and contains 62 characters. It is used to write the Mande languages, spoken in West African countries, for example, Burkina Faso, Mali, Senegal, the Gambia, Guinea, Guinea-Bissau, Sierra Leone, Liberia, and Ivory Coast. In the paper, we used the character Image 98: [Uncaptioned image] (U+07CB).

Oriya Script: Ranges from U+0B00 to U+0B7F and contains 91 characters. It is mainly used to write the Orya (Odia), Khondi, and Santali languages, some of the many official languages of India. The languages are primarily spoken in the Indian state of Odisha and other states in eastern India. In the paper, we used the character Image 99: [Uncaptioned image] (U+0B66).

Osmanaya Script: Ranges from U+10480 to U+104AF and contains 40 characters. It is used to write the Somali language and is an official language in Somalia, Somaliland, and Ethiopia, all localized in the Horn of Africa (East Africa). In the paper, we used the character Image 100: [Uncaptioned image] (U+10486).

Tibetan Script: Ranges from U+0F00 to U+0FFF and contains 211 characters. The characters are primarily used to write Tibetan and Dzongkha, which is spoken in Bhutan. In the paper, we used the character Image 101: [Uncaptioned image] (U+0F0D).

Emojis: Emojis in Unicode are not contained in a single script or block but spread across 24 blocks. Unicode 14.0 contained 1,404 emoji characters. For example, the Emoticons block ranging from U+1F600 to UF1F64F contains 80 emojis of face representations. Examples from the paper are Image 102: Refer to caption (U+1F603) and Image 103: Refer to caption (U+1F973).
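The code points listed above can be turned into a concrete prompt manipulation. The following is a minimal sketch in plain Python (the mapping and function names are illustrative, not the paper's actual tooling) of how a single Latin character in a prompt is swapped for a homoglyph:

```python
# Illustrative mapping built from code points listed in this appendix.
HOMOGLYPHS = {
    "o": "\u03bf",  # Greek small letter omicron (U+03BF)
    "e": "\u0435",  # Cyrillic small letter ie (U+0435)
    "A": "\u0391",  # Greek capital letter Alpha (U+0391)
}

def inject_homoglyph(prompt: str, latin: str, occurrence: int = 1) -> str:
    """Replace the n-th occurrence of `latin` with its homoglyph."""
    glyph = HOMOGLYPHS[latin]
    parts = prompt.split(latin)
    if len(parts) <= occurrence:  # not enough occurrences: leave unchanged
        return prompt
    head = latin.join(parts[:occurrence])
    tail = latin.join(parts[occurrence:])
    return head + glyph + tail

# Replace the third "o" (the one in "of"): the prompt looks unchanged
# to a human reader, but one code point differs.
biased = inject_homoglyph("A photo of an actress", "o", occurrence=3)
```

The manipulated prompt has the same length and visual appearance as the original, which is what makes such attacks hard to notice.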

B Experimental Details

Hardware and Software. Most of our experiments were performed on NVIDIA DGX machines running NVIDIA DGX Server Version 5.1.0 and Ubuntu 20.04.5 LTS. The machines have 1.6 TB of RAM and contain Tesla V100-SXM3-32GB-H GPUs and Intel Xeon Platinum 8174 CPUs. We further relied on CUDA 11.6, Python 3.8.13, and PyTorch 1.12.0 with Torchvision 0.13.0 for our experiments.

DALL-E 2. Our DALL-E 2 experiments were performed with the web API available at https://labs.openai.com/. Since OpenAI may further update either the DALL-E 2 model or the API over time, we note that all results depicted were generated between August 18 and December 15, 2022.

Stable Diffusion. We further used Stable Diffusion v1.5, available at https://huggingface.co/runwayml/stable-diffusion-v1-5, to generate the corresponding samples. We used a K-LMS scheduler with the parameters β_start = 0.00085 and β_end = 0.012 and a scaled-linear schedule. The generated images have a size of 512×512 and were generated with 100 inference steps and a guidance scale of 7.5. We set the seed to 1 for the Stable Diffusion experiments and then generated four images for each prompt.
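Assuming "linear scaled" refers to the scaled-linear schedule typically paired with K-LMS samplers (the square roots of the β values are spaced linearly between the square roots of β_start and β_end), the noise schedule can be reconstructed as follows. This is a sketch, not the paper's code:

```python
def scaled_linear_betas(beta_start=0.00085, beta_end=0.012, steps=1000):
    """Scaled-linear beta schedule: linspace over sqrt(beta), then square."""
    lo, hi = beta_start ** 0.5, beta_end ** 0.5
    return [(lo + (hi - lo) * i / (steps - 1)) ** 2 for i in range(steps)]

betas = scaled_linear_betas()  # monotonically increasing noise levels
```

The squaring concentrates small β values early in the schedule, which adds noise more gently in the first diffusion steps than a plain linear schedule would.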

Relative Bias. To compute the Relative Bias on Stable Diffusion models, we generated 100 images for each of the ten prompts in the prompt datasets stated in Table 3, once with and once without a homoglyph inserted. We used the same seed for each set of images to avoid influences due to randomness. To compute the image and text embeddings, we used the ViT-H/14 OpenCLIP model, which promises the best zero-shot performance and was trained on a different dataset than OpenAI’s CLIP models used in Stable Diffusion and DALL-E 2. For DALL-E 2, we generated only four images per prompt due to the expensive queries.
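The exact Relative Bias definition is given in the main paper. As a rough sketch, under the assumption that the metric compares mean CLIP similarities between image embeddings and a culture-describing text embedding, with and without the homoglyph inserted, it could look like this (`relative_bias` and its arguments are hypothetical names; a toy cosine-similarity implementation stands in for CLIP):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def relative_bias(clean_embs, glyph_embs, culture_text_emb):
    """Mean gain in similarity to a culture-describing text embedding
    when the homoglyph is present (assumed simplification)."""
    s_clean = sum(cosine(e, culture_text_emb) for e in clean_embs) / len(clean_embs)
    s_glyph = sum(cosine(e, culture_text_emb) for e in glyph_embs) / len(glyph_embs)
    return s_glyph - s_clean
```

A positive value indicates that inserting the homoglyph shifts the generated images toward the culture described by the reference text.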

Homoglyph Unlearning. To perform the homoglyph unlearning procedure, we optimized the pretrained CLIP text encoder for 500 steps on samples from the LAION-Aesthetics v2 6.5+ dataset (Schuhmann et al., 2022). This experiment was conducted on a machine running NVIDIA DGX Server Version 5.2.0 and Ubuntu 20.04.4 LTS. The machine has 2 TB of RAM and contains 8 NVIDIA A100-SXM4-80GB GPUs and 256 AMD EPYC 7742 64-core CPUs. During each step, we sampled a set B of 128 prompts to compute the first term of the loss function on Latin-only prompts. To increase the encoder’s robustness to the homoglyphs, we sampled an additional set B_h of 128 prompts for each of the five homoglyphs h ∈ H stated in Fig. 6 and replaced a single Latin character in each prompt with its homoglyph counterpart h. We then optimized the encoder with the AdamW optimizer (Loshchilov & Hutter, 2019) and a learning rate of 10^-4, which was multiplied by a factor of 0.1 after 400 steps. We further kept β = (0.9, 0.999) and ε = 10^-8 at their default values.
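The two loss terms described above can be sketched as a distillation-style objective. The following is a hypothetical reconstruction, not the authors' code: a frozen copy of the original encoder acts as teacher; the first term preserves behavior on Latin-only prompts (set B), and the second pulls embeddings of homoglyph prompts (set B_h) toward the teacher embeddings of the corresponding clean prompts:

```python
def mse(a, b):
    """Mean squared error between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def unlearning_loss(student_clean, teacher_clean, student_glyph, teacher_clean_ref):
    """Term 1: student stays close to teacher on clean prompts.
    Term 2: student maps homoglyph prompts to the teacher's embedding
    of the corresponding clean (homoglyph-free) prompt."""
    return mse(student_clean, teacher_clean) + mse(student_glyph, teacher_clean_ref)
```

Minimizing the second term makes the encoder treat a homoglyph prompt as if it contained only Latin characters, which is exactly the robustness goal.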

FID. We measured the FID score using the clean-FID approach (Parmar et al., 2022). We sampled 10,000 prompts from the MS-COCO 2014 (Lin et al., 2014) validation split and generated images with Stable Diffusion using the parameters stated at the beginning of this section. As real samples, we used all 40,504 images from the MS-COCO validation split.
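FID is the Fréchet distance between two Gaussians fitted to the features of real and generated images. A minimal sketch of the underlying formula, simplified here to diagonal covariances so no matrix square root is needed (the full FID, including the clean-FID variant, uses full covariance matrices):

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    d^2 = ||mu1 - mu2||^2 + sum(var1 + var2 - 2 * sqrt(var1 * var2))."""
    mean_term = sum((m1 - m2) ** 2 for m1, m2 in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term
```

Identical feature distributions yield a distance of zero; both a shift in the feature means and a mismatch in their spread increase the score.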

Zero-Shot ImageNet Accuracy. To quantify to what extent the homoglyph unlearning approach hurts the performance of the encoder, we computed the zero-shot ImageNet prediction accuracy using the updated text encoder in combination with CLIP’s clean ViT-L/14 image encoder. We followed the evaluation procedure described in Radford et al. (2021) using the Matched Frequency test images from the ImageNet-V2 dataset (Recht et al., 2019). Our evaluation code is based on https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb. The standard CLIP ViT-L/14 model without homoglyph unlearning achieves zero-shot accuracies of Acc@1 = 69.82% (top-1) and Acc@5 = 90.98% (top-5).
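Zero-shot classification with CLIP reduces to a nearest-neighbor search in embedding space: the predicted class is the one whose prompt embedding (e.g., "a photo of a {class}") is most similar to the image embedding. A minimal sketch with toy vectors in place of real CLIP embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(y * y for y in b)))

def zero_shot_predict(image_emb, class_text_embs):
    """Return the index of the class prompt most similar to the image."""
    sims = [cosine(image_emb, t) for t in class_text_embs]
    return max(range(len(sims)), key=sims.__getitem__)
```

Acc@1 counts how often this argmax matches the ground-truth label; Acc@5 counts how often the label is among the five most similar class prompts.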

B.1 Relative Bias Dataset Prompts

Table 3 states the individual prompts of the three datasets created to measure the Relative Bias in Section 4.2 for different homoglyphs in the domains People, Buildings, and Misc.

Table 3: Datasets used to measure the Relative Bias of homoglyphs for three domains. The placeholder <> marks the positions where the homoglyphs are injected. For the Latin prompts, the placeholder <> was simply removed.

B.2 VQA Score

Table 4 states the questions used to compute the VQA Score with the BLIP-2 model.

Table 4: Datasets used to compute the VQA Score of homoglyphs for three domains. The placeholder <> marks the positions where the respective culture, e.g., African, is stated.

B.3 WEAT Test

Table 5 states the attribute and target sets we used to compute the WEAT test in Section 4.2.

Table 5: Attribute sets A, B of characters from different scripts and target sets X, Y of target words used to compute the WEAT test.
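For reference, the WEAT effect size over such attribute sets A, B and target sets X, Y is d = (mean_x s(x) − mean_y s(y)) / std of all s values, with s(w) = mean_a cos(w, a) − mean_b cos(w, b). A stdlib-only sketch operating on plain embedding vectors:

```python
import math
import statistics

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(y * y for y in b)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size d over target sets X, Y and attribute sets A, B."""
    def s(w):
        return (sum(cosine(w, a) for a in A) / len(A)
                - sum(cosine(w, b) for b in B) / len(B))
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    mean = lambda v: sum(v) / len(v)
    return (mean(sx) - mean(sy)) / statistics.stdev(sx + sy)
```

A positive effect size indicates that the X targets are more strongly associated with attribute set A than the Y targets are, and vice versa for negative values.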

C Additional Experiments

C.1 Relative Bias


Figure 11: Relative bias measured for five homoglyphs from different scripts on Stable Diffusion v1.4. The dark bars state the results for the standard text encoder. The light bars indicate the results after performing our homoglyph unlearning procedure on a single encoder for the five homoglyphs.


Figure 12: Relative bias measured for five homoglyphs from different scripts on Stable Diffusion v1.5. The dark bars state the results for the standard text encoder. The light bars indicate the results after performing our homoglyph unlearning procedure on a single encoder for the five homoglyphs.


Figure 13: We recomputed the relative bias on Stable Diffusion v1.5 with the African homoglyph (U+1ECD) but replaced the adjective African with the adjectives for the ten African countries with the largest population, i.e., Nigerian, Ethiopian, etc. For most adjectives, the results confirm the relative bias values measured for the adjective African, so the choice of adjective does not necessarily change the depicted patterns. However, for some country-related adjectives, namely Egyptian, South African, and Algerian, the relative bias is rather low. For Egypt, an intercontinental country, the low score is not surprising, since its stereotypical culture differs considerably from that of other African countries. For South African, we hypothesize that the additional South distorts the computed text embeddings and largely erases the influence of African in the prompt. For Algerian, we suspect that the CLIP model we applied has not learned to connect the word with stereotypically African content. One therefore has to make sure that the CLIP model recognizes the connection between images depicting a certain culture and the descriptive adjective. This could be tested beforehand by collecting images from the public internet and computing their CLIP similarity with the adjective.


Figure 14: Relative bias measured for five homoglyphs from different scripts on Stable Diffusion v2.1. The dark bars state the results for the standard text encoder. Compared to Stable Diffusion v1.x, the biases are smaller, and for the Cyrillic and African scripts, they are almost completely removed. However, for the Korean homoglyph, the bias seems to be stronger.


Figure 15: Relative bias measured for five homoglyphs from different scripts on DALL-E 2. The bars state the results for the standard text encoder. Since DALL-E 2 does not support seeding, the generated images and, consequently, the measured Relative Bias exhibit more variance compared to Stable Diffusion. However, the biasing behavior is still clearly present.


Figure 16: Relative bias measured for five homoglyphs from different scripts on AltDiffusion-m18. The bars state the results for the standard text encoder. Compared to the Stable Diffusion models, AltDiffusion reduces the biases for most investigated homoglyphs. For the Korean character, a notable bias remains, but it is considerably lower than in the Stable Diffusion models. We conclude that training on multilingual data indeed reduces the model biases related to individual character scripts.

C.2 VQA Score


Figure 17: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v1.4. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors).


Figure 18: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v1.4 after the homoglyph unlearning procedure was performed. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors). After homoglyph unlearning, the scores for images generated with and without homoglyphs are close, indicating the success of the approach.


Figure 19: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v1.5. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors).


Figure 20: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v1.5 after the homoglyph unlearning procedure was performed. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors). After homoglyph unlearning, the scores for images generated with and without homoglyphs are close, indicating the success of the approach.


Figure 21: VQA Score measured for five homoglyphs from different scripts on Stable Diffusion v2.1. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors).


Figure 22: VQA Score measured for five homoglyphs from different scripts on AltDiffusion-m18. The score is stated for images generated with Latin-only prompts (dark colors) and prompts that contain a single homoglyph (light colors). The biasing effects of homoglyphs are overall notably reduced compared to the standard Stable Diffusion models. However, some influences, particularly for Greek and Korean homoglyphs, are still present.

D Additional DALL-E 2 Results

Here, we visualize additional results for the impact of homoglyphs on text-guided image generation with DALL-E 2.

D.1 A City in Bright Sunshine

Panels: (a) Standard Latin characters; (b) Greek (U+0391); (c) Scandinavian (U+00C5); (d) Cyrillic (U+0410); (e) Canadian (U+15C5); (f) Cherokee (U+13AA); (g) Latin (U+00C0); (h) Lisu (U+A4EE); (i) Mathematical (U+1D5A0)

Figure 23: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with the following prompt: "A city in bright sunshine". Each query differs only by the first character A.

D.2 A Photo of an Actress

Panels: (a) Standard Latin characters; (b) Oriya (U+0B66); (c) Osmanya (U+10486); (d) Vietnamese (U+1ECD); (e) N’Ko (U+07CB); (f) Hangul (Korean) (U+3147); (g) Arabic (U+0647); (h) Armenian (U+0585); (i) Bengali (U+09E6)

Figure 24: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with the following prompt: "A photo of an actress". Each query differs only by the o in the word of.

D.3 Delicious Food on a Table

Panels: (a) Standard Latin characters; (b) Latin → Arabic (U+0647); (c) Latin → Cyrillic (U+0435); (d) Latin → Devanagari (U+0964); (e) Latin → Greek (U+03BF); (f) Latin → Korean (U+3147); (g) Latin → Lisu (U+A4F2); (h) Latin → Tibetan (U+0F0D); (i) Latin → Vietnamese (U+1ECD)

Figure 25: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with the following prompt: "Delicious food on a table". Each query differs only by a single character in the word Delicious replaced by the stated homoglyphs.

D.4 The Leader of a Country

Panels: (a) Standard Latin characters; (b) Scandinavian (U+00E5); (c) Cyrillic (U+0430); (d) Greek (U+03B1); (e) Latin Ext. (U+00E1); (f) Latin Ext. (U+00E0); (g) Latin Ext. (U+00E2); (h) Latin Ext. (U+00E3); (i) Latin Ext. (U+FF41)

Figure 26: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with the following prompt: "The leader of a country". Each query differs only in the article a, which is replaced by the stated homoglyphs.

D.5 A Photo of a Flag

Panels: (a) Standard Latin characters; (b) Greek (U+0391); (c) Scandinavian (U+00C5); (d) Cherokee (U+13AA); (e) Cyrillic (U+0410)

Figure 27: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried DALL-E 2 with the following prompt: "A photo of a flag". Each query differs only in the article A, which is replaced by the stated homoglyphs. While the model has a learned bias toward generating USA flags, inducing a Greek bias leads to the generation of Greek flags. Surprisingly, inducing a Cyrillic bias enables the model to generate a wide range of flags from different European countries.

D.6 A Photo of a Person

Panels: (a) Smiling face (U+1F603); (b) Swearing face (U+1F92C); (c) Crying face (U+1F62D); (d) Love face (U+1F970); (e) Screaming face (U+1F631); (f) Celebrating face (U+1F973); (g) Nerd face (U+1F913); (h) Monkey face (U+1F435); (i) Bactrian camel (U+1F42B)

Figure 28: Non-cherry-picked examples of induced biases with a single emoji added. We queried DALL-E 2 with the following prompt: "A photo of a X person". Each query differs by the emoji inserted at the X position.

E Additional Stable Diffusion Results

Here, we visualize additional results for the impact of homoglyphs on text-guided image generation with Stable Diffusion 2.

E.1 A Photo of an Actress

Panels: (a) Standard Latin characters; (b) Oriya (Indian) (U+0B66); (c) Osmanya (U+10486); (d) African (U+1ECD); (e) N’Ko (West African) (U+07CB); (f) Hangul (Korean) (U+3147); (g) Arabic (U+0647); (h) Armenian (U+0585); (i) Bengali (U+09E6)

Figure 29: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried Stable Diffusion v1.5 with the following prompt: "A photo of an actress". Each query differs only by the o in the word of.

E.2 Delicious Food on a Table

Panels: (a) Standard Latin characters; (b) Latin → African (U+1ECD); (c) Latin → Arabic (U+0647); (d) Latin → Cyrillic (U+0435); (e) Latin → Devanagari (U+0964); (f) Latin → Greek (U+03BF); (g) Latin → Korean (U+3147); (h) Latin → Lisu (U+A4F2); (i) Latin → Tibetan (U+0F0D)

Figure 30: Non-cherry-picked examples of induced biases with a single homoglyph replacement. We queried Stable Diffusion v1.5 with the following prompt: "Delicious food on a table". Each query differs only by a single character in the word Delicious replaced by the stated homoglyphs.

E.3 Homoglyph Unlearning Results


Figure 31: Comparison of image bias and quality of the standard text encoder before and after homoglyph unlearning. We queried each model with three different prompts and five different homoglyphs inserted at the position marked by <>. The top rows state the images for the standard text encoder, and the bottom rows depict the results after the homoglyph unlearning procedure.

E.4 Inducing Biases in the Embedding Space

Panels: (a) No bias induced; (b) Oriya (U+0B66); (c) Osmanya (U+10486); (d) African (U+1ECD); (e) N’Ko (U+07CB); (f) Hangul (Korean) (U+3147); (g) Arabic (U+0647); (h) Armenian (U+0585); (i) Bengali (U+09E6)

Figure 32: Non-cherry-picked examples of biases induced into the embedding space. We queried Stable Diffusion with the following prompt: "A man sitting at a table". We computed the difference between the text embeddings of the stated non-Latin homoglyphs and the Latin character o (U+006F) and added this difference to the prompt embedding to induce cultural biases. See Fig. 7(b) for an overview of the approach.
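The embedding-space manipulation described in the caption is plain vector arithmetic; a minimal sketch (the function name and the `strength` scaling knob are illustrative additions, not part of the paper):

```python
def induce_bias(prompt_emb, homoglyph_emb, latin_emb, strength=1.0):
    """Add the embedding difference (homoglyph minus Latin character)
    to the prompt embedding; the result conditions the image generation."""
    return [p + strength * (h - l)
            for p, h, l in zip(prompt_emb, homoglyph_emb, latin_emb)]
```

If the homoglyph and the Latin character have identical embeddings, the prompt embedding stays unchanged; otherwise, the difference vector carries the cultural bias into the generation.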

E.5 Varying the Number of Injected Homoglyphs for Complex Prompts

Panels: (a) Standard Latin characters; (b) 1x African (U+1ECD); (c) 1x Hangul (Korean) (U+3147); (d) 2x African (U+1ECD); (e) 2x Hangul (Korean) (U+3147); (f) 3x African (U+1ECD); (g) 3x Hangul (Korean) (U+3147)

Figure 33: In complex prompts, the effects of homoglyphs might diminish or even vanish. However, inserting multiple homoglyphs amplifies their biasing effects. Explicitly stated attributes, e.g., blond hair, might also interfere with the triggered biases. The images were generated with the prompts "A photo close-up of a beautiful black haired woman, fashion editorial, studio photography, elegant, 8k, hyperdetailed" and "A photo close-up of a beautiful blonde haired man, fashion editorial, studio photography, elegant, 8k, hyperdetailed". We then replaced one, two, or three of the o characters in "of", "fashion", and "studio" with the specified homoglyphs, starting with the first.
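The injection step itself is a plain character substitution. A minimal sketch, using the code points stated for Figure 33, might look like the following; note that it simply replaces the first n occurrences of the Latin character from the left, a simplification of the paper's setup, which targeted the specific o characters in "of", "fashion", and "studio" (the function name `inject_homoglyphs` is hypothetical):

```python
# Homoglyphs for Latin "o" used in Figure 33.
AFRICAN_O = "\u1ecd"      # Latin small letter o with dot below (U+1ECD)
HANGUL_IEUNG = "\u3147"   # Hangul letter ieung (U+3147)

def inject_homoglyphs(prompt: str, homoglyph: str, n: int, latin: str = "o") -> str:
    """Replace the first n occurrences of `latin` in the prompt with `homoglyph`.

    Simplified sketch: the paper replaced specific character positions,
    whereas this takes the first n matches from the left.
    """
    return prompt.replace(latin, homoglyph, n)

prompt = ("A photo close-up of a beautiful black haired woman, "
          "fashion editorial, studio photography, elegant, 8k, hyperdetailed")
attacked = inject_homoglyphs(prompt, AFRICAN_O, n=2)
```

Because the substitution is visually near-invisible in rendered text, the manipulated prompt looks identical to the original while producing a different text embedding.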

E.6 MS-COCO Examples

[Figure 34 panels: 18 randomly selected generated samples per encoder]
(a) Standard Encoder
(b) Homoglyph Unlearning

Figure 34: Randomly selected samples generated from prompts of the MS-COCO validation split, which we used to compute the FID score. Images were generated with the text encoder before and after the homoglyph unlearning procedure was performed. The results demonstrate that unlearning does not hurt the model’s utility and induces only small variations in the images.
