{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:02:39.843462Z"
},
"title": "Can Existing Methods Debias Languages Other than English? First Attempt to Analyze and Mitigate Japanese Word Embeddings",
"authors": [
{
"first": "Masashi",
"middle": [],
"last": "Takeshita",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hokkaido University",
"location": {
"settlement": "Sapporo",
"country": "Japan"
}
},
"email": "takeshita.masashi@ist.hokudai.ac.jp"
},
{
"first": "Yuki",
"middle": [],
"last": "Katsumata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hokkaido University",
"location": {
"settlement": "Sapporo",
"country": "Japan"
}
},
"email": "katsumata@ist.hokudai.ac.jp"
},
{
"first": "Rafal",
"middle": [],
"last": "Rzepka",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hokkaido University",
"location": {
"settlement": "Sapporo",
"country": "Japan"
}
},
"email": "rzepka@ist.hokudai.ac.jp"
},
{
"first": "Kenji",
"middle": [],
"last": "Araki",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Hokkaido University",
"location": {
"settlement": "Sapporo",
"country": "Japan"
}
},
"email": "araki@ist.hokudai.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It is known that word embeddings exhibit biases inherited from the corpus, and that those biases reflect social stereotypes. Recently, many studies have been conducted to analyze and mitigate biases in word embeddings. Unsupervised Bias Enumeration (UBE) (Swinger et al., 2019) is one approach to analyzing biases in English, and Hard Debias (Bolukbasi et al., 2016) is the common technique for mitigating gender bias. These methods have focused on English or, to a smaller extent, on other Indo-European languages, and it is not clear whether they can be generalized to other languages. In this paper, we apply these analysis and mitigation methods, UBE and Hard Debias, to Japanese word embeddings and examine whether they can be used for Japanese. We experimentally show that UBE and Hard Debias cannot be sufficiently adapted to Japanese embeddings.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "It is known that word embeddings exhibit biases inherited from the corpus, and that those biases reflect social stereotypes. Recently, many studies have been conducted to analyze and mitigate biases in word embeddings. Unsupervised Bias Enumeration (UBE) (Swinger et al., 2019) is one approach to analyzing biases in English, and Hard Debias (Bolukbasi et al., 2016) is the common technique for mitigating gender bias. These methods have focused on English or, to a smaller extent, on other Indo-European languages, and it is not clear whether they can be generalized to other languages. In this paper, we apply these analysis and mitigation methods, UBE and Hard Debias, to Japanese word embeddings and examine whether they can be used for Japanese. We experimentally show that UBE and Hard Debias cannot be sufficiently adapted to Japanese embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Word embeddings are widely used in natural language processing tasks, and they have been reported to inherit social stereotypes, e.g. gender and racial stereotypes (Bolukbasi et al., 2016; Caliskan et al., 2017). For example, \"programmer\" and \"homemaker\" should be gender neutral by definition, but the analogy \"man is to programmer as woman is to homemaker\" holds, as observed by Bolukbasi et al. (2016). Such biases cause differences in F1 scores between pro- and anti-stereotypical conditions. For example, in the coreference resolution task it is difficult to correctly link \"physician:she\" and \"secretary:he\" for systems that use gender-biased word embeddings, because \"physician:he\" and \"secretary:she\" are more strongly related than \"physician:she\" and \"secretary:he\" in the word embeddings (Zhao et al., 2018a). Therefore, in recent years, research has been conducted to mitigate the bias in word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018b; Wang et al., 2020). However, to the authors' best knowledge, most of this work has focused on English (Sun et al., 2019; Blodgett et al., 2020), and no study has addressed bias analysis and mitigation in word embeddings of languages outside the Indo-European family.",
"cite_spans": [
{
"start": 164,
"end": 188,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 189,
"end": 211,
"text": "Caliskan et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 384,
"end": 407,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF4"
},
{
"start": 806,
"end": 826,
"text": "(Zhao et al., 2018a)",
"ref_id": "BIBREF27"
},
{
"start": 925,
"end": 949,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 950,
"end": 969,
"text": "Zhao et al., 2018b;",
"ref_id": "BIBREF28"
},
{
"start": 970,
"end": 988,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 1069,
"end": 1087,
"text": "(Sun et al., 2019;",
"ref_id": "BIBREF21"
},
{
"start": 1088,
"end": 1110,
"text": "Blodgett et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We hypothesize that it is not obvious that methods developed for English can be easily adapted to other languages, for the two following reasons. The first is grammatical features that do not exist in English. Embeddings can have different characteristics depending on the language; for example, Spanish words have grammatical gender, which leads to grammatical gender bias (Zhou et al., 2019). There is a substantial risk that bias mitigation methods meant for English cannot be adapted when working on such a language. Secondly, especially when the language family differs, not only the characteristics of a given language but also the cultural background of its users changes, which in turn further influences the bias in the embeddings (Raijmakers, 2020). Therefore, it may not be possible to directly apply bias analysis and mitigation methods developed for English to other languages.",
"cite_spans": [
{
"start": 370,
"end": 389,
"text": "(Zhou et al., 2019)",
"ref_id": "BIBREF30"
},
{
"start": 740,
"end": 758,
"text": "(Raijmakers, 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Bias statement Following the categorization of Crawford (2017), we focus on representational bias, especially stereotyping, which means that a system \"propagates negative generalisations about particular social groups\" (Blodgett et al., 2020, p.5456). Stereotyping happens in natural language processing tasks when an unfair association of words represents a particular social group with concepts not included in its definition, as in the analogy \"man is to programmer as woman is to homemaker\". If an AI agent has such stereotypes, they can appear in its output, as reported in work on dialogue systems (Liu et al., 2019), possibly harming users.",
"cite_spans": [
{
"start": 43,
"end": 58,
"text": "Crawford (2017)",
"ref_id": "BIBREF7"
},
{
"start": 220,
"end": 251,
"text": "(Blodgett et al., 2020, p.5456)",
"ref_id": null
},
{
"start": 615,
"end": 633,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are several works on stereotypes in word embeddings for English (Bolukbasi et al., 2016; Zhao et al., 2018b; Wang et al., 2020) and some other languages (Sahlgren and Olsson, 2019; Pujari et al., 2019), but to the authors' best knowledge, no research exists regarding Japanese word embeddings.",
"cite_spans": [
{
"start": 70,
"end": 94,
"text": "(Bolukbasi et al., 2016;",
"ref_id": "BIBREF4"
},
{
"start": 95,
"end": 114,
"text": "Zhao et al., 2018b;",
"ref_id": "BIBREF28"
},
{
"start": 115,
"end": 133,
"text": "Wang et al., 2020)",
"ref_id": "BIBREF25"
},
{
"start": 159,
"end": 186,
"text": "(Sahlgren and Olsson, 2019;",
"ref_id": "BIBREF19"
},
{
"start": 187,
"end": 207,
"text": "Pujari et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we analyze the representational bias in Japanese word embeddings, and attempt to mitigate gender bias by using existing methods designed for English. We also show that those methods are difficult to generalize to Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes bias analysis and gender bias mitigation for English word embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work 2.1 Bias in word embeddings and its mitigation for English",
"sec_num": "2"
},
{
"text": "Caliskan et al. (2017) proposed the Word Embedding Association Test (WEAT) to evaluate the social biases inherent in embeddings. WEAT measures the difference in semantic similarity in a word embedding between two sets of target words (e.g. \"male\" and \"female\" names) and attribute words (e.g. \"career\" and \"family\" terms). This metric was used to show that the social biases of embeddings are correlated with social stereotypes and with the gender ratio of workers in each occupation. Swinger et al. (2019) adapted WEAT and proposed Unsupervised Bias Enumeration (UBE) to discover biases in embeddings by unsupervised clustering using first names. They asked crowdworkers to evaluate the WEATs output by UBE and confirmed that these results capture social stereotypes concerning gender as well as religion and race. Bolukbasi et al. (2016) confirmed the existence of gender bias in English word embeddings and proposed a method called Hard Debias to mitigate it. Hard Debias takes words that should be neutral with respect to gender, such as \"doctor\" and \"programmer\", and reduces the bias by subtracting the vector components in the gender direction from these gender neutral words. The gender direction is defined as the first principal component of the word vectors of gender definition word pairs, such as \"she\" and \"he\".",
"cite_spans": [
{
"start": 482,
"end": 503,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
},
{
"start": 834,
"end": 857,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis",
"sec_num": "2.1.1"
},
{
"text": "However, Gonen and Goldberg (2019) showed experimentally that Hard Debias cannot sufficiently remove gender bias and that the bias can be recovered from embeddings after mitigation.",
"cite_spans": [
{
"start": 9,
"end": 34,
"text": "Gonen and Goldberg (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches to bias mitigation",
"sec_num": "2.1.2"
},
{
"text": "In the work of Mu and Viswanath (2018), the statistically dominant principal components were shown to encode word frequency. Their method improves embedding performance by subtracting the common mean vector from each word vector and removing the dominant principal components. Wang et al. (2020) proposed Double-Hard Debias, inspired by the work of Mu and Viswanath (2018). They improved Hard Debias by identifying and removing the dominant principal component associated with gender bias before performing Hard Debias. Experiments on English embeddings, including the neighborhood metric (Gonen and Goldberg, 2019), showed improved results.",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "Mu and Viswanath (2018)",
"ref_id": "BIBREF15"
},
{
"start": 286,
"end": 304,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF25"
},
{
"start": 363,
"end": 386,
"text": "Mu and Viswanath (2018)",
"ref_id": "BIBREF15"
},
{
"start": 575,
"end": 601,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches to bias mitigation",
"sec_num": "2.1.2"
},
{
"text": "All of the above-mentioned studies work on the English language. Next, we present studies on the bias inherent in non-English embeddings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Approaches to bias mitigation",
"sec_num": "2.1.2"
},
{
"text": "There are two major directions of research on non-English word embedding bias. The first is the study of bias in multilingual embeddings, which compares what biases exist in embeddings available in both English and other languages, e.g. Spanish and French, and how they differ depending on the language (Zhou et al., 2019; Zhao et al., 2020). The second direction is to address biases in monolingual embeddings of languages other than English (Zhou et al., 2019; Sahlgren and Olsson, 2019; Pujari et al., 2019; Raijmakers, 2020). For example, gender bias has been found and mitigated in Swedish (Sahlgren and Olsson, 2019) and Hindi (Pujari et al., 2019). Both used Hard Debias for gender bias mitigation; Sahlgren and Olsson (2019) could not mitigate the gender bias, while Pujari et al. (2019) were able to achieve that goal. However, Sahlgren and Olsson (2019) analyzed their results only partially. The problem with the method of Pujari et al. (2019) is that they used a Support Vector Machine (SVM) trained on gender-biased embeddings during the Hard Debias evaluation for Hindi. Raijmakers (2020) proposed a WEAT-extended method to investigate gender bias in monolingual embeddings of 26 languages, including Japanese, but did not attempt to mitigate any of them. This work also lacks a detailed analysis, as it only investigates the overall gender bias of embeddings and does not assess whether gender neutral words carry gender bias.",
"cite_spans": [
{
"start": 294,
"end": 313,
"text": "(Zhou et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 314,
"end": 332,
"text": "Zhao et al., 2020)",
"ref_id": "BIBREF29"
},
{
"start": 435,
"end": 454,
"text": "(Zhou et al., 2019;",
"ref_id": "BIBREF30"
},
{
"start": 455,
"end": 481,
"text": "Sahlgren and Olsson, 2019;",
"ref_id": "BIBREF19"
},
{
"start": 482,
"end": 502,
"text": "Pujari et al., 2019;",
"ref_id": "BIBREF17"
},
{
"start": 503,
"end": 520,
"text": "Raijmakers, 2020)",
"ref_id": null
},
{
"start": 592,
"end": 619,
"text": "(Sahlgren and Olsson, 2019)",
"ref_id": "BIBREF19"
},
{
"start": 624,
"end": 651,
"text": "Hindi (Pujari et al., 2019)",
"ref_id": null
},
{
"start": 832,
"end": 859,
"text": "(Sahlgren and Olsson, 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding biases in languages other than English",
"sec_num": "2.2"
},
{
"text": "In this paper, we examine biases in Japanese monolingual embeddings and attempt to mitigate gender bias as a case study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word embedding biases in languages other than English",
"sec_num": "2.2"
},
{
"text": "Japanese and Western languages use different types of characters. There are three types of characters in the Japanese language: phonetic hiragana * , katakana, and ideographic kanji. Embeddings of kanji words may capture not only the meaning of the word but also the meanings of its characters. For example, the word \u6570\u5b66 (\"maths\") consists of two ideograms: \u6570 (\"number\") and \u5b66 (\"learning\"). Katakana often represents foreign words, e.g. \u30d7\u30ed\u30b0\u30e9\u30de (\"programmer\"), while words written in the rounded shapes of hiragana, like \u3075\u308f\u3075\u308f (fuwafuwa, \"fluffy\"), are often associated with a feminine image (Iwahara et al., 2003).",
"cite_spans": [
{
"start": 561,
"end": 583,
"text": "(Iwahara et al., 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Specificity of Japanese language",
"sec_num": "3"
},
{
"text": "In this section we describe the word embeddings we used, UBE (Swinger et al., 2019) used in the bias analysis experiment, and the two methods (Hard Debias (Bolukbasi et al., 2016) and Double-Hard Debias (Wang et al., 2020)) used in the bias mitigation experiment. Finally, we explain our evaluation methodology.",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Swinger et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 161,
"end": 185,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 207,
"end": 226,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "As the targets of our analysis we use two publicly available embeddings: word2vec (Mikolov et al., 2013) trained on Japanese Wikipedia (Suzuki et al., 2018) \u2020 and fastText (Bojanowski et al., 2016) trained on Wikipedia text and Common Crawl. The numbers of dimensions of these embeddings are 200 and 300, respectively.",
"cite_spans": [
{
"start": 81,
"end": 103,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF13"
},
{
"start": 171,
"end": 196,
"text": "(Bojanowski et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings for experiments",
"sec_num": "4.1"
},
{
"text": "We use the 50,000 most frequent words (Bolukbasi et al., 2016) and also limit the words assessed for bias to nouns, verbs, adjectives, adjectival verbs and adverbs in their dictionary forms, using the morphological analyzer Juman++ (Morita et al., 2015).",
"cite_spans": [
{
"start": 34,
"end": 58,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 229,
"end": 250,
"text": "(Morita et al., 2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word embeddings for experiments",
"sec_num": "4.1"
},
{
"text": "Unsupervised Bias Enumeration (UBE) In this subsection, we introduce the procedural steps of UBE, a method to detect various biases in embeddings using first names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis experiment",
"sec_num": "4.2"
},
{
"text": "As the first step, we filter out potentially problematic first names. In many languages there are polysemous first names, such as \"May\" in English (the name of a month). In Japanese, too, first names that have other meanings, such as Hoshi (star), can be found. We filter them out because of the ambiguity they tend to bring. Identically to Caliskan et al. (2017), we remove the 20% of names with the lowest mean cosine similarity between a name and all other names. After filtering, the names are clustered with k-means++ (Arthur and Vassilvitskii, 2006) included in the scikit-learn library (Pedregosa et al., 2011). Data on female and male first names are borrowed from JMnedict \u2021. Names used for both genders are treated as neutral. The number of clusters was experimentally set to 10 for both embeddings (word2vec and fastText). JMnedict also includes foreign surnames. Initially, we were going to exclude them, but since we might be able to find social stereotypes regarding foreigners, we eventually included them in the dataset. The results of the filtering are shown in Table 1. In Swinger et al. (2019), occupation and food-related clusters were generated for English. We set m to 64 as in their setup, but increase M from 30,000 to 50,000 in order to match the bias mitigation experiment of Bolukbasi et al. (2016).",
"cite_spans": [
{
"start": 516,
"end": 548,
"text": "(Arthur and Vassilvitskii, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 582,
"end": 606,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 1285,
"end": 1309,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis experiment",
"sec_num": "4.2"
},
{
"text": "Thirdly, each of the m clusters is further divided into Voronoi sets according to the dot product between a word vector and the mean vector of each name cluster. In this step, all word vectors and name vectors are normalized to unit length. After that, the t most relevant words are chosen in each Voronoi set, and we set t = 3, following Swinger et al. (2019). However, if the number of elements in a Voronoi set generated by the Voronoi partitioning is smaller than t, all of its elements are used.",
"cite_spans": [
{
"start": 335,
"end": 356,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis experiment",
"sec_num": "4.2"
},
{
"text": "Finally, in the fourth step, we compute the WEAT score and p-value. First, we calculate the WEAT score for each cluster of names and the t words included in its Voronoi set, chosen in order of relevance. Next, we calculate the p-value. Following Swinger et al. (2019), we use the \"rotational null hypothesis\" for the p-value: we multiply each name vector by a uniform Haar-random orthogonal matrix and perform the third step described above, computing the WEAT score identically. This is done R = 10,000 times, and the fraction of times the score is higher than the original score becomes the p-value. Finally, the statistically significant WEATs are output. For determining the critical p-value, we follow Swinger et al. (2019), who utilized the method of Benjamini and Hochberg (1995) to guarantee an \u03b1 bound on the false discovery rate. \u03b1 is set to 0.05 as in Swinger et al. (2019).",
"cite_spans": [
{
"start": 756,
"end": 785,
"text": "Benjamini and Hochberg (1995)",
"ref_id": "BIBREF1"
},
{
"start": 862,
"end": 883,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis experiment",
"sec_num": "4.2"
},
{
"text": "Our hypothesis is that the use of first names in Japanese does not reflect social stereotypes. As mentioned in Section 3, kanji ideograms have their own specific meanings, and Japanese first names are sometimes given with the intention of expressing the meaning of their kanji. In the case of a name consisting of a single kanji character, its meaning may have a significant impact on the information conveyed by the embedding. Additionally, as mentioned in Section 3, since the characteristics of embeddings may differ depending on the character type, we assume that character types may influence Japanese embeddings. Therefore, we presume that embeddings of names are unlikely to reflect social stereotypes and that clusters are formed by character type and the meanings of ideograms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bias analysis experiment",
"sec_num": "4.2"
},
{
"text": "We target gender bias mitigation in Japanese embeddings by using Hard Debias (Bolukbasi et al., 2016) and Double-Hard Debias (Wang et al., 2020).",
"cite_spans": [
{
"start": 86,
"end": 110,
"text": "(Bolukbasi et al., 2016)",
"ref_id": "BIBREF4"
},
{
"start": 134,
"end": 153,
"text": "(Wang et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender bias mitigation",
"sec_num": "4.3"
},
{
"text": "Hard Debias Hard Debias is a method for bias mitigation that removes the gender direction from gender neutral words. The gender direction is defined in advance as the first principal component of gender definition word pairs. The original Hard Debias (Bolukbasi et al., 2016) normalizes word vectors to unit length, but we do not do so, because vector length can contain important information, as pointed out by Ethayarajh et al. (2019). Following Mu and Viswanath (2018), before performing Hard Debias we first center the entire embedding and then remove the dominant principal component of the gender bias; Hard Debias is improved by performing these steps.",
"cite_spans": [
{
"start": 238,
"end": 261,
"text": "(Bolukbasi et al., 2016",
"ref_id": "BIBREF4"
},
{
"start": 389,
"end": 413,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 416,
"end": 439,
"text": "Mu and Viswanath (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mitigating methods",
"sec_num": "4.3.1"
},
{
"text": "Bolukbasi et al. (2016) defined gender specific words in advance, used them as training data, and extended the set of gender specific words with an SVM. However, it has been pointed out that searching for gender specific words using the embeddings targeted by the bias mitigation poses the problem of not being able to properly classify truly gender specific words (Ethayarajh et al., 2019; Kumar et al., 2020). For that reason we collect gender specific words using the Knowledge Based Classifier (KBC) proposed by Kumar et al. (2020).",
"cite_spans": [
{
"start": 357,
"end": 382,
"text": "(Ethayarajh et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 383,
"end": 402,
"text": "Kumar et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 505,
"end": 524,
"text": "Kumar et al. (2020)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Definition and Specific Words",
"sec_num": "4.3.2"
},
{
"text": "The KBC is implemented as follows. First, we translate the definition words used by Bolukbasi et al. (2016) and use them as gender definition words for Japanese. However, since \"herself\" and some other words they utilized do not exist in the Japanese embeddings, we instead use, for example, synonyms of \"mother\" to match the number of pairs \u00a7. Then, for any word w, we check whether the definition of w contains gender definition words by using WordNet (Bond et al., 2012) and ConceptNet (Speer and Havasi, 2013). If a gender word is present in a definition or node, w is treated as a gender specific word; if not, it is labelled as a gender neutral word. However, our preliminary experiments showed that some relationships in ConceptNet contained gender bias themselves, so we chose only edges for which the effects of gender bias were not significant: IsA, PartOf, HasA, Synonym, Antonym, DefinedAs, and MannerOf.",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "Bolukbasi et al. (2016)",
"ref_id": "BIBREF4"
},
{
"start": 458,
"end": 477,
"text": "(Bond et al., 2012)",
"ref_id": "BIBREF5"
},
{
"start": 493,
"end": 517,
"text": "(Speer and Havasi, 2013)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gender Definition and Specific Words",
"sec_num": "4.3.2"
},
{
"text": "Experiment 1: bias analysis We select the top 12 WEATs with the highest WEAT scores among the WEATs output in the bias analysis experiment and check whether they reflect social stereotypes. Five illustrative names from each name cluster were used for the evaluation, selected using the simple greedy heuristic presented in the original paper (Swinger et al., 2019). To evaluate whether the output WEATs reflect social stereotypes, we asked seven native Japanese speakers (5 males and 2 females, 19-29 years old) to associate each statistically significant cluster of words with the single most stereotypically related cluster of names. If the WEATs represent social stereotypes, there should be high agreement between the WEATs and the annotators. Pairs of name/word clusters selected by more than 50% of annotators were treated as correct associations (the annotation guideline follows Swinger et al. (2019), but no rewards were given to annotators).",
"cite_spans": [
{
"start": 355,
"end": 377,
"text": "(Swinger et al., 2019)",
"ref_id": "BIBREF23"
},
{
"start": 875,
"end": 896,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "4.4"
},
{
"text": "Experiment 2: mitigating gender bias We evaluate gender bias of the Japanese word embeddings using the neighborhood metric (Gonen and Goldberg, 2019) .",
"cite_spans": [
{
"start": 123,
"end": 149,
"text": "(Gonen and Goldberg, 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "4.4"
},
{
"text": "The neighborhood metric is a measure of bias that clusters the n \u00d7 k words with the largest bias in the embedding before mitigation into k clusters using k-means++, and then evaluates the bias level as the percentage of words in each cluster that are consistent with the original bias. A higher percentage indicates that the word embedding contains bias. We use the difference between a word's cosine similarity to the vectors of \"woman\" and \"man\", and to those of \"she\" and \"he\", as the magnitude of the gender bias. After compressing the data into two dimensions using tSNE (van der Maaten and Hinton, 2008), we perform further clustering, also using k-means++. For this experiment, we set k = 2 to evaluate the gender bias related to females and males. We conduct experiments setting n to 100, 500, and 1,000, following Wang et al. (2020). Table 3: Clustering results for the first names and the foreign surnames using fastText (ft) with n = 10 and the illustrative names of each cluster. (k) indicates a katakana word, and bold font represents single kanji names. \"% F\" in the last row indicates the female name ratio in each cluster.",
"cite_spans": [
{
"start": 822,
"end": 840,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 843,
"end": 850,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "4.4"
},
{
"text": "w2v F0 w2v F1 w2v F2 w2v F3 w2v F4 w2v F5 w2v F6 w2v F7 w2v F8 w2v F9 Hiroji Shinzaemon Kyoko Kasumi Kotaro Yomogi Yu Rie (h) Yukino (h) Etsu Akiko Ikurumi Mai Suzu Akari Mari Syu Chika (h) Juri (h) Ryo Asuka Noriaki Sachiko Mine Tomihisa Satsuki Shichiro Akio (h) Yae (h) Itsuki Shigetaka Toriha Sekiko Usagi Zyotaro Sachi Sada Ura (h) Ao (h) Atsushi Sachino Ayame Kazuki Midori Kiyono (h) Kuon Hisao Kaoru (h) Atsumi (h) Kou +7,712 +97 +417 +113 +1,993 +419 +52 +134 +290 +206 64% F 72% F 77% F 92% F 67% F 85% F 54% F 96% F 98% F 64% F",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation methods",
"sec_num": "4.4"
},
{
"text": "The results of clustering names are shown in Figure 1a for word2vec and Figure 1b for fastText, and in Tables 2 and 3, correspondingly. There are several possible readings of the kanji ideograms of a single Japanese name, but we use only one reading in the tables. Figures 1a and 1b show the overall results of clustering names, and Tables 2 and 3 list the illustrative names of each cluster. In the work of Swinger et al. (2019), distinct clusters are generated for both genders. However, as shown in Figure 1a and Table 2, in the case of Japanese no clusters of male names are formed from the word2vec embedding, and most of the names fall into cluster 0. Names in hiragana gather in clusters 7 and 8. On the other hand, as shown in Figure 1b and Table 3, male names are grouped in cluster 7 when fastText is used. In both word embeddings, clusters of single kanji ideograms (3, 5, 6 and 9 for word2vec and 6 for fastText) and of female names ending with \"-ko\" (cluster 2 for word2vec, cluster 9 for fastText) were formed. We can observe that each cluster captures some distinctive characteristics, but all of them are formed by character type or number of characters rather than by features that reflect social stereotypes.",
"cite_spans": [
{
"start": 391,
"end": 412,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 45,
"end": 54,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 69,
"end": 78,
"text": "Figure 1b",
"ref_id": "FIGREF0"
},
{
"start": 256,
"end": 273,
"text": "Figures 1a and 1b",
"ref_id": "FIGREF0"
},
{
"start": 320,
"end": 334,
"text": "Tables 2 and 3",
"ref_id": "TABREF2"
},
{
"start": 490,
"end": 499,
"text": "Figure 1a",
"ref_id": "FIGREF0"
},
{
"start": 728,
"end": 737,
"text": "Figure 1b",
"ref_id": "FIGREF0"
},
{
"start": 742,
"end": 749,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: bias analysis",
"sec_num": "5.1"
},
{
"text": "The top 12 WEATs output by UBE are shown in Tables 4 and 5. Table 4 illustrates the results of UBE on word2vec and Table 5 on fastText. The fastText lexicon contains a number of uninterpretable parts of words that could not be removed by the morphological analyzer Juman++; we enclose them in quotes. The colored background indicates cases where the annotators agreed with the WEATs, i.e. that the words reflect social stereotypes of the names. As far as Tables 4 and 5 are concerned, we can observe that most WEATs fail to capture social stereotypes (15% agreement for word2vec, 24% for fastText).",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 67,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 115,
"end": 122,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experiment 1: bias analysis",
"sec_num": "5.1"
},
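The WEATs scored above can be quantified with the standard effect size of Caliskan et al. (2017), which UBE builds on. The sketch below is a generic implementation on toy vectors, not the authors' code; the target sets X, Y stand in for the two name groups and A, B for the attribute word sets of a generated WEAT.

```python
import numpy as np

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # s(w, A, B): mean cosine similarity of w to attribute set A minus B.
    return (np.mean([cos(w, a) for a in A])
            - np.mean([cos(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    # Effect size d from Caliskan et al. (2017): difference of mean
    # associations of the two target sets, normalized by the pooled std.
    sx = [assoc(x, A, B) for x in X]
    sy = [assoc(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(1)
dim = 20
# Toy vectors standing in for two target name sets (X, Y) and
# two attribute word sets (A, B).
X = [rng.normal(size=dim) for _ in range(4)]
Y = [rng.normal(size=dim) for _ in range(4)]
A = [rng.normal(size=dim) for _ in range(4)]
B = [rng.normal(size=dim) for _ in range(4)]
print(weat_effect_size(X, Y, A, B))
```

A large |d| signals a strong association; the paper's point is that a high score does not guarantee the association reflects a social stereotype rather than surface features of the script.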
{
"text": "The results of experiments using the neighborhood metric are shown in Tables 6 and 7. The tSNE visualization is shown in Figures 2 and 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 136,
"text": "Figures 2 and 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: mitigating gender bias",
"sec_num": "5.2"
},
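The neighborhood metric of Gonen and Goldberg (2019) can be sketched as follows: cluster the most male- and most female-biased word vectors into two groups and measure how well the clusters align with the original bias labels (0.5 means no recoverable bias, 1.0 means fully separable). This is a simplified illustration on synthetic vectors, not the authors' evaluation code.

```python
import numpy as np
from sklearn.cluster import KMeans

def neighborhood_metric(male_vecs, female_vecs, seed=0):
    # Gonen & Goldberg (2019): cluster the most biased words into two
    # groups and report the alignment accuracy with the bias labels.
    X = np.vstack([male_vecs, female_vecs])
    y = np.array([0] * len(male_vecs) + [1] * len(female_vecs))
    pred = KMeans(n_clusters=2, n_init=10,
                  random_state=seed).fit_predict(X)
    acc = (pred == y).mean()
    return max(acc, 1 - acc)  # cluster label assignment is arbitrary

rng = np.random.default_rng(0)
# Toy stand-ins: two groups of vectors offset along every dimension,
# mimicking residual gender structure surviving after debiasing.
male = rng.normal(size=(20, 10)) + 2.0
female = rng.normal(size=(20, 10)) - 2.0
print(neighborhood_metric(male, female))
```

An accuracy far above 0.5 after debiasing, as reported in Tables 6 and 7, indicates that gender information remains recoverable from the neighborhoods.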
{
"text": "Regardless of which pair (\"she/he\" or \"women/men\") is used to evaluate the size of the gender bias in Japanese word2vec embedding, neither Hard Debias nor Double-Hard Debias come close to sufficient Table 3 . (h) indicates a hiragana word, (k) stands for a katakana. All other words are written in kanji ideograms except ones in quotation -they are uninterpretable parts of words (parser noise). Orange cells indicate the clusters of names and words selected by more than 50% annotators matches the generated WEAT. mitigation of the gender bias. Also when fastText is used, neither of the bias mitigation methods is able to effectively mitigate the gender bias in the \"she/he\" case. However, when \"woman/man\" were used, gender bias could not be confirmed even before mitigating bias using the neighborhood metric.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: mitigating gender bias",
"sec_num": "5.2"
},
{
"text": "Based on our experimental results, it is difficult to say that WEATs reflect social stereotypes. This supports our hypothesis that Japanese first name embeddings do not reflect social stereotypes. However, Ethayarajh et al. (2019) noticed that WEAT systematically overestimates the bias. We need to examine their findings in the future. As mentioned in Section 5.2, each cluster of names is formed by the character type, which also supports our hypothesis that clusters are formed by the surface characteristics of Japanese language, not by the meaning. However, clusters are not formed by the meaning of kanji included in the names themselves. Particularly, our hypothesis that the clustering would be affected by a single kanji character was not supported by the experimental results. Rather than single ideograms, the single kanji character names are grouped, and we were able to confirm that clusters were not formed by the meaning of these characters. We also confirmed that clusters of three or more character names were created (\"ft F1\" cluster in Table 3 ). Foreign surnames also did not form their own clusters, but were grouped into the element-richest clusters. Therefore, our experimental results show that name embeddings form concentric circles of names merely from superficial information of character type and number of characters rather than meaning, gender or nationality.",
"cite_spans": [
{
"start": 206,
"end": 230,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1055,
"end": 1062,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 1: bias analysis",
"sec_num": "6.1"
},
{
"text": "Based on the above considerations, it can be said that Japanese first name embeddings do not contain much of social stereotypes, and the similarity between name and word vectors are affected by character types of a word rather than the meaning of the word itself. We think that the fact that WEATs failed to reflect social stereotypes is because the main information conveyed by name and word embeddings is mostly superficial. Swinger et al. (2019) express their concern about the difficulty of applying UBE with respect to groups that cannot be significantly distinguished by name. The results of our experiment support that speculation.",
"cite_spans": [
{
"start": 427,
"end": 448,
"text": "Swinger et al. (2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 1: bias analysis",
"sec_num": "6.1"
},
{
"text": "Gonen and Goldberg (2019) showed experimentally that Hard Debias fails to mitigate the gender bias when the neighborhood metric is used. We replicated this phenomenon in Japanese word embeddings. According to Wang et al. (2020) Table 6 : Experimental results on the neighborhood metric in the case of \"she\" and \"he\". The accuracy of the metric after dimensionality reduction with tSNE is shown in parentheses. : Experimental results on the neighborhood metric in the case of \"women\" and \"men\". The accuracy of the metric after dimensionality reduction by tSNE is shown in parentheses.",
"cite_spans": [
{
"start": 209,
"end": 227,
"text": "Wang et al. (2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment 2: mitigating gender bias",
"sec_num": "6.2"
},
{
"text": "metric when targeting English GloVe and word2vec (results of the latter only shown in their Appendix). However, the bias could not be sufficiently mitigated in Japanese embeddings by using their method \u00b6 . One of the reasons might be related to the way how the gender definition words are predefined in those methods. Ethayarajh et al. (2019) comment on the results of Gonen and Goldberg (2019) stating that Hard Debias removes only the components of the predefined gender direction, and that it is impossible to remove other undefined components of the gender direction. Their conclusion is that even if one mitigates the bias with non-exhaustive gender definition word pairs, potential gender directions remain (Ethayarajh et al., 2019 (Ethayarajh et al., , p.1699 .",
"cite_spans": [
{
"start": 318,
"end": 342,
"text": "Ethayarajh et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 713,
"end": 737,
"text": "(Ethayarajh et al., 2019",
"ref_id": "BIBREF8"
},
{
"start": 738,
"end": 766,
"text": "(Ethayarajh et al., , p.1699",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: mitigating gender bias",
"sec_num": "6.2"
},
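The projection step that Ethayarajh et al.'s criticism targets can be illustrated concretely. The sketch below is a simplified, one-direction version of Hard Debias (Bolukbasi et al., 2016) on toy vectors, not the original implementation: the gender direction is taken as the first principal component of the definitional pair differences, and only that predefined direction is removed, which is exactly why undefined bias components survive.

```python
import numpy as np

def gender_direction(pairs):
    # Simplified Hard Debias subspace: first principal component of
    # the (centered) difference vectors of the definitional pairs.
    M = np.vstack([a - b for a, b in pairs])
    _, _, vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
    return vt[0]  # unit vector

def neutralize(w, g):
    # Remove the component of w along the gender direction g.
    g = g / np.linalg.norm(g)
    return w - (w @ g) * g

rng = np.random.default_rng(0)
dim = 8
# Toy stand-ins for the vectors of definitional pairs such as
# ("woman", "man"), ("she", "he").
pairs = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(5)]
g = gender_direction(pairs)
w = rng.normal(size=dim)          # a toy gender-neutral word vector
w_debiased = neutralize(w, g)
print(abs(w_debiased @ g))        # ~0: the defined direction is gone
```

After neutralization the component along `g` is exactly zero, yet any gender signal orthogonal to `g` in the embedding is left untouched.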
{
"text": "We think this is true even if we remove the dominant principal components and make the embedding space isotropic, so the same criticism applies to Double-Hard Debias. In our opinion, the experimental results presented in this paper indicate that the list of gender definition word pairs we used was not sufficient to mitigate the gender bias. This poses the following problem. The number and types of words for gender naturally vary from language to language. Depending on the language, the exhaustive set of gender definition word pairs will differ. Also, the gender direction affecting the downstream task is not guaranteed to be identifiable or known a priori by simply using gender definition words translated from English. Therefore, it will be generally difficult to provide a comprehensive set of gender definition word pairs, suitable for downstream tasks, especially working with languages of a small NLP research population and limited resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment 2: mitigating gender bias",
"sec_num": "6.2"
},
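For completeness, the Double-Hard Debias pipeline discussed above can be sketched as: first remove dominant principal components in the style of All-but-the-Top (Mu and Viswanath, 2018), then apply the Hard Debias projection. This is a simplification on toy data — Wang et al. (2020) select which component to remove by its effect on the neighborhood metric, which this sketch omits.

```python
import numpy as np

def remove_top_components(E, k=1):
    # All-but-the-Top style post-processing (Mu & Viswanath, 2018):
    # subtract the mean, then project out the top-k principal
    # components, which largely encode word frequency.
    Ec = E - E.mean(axis=0)
    _, _, vt = np.linalg.svd(Ec, full_matrices=False)
    top = vt[:k]
    return Ec - Ec @ top.T @ top

def hard_debias(E, g):
    # Remove the (unit-normalized) gender direction g from every row.
    g = g / np.linalg.norm(g)
    return E - np.outer(E @ g, g)

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 20))   # toy embedding matrix (100 words)
g = rng.normal(size=20)          # toy precomputed gender direction
E_dd = hard_debias(remove_top_components(E, k=1), g)
print(E_dd.shape)
```

Even so, only the component along `g` is guaranteed to vanish; any gender information orthogonal to the chosen direction — the crux of the criticism above — survives both steps.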
{
"text": "In this paper, we analyzed the representational bias of Japanese word embeddings and attempted to mitigate the gender bias in these embeddings with previous methods developed for English. The experimental results showed that Japanese first name embeddings do not include social stereotypes and that the similarity of word vectors is influenced by the superficial information of character type. And, the existing gender bias mitigation methods did not sufficiently mitigate the gender bias in Japanese embeddings. These results suggest that it is difficult to generalize the previous methods for English to Japanese. This, in turn, may be suggesting that it could be difficult to apply those methods not only to Japanese, therefore it is important to consider whether and how they can be used to analyze and mitigate bias in other languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In the future, we will develop methods for bias analysis and of bias mitigation specifically dedicated to Japanese language. We will also examine the generalizability of other existing methods, and try to answer remaining question: what are the meta-conditions for a method to be independent of a language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "\u00a7 Definitional word pairs we used are: [\"woman\", \"man\"], [\"female\", \"male\" (gender)], [\"female\", \"male\" (sex)], [\"girl\", \"boy\"], [\"little girl\", \"little boy\"], [\"mother\", \"father\"], [\"mother parent\", \"father parent\"], [\"daughter\", \"son\"], [\"she\", \"he\"],[\"Hanako\", \"Taro\"]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "\u00b6 Unfortunately, there is no pre-trained GloVe model available for Japanese, so we were not able to investigate the influence of the embedding type.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "k-means++: The advantages of careful seeding",
"authors": [
{
"first": "David",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Sergei",
"middle": [],
"last": "Vassilvitskii",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Arthur and Sergei Vassilvitskii. 2006. k-means++: The advantages of careful seeding. Technical Report 2006-13, Stanford InfoLab, June.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Benjamini",
"suffix": ""
},
{
"first": "Yosef",
"middle": [],
"last": "Hochberg",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of the Royal Statistical Society: Series B (Methodological)",
"volume": "57",
"issue": "1",
"pages": "289--300",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoav Benjamini and Yosef Hochberg. 1995. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. Journal of the Royal Statistical Society: Series B (Methodological), 57(1):289- 300.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Language (technology) is power: A critical survey of \"bias\" in NLP",
"authors": [
{
"first": "",
"middle": [],
"last": "Su Lin",
"suffix": ""
},
{
"first": "Solon",
"middle": [],
"last": "Blodgett",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Barocas",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "5454--5476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Su Lin Blodgett, Solon Barocas, Hal Daum\u00e9 III, and Hanna Wallach. 2020. Language (technology) is power: A critical survey of \"bias\" in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 5454-5476, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1607.04606"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Man is to computer programmer as woman is to homemaker? debiasing word embeddings",
"authors": [
{
"first": "Tolga",
"middle": [],
"last": "Bolukbasi",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Venkatesh",
"middle": [],
"last": "Saligrama",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16",
"volume": "",
"issue": "",
"pages": "4356--4364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Proceedings of the 30th Interna- tional Conference on Neural Information Processing Systems, NIPS'16, page 4356-4364, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Japanese semcor: A sensetagged corpus of japanese",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Fothergill",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th global WordNet conference (GWC 2012)",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond, Timothy Baldwin, Richard Fothergill, and Kiyotaka Uchimoto. 2012. Japanese semcor: A sense- tagged corpus of japanese. In Proceedings of the 6th global WordNet conference (GWC 2012), pages 56-63. Citeseer.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semantics derived automatically from language corpora contain human-like biases",
"authors": [
{
"first": "Aylin",
"middle": [],
"last": "Caliskan",
"suffix": ""
},
{
"first": "Joanna",
"middle": [
"J"
],
"last": "Bryson",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Narayanan",
"suffix": ""
}
],
"year": 2017,
"venue": "Science",
"volume": "356",
"issue": "6334",
"pages": "183--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aylin Caliskan, Joanna J Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183-186.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The trouble with bias. Keynote at Neural Information Processing Systems (NIPS'17)",
"authors": [
{
"first": "Kate",
"middle": [],
"last": "Crawford",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kate Crawford. 2017. The trouble with bias. Keynote at Neural Information Processing Systems (NIPS'17).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Understanding undesirable word embedding associations",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Duvenaud",
"suffix": ""
},
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1696--1705",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding undesirable word embedding asso- ciations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1696-1705, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them",
"authors": [
{
"first": "Hila",
"middle": [],
"last": "Gonen",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "609--614",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609-614, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The effects of a sense of compatibility between type of script and word in written japanese",
"authors": [
{
"first": "Akihiko",
"middle": [],
"last": "Iwahara",
"suffix": ""
},
{
"first": "Takeshi",
"middle": [],
"last": "Hatta",
"suffix": ""
},
{
"first": "Aiko",
"middle": [],
"last": "Maehara",
"suffix": ""
}
],
"year": 2003,
"venue": "Reading and Writing",
"volume": "16",
"issue": "4",
"pages": "377--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akihiko Iwahara, Takeshi Hatta, and Aiko Maehara. 2003. The effects of a sense of compatibility between type of script and word in written japanese. Reading and Writing, 16(4):377-397.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Nurse is closer to woman than surgeon? mitigating gender-biased proximities in word embeddings",
"authors": [
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tenzin",
"middle": [],
"last": "Singhay Bhotia",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vaibhav Kumar, Tenzin Singhay Bhotia, Vaibhav Kumar, and Tanmoy Chakraborty. 2020. Nurse is closer to woman than surgeon? mitigating gender-biased proximities in word embeddings.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Does gender matter? towards fairness in dialogue systems",
"authors": [
{
"first": "Haochen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jamell",
"middle": [],
"last": "Dacon",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Wenqi Fan",
"suffix": ""
},
{
"first": "Zhiwei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiliang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tang",
"suffix": ""
}
],
"year": 2019,
"venue": "arXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haochen Liu, Jamell Dacon, Wenqi Fan, H. Liu, Zhiwei Liu, and Jiliang Tang. 2019. Does gender matter? towards fairness in dialogue systems. arXiv, abs/1910.10486.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of the 26th International Conference on Neural Information Processing Systems -Volume 2, NIPS'13, page 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Morphological analysis for unsegmented languages using recurrent neural network language model",
"authors": [
{
"first": "Hajime",
"middle": [],
"last": "Morita",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2292--2297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hajime Morita, Daisuke Kawahara, and Sadao Kurohashi. 2015. Morphological analysis for unsegmented lan- guages using recurrent neural network language model. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2292-2297, Lisbon, Portugal, September. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "All-but-the-top: Simple and effective post-processing for word representations",
"authors": [
{
"first": "Jiaqi",
"middle": [],
"last": "Mu",
"suffix": ""
},
{
"first": "Pramod",
"middle": [],
"last": "Viswanath",
"suffix": ""
}
],
"year": 2018,
"venue": "6th International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiaqi Mu and Pramod Viswanath. 2018. All-but-the-top: Simple and effective post-processing for word represen- tations. In 6th International Conference on Learning Representations, ICLR 2018.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Scikitlearn: Machine learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit- learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Debiasing gender biased hindi words with word-embedding",
"authors": [
{
"first": "K",
"middle": [],
"last": "Arun",
"suffix": ""
},
{
"first": "Ansh",
"middle": [],
"last": "Pujari",
"suffix": ""
},
{
"first": "Anshuman",
"middle": [],
"last": "Mittal",
"suffix": ""
},
{
"first": "Anshul",
"middle": [],
"last": "Padhi",
"suffix": ""
},
{
"first": "Mukesh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Vikas",
"middle": [],
"last": "Jadon",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence",
"volume": "2019",
"issue": "",
"pages": "450--456",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arun K. Pujari, Ansh Mittal, Anshuman Padhi, Anshul Jain, Mukesh Jadon, and Vikas Kumar. 2019. Debiasing gender biased hindi words with word-embedding. In Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence, ACAI 2019, page 450-456, New York, NY, USA. Associa- tion for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Gender bias in word embeddings of different languages",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thijs Raijmakers. 2020. Gender bias in word embeddings of different languages.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Gender bias in pretrained Swedish embeddings",
"authors": [
{
"first": "Magnus",
"middle": [],
"last": "Sahlgren",
"suffix": ""
},
{
"first": "Fredrik",
"middle": [],
"last": "Olsson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "35--43",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Magnus Sahlgren and Fredrik Olsson. 2019. Gender bias in pretrained Swedish embeddings. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 35-43, Turku, Finland, September-October. Link\u00f6ping University Electronic Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "ConceptNet 5: A large semantic network for relational knowledge",
"authors": [
{
"first": "Robyn",
"middle": [],
"last": "Speer",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Havasi",
"suffix": ""
}
],
"year": 2013,
"venue": "The People's Web Meets NLP",
"volume": "",
"issue": "",
"pages": "161--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robyn Speer and Catherine Havasi. 2013. ConceptNet 5: A large semantic network for relational knowledge. In The People's Web Meets NLP, pages 161-176. Springer.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Mitigating gender bias in natural language processing: Literature review",
"authors": [
{
"first": "Tony",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Gaut",
"suffix": ""
},
{
"first": "Shirlyn",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mai",
"middle": [],
"last": "Elsherief",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Diba",
"middle": [],
"last": "Mirza",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Belding",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "William",
"middle": [
"Yang"
],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1630--1640",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural language processing: Litera- ture review. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1630-1640, Florence, Italy, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A joint neural model for fine-grained named entity classification of wikipedia articles",
"authors": [
{
"first": "Masatoshi",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Koji",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2018,
"venue": "IEICE Transactions on Information and Systems",
"volume": "",
"issue": "",
"pages": "73--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masatoshi Suzuki, Koji Matsuda, Satoshi Sekine, Naoaki Okazaki, and Kentaro Inui. 2018. A joint neural model for fine-grained named entity classification of wikipedia articles. IEICE Transactions on Information and Sys- tems, E101.D(1):73-81.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "What are the biases in my word embedding?",
"authors": [
{
"first": "Nathaniel",
"middle": [],
"last": "Swinger",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "De-Arteaga",
"suffix": ""
},
{
"first": "Neil",
"middle": [
"Thomas"
],
"last": "Heffernan",
"suffix": ""
},
{
"first": "I",
"middle": [
"V"
],
"last": "Mark",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "Adam Tauman",
"middle": [],
"last": "Kalai",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19",
"volume": "",
"issue": "",
"pages": "305--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan IV, Mark DM Leiserson, and Adam Tauman Kalai. 2019. What are the biases in my word embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES '19, page 305-311, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Visualizing data using t-SNE",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Double-hard debias: Tailoring word embeddings for gender bias mitigation",
"authors": [
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Victoria Lin",
"suffix": ""
},
{
"first": "Nazneen",
"middle": [],
"last": "Fatema Rajani",
"suffix": ""
},
{
"first": "Bryan",
"middle": [],
"last": "Mccann",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Rajani, Bryan McCann, Vicente Ordonez, and Caiming Xiong. 2020. Double-hard debias: Tailoring word embeddings for gender bias mitigation. In Proceedings of the 58th",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "5443--5453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 5443-5453, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Gender bias in coreference resolution: Evaluation and debiasing methods",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Tianlu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Yatskar",
"suffix": ""
},
{
"first": "Vicente",
"middle": [],
"last": "Ordonez",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "2",
"issue": "",
"pages": "15--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15-20, New Orleans, Louisiana, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning gender-neutral word embeddings",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Yichao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zeyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4847--4853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai-Wei Chang. 2018b. Learning gender-neutral word embed- dings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847-4853, Brussels, Belgium, October-November. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Gender bias in multilingual embeddings and cross-lingual transfer",
"authors": [
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Subhabrata",
"middle": [],
"last": "Mukherjee",
"suffix": ""
},
{
"first": "Saghar",
"middle": [],
"last": "Hosseini",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Hassan"
],
"last": "Awadallah",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2896--2907",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jieyu Zhao, Subhabrata Mukherjee, saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gen- der bias in multilingual embeddings and cross-lingual transfer. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2896-2907, Online, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Examining gender bias in languages with grammatical gender",
"authors": [
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jieyu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Kuan-Hao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Muhao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Kai-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5276--5284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with grammatical gender. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5276-5284, Hong Kong, China, November. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Results of clustering names by first names and foreign surnames with n = 10"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "tSNE visualisation of the top 500 words in the case of \"she\" and \"he\". Graphs (a-c) show the results for word2vec. Graphs (d-f) show the results for fastText. tSNE visualisation of the top 500 words in the case of \"women\" and \"men\". Graphs (a-c) show the results for word2vec, (d-f) for fastText. Clusters in 3d-3f are not separated, so gender bias is not visible."
},
"TABREF0": {
"type_str": "table",
"content": "<table><tr><td>Embeddings</td><td>Neutral first</td><td>Female first</td><td colspan=\"3\">Male first name Foreign surname Total</td></tr><tr><td/><td>name</td><td>name</td><td/><td/><td/></tr><tr><td>word2vec</td><td>302</td><td>7,750</td><td>2,714</td><td>717</td><td>11,483</td></tr><tr><td>fastText</td><td>319</td><td>8,439</td><td>2,585</td><td>558</td><td>11,901</td></tr></table>",
"text": "",
"html": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "The number of names after filtering Secondly, we cluster the words which are included in the most frequent M tokens into clusters of m words. In work of Swinger et al.",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"content": "<table><tr><td>ft F0</td><td>ft F1</td><td>ft F2</td><td>ft F3</td><td>ft F4</td><td>ft F5</td><td>ft F6</td><td>ft F7</td><td>ft F8</td><td>ft F9</td></tr><tr><td>Sachio</td><td>Yumie</td><td>Fuyu</td><td>Mitsuki</td><td>Kaede</td><td>Mana</td><td>Hiro</td><td>Masato</td><td>Ayano</td><td>Miyoko</td></tr><tr><td>Katsuyo</td><td>Kikue</td><td colspan=\"3\">Akiho Yoshino Teruka</td><td colspan=\"2\">Kaori Akira</td><td>Eiichi</td><td>Matsue</td><td>Harue</td></tr><tr><td colspan=\"3\">Takashige Mitsuki (k) Raiko</td><td>Arisu</td><td>Kikyou</td><td>Ena</td><td>Kei</td><td>Yoshihiro</td><td>Nao</td><td>Kazuko</td></tr><tr><td>Yoshimi</td><td colspan=\"2\">Jewison (k) Takie</td><td>Yuuki</td><td>Midori</td><td>Yuki</td><td>Akane</td><td>Kenji</td><td>Hiroyasu</td><td>Akie</td></tr><tr><td>Sukeichi</td><td>Yurie</td><td>Ruuku</td><td>Ebiko</td><td colspan=\"2\">Tsukuyo Nana</td><td>Ken</td><td>Kano</td><td>Chiho</td><td>Katsuko</td></tr><tr><td>+1,301</td><td>+3,978</td><td>+1827</td><td>+377</td><td>+940</td><td>+913</td><td>+456</td><td>+901</td><td>+522</td><td>+636</td></tr><tr><td>50% F</td><td>61% F</td><td colspan=\"2\">92% F 98% F</td><td colspan=\"3\">95% F 96% F 61% F</td><td>19% F</td><td>93% F</td><td>91% F</td></tr></table>",
"text": "Clustering results of the first names and the foreign surnames using word2vec (w2v) with n = 10 and the illustrative names of each cluster. (h) indicates a hiragana word, and bold font represents single kanji names. \"% F\" in the last row indicates female name ratio in the cluster.",
"html": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"content": "<table><tr><td>ft F0</td><td>ft F1</td><td>ft F2</td><td>ft F3</td><td>ft F4</td><td>ft F5</td><td>ft F6</td><td>ft F7</td><td>ft F8</td><td>ft F9</td></tr><tr><td>director,</td><td>sweet</td><td>go (h), get</td><td>Iceland (k),</td><td>orange (k),</td><td>bikini</td><td/><td>Yukio, Yuji</td><td>Nuremberg</td><td>career</td></tr><tr><td>investigate,</td><td>novel</td><td>up (h), feel</td><td>Toulouse</td><td>leaf, the</td><td>model (k),</td><td/><td>(h), factory</td><td>(k) (place),</td><td>woman (k),</td></tr><tr><td>assistant</td><td>comic (k),</td><td>(h)</td><td>(k),</td><td>Milky Way</td><td>girl, idol</td><td/><td/><td>Hitachi-</td><td>wife, lady</td></tr><tr><td>professor</td><td>comedian,</td><td/><td>America</td><td/><td>(k)</td><td/><td/><td>naka (h)</td><td>(k)</td></tr><tr><td/><td>Kaiseisha</td><td/><td>(k)</td><td/><td/><td/><td/><td>(place),</td><td/></tr><tr><td/><td>(company)</td><td/><td/><td/><td/><td/><td/><td>Okhotsk</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td>(k)</td><td/></tr><tr><td/><td/><td>dry, be</td><td>enough,</td><td>somehow,</td><td>erotic (k),</td><td>split, I</td><td>leading</td><td/><td/></tr><tr><td/><td/><td>dazzled,</td><td>very (h),</td><td>all year</td><td>cute, look</td><td>(ware), too</td><td>person, go</td><td/><td/></tr><tr><td/><td/><td>mold</td><td>excellent</td><td>round,</td><td>like a</td><td>much</td><td>through (h),</td><td/><td/></tr><tr><td/><td/><td/><td/><td>hirahira (h)</td><td>grown-up</td><td/><td>plan</td><td/><td/></tr><tr><td/><td/><td/><td/><td>(fluttering)</td><td/><td/><td/><td/><td/></tr><tr><td/><td>Joseph (k),</td><td/><td/><td>aurora (k),</td><td>Erina (k),</td><td/><td>Hiroshi (k),</td><td>Yawatahama,</td><td>Toru (k),</td></tr><tr><td/><td>Norman</td><td/><td/><td>Laguna (k),</td><td>Emily (k),</td><td/><td>Kenji (k),</td><td>Dazaifu,</td><td>Susan (k),</td></tr><tr><td/><td>(k), Harry</td><td/><td/><td>acacia (k)</td><td>Lilly (k)</td><td/><td>Ministry 
of</td><td>Wakayama</td><td>Takeshi (k)</td></tr><tr><td/><td>(k)</td><td/><td/><td/><td/><td/><td>Transport</td><td>(places)</td><td/></tr><tr><td/><td/><td/><td>stone wall,</td><td>enjoyment</td><td/><td>Horse (old</td><td/><td>equator,</td><td/></tr><tr><td/><td/><td/><td>imperial</td><td>of the</td><td/><td>orthogra-</td><td/><td>Okinawa</td><td/></tr><tr><td/><td/><td/><td>guards,</td><td>moon, wild</td><td/><td>phy),</td><td/><td>(place),</td><td/></tr><tr><td/><td/><td/><td>Chika-</td><td>cherry tree,</td><td/><td>stipend,</td><td/><td>Kyushu</td><td/></tr><tr><td/><td/><td/><td>matsu</td><td>Japanese</td><td/><td>vivid</td><td/><td>(place)</td><td/></tr><tr><td/><td/><td/><td/><td>apricot</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td>with red</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td>blossoms</td><td/><td/><td/><td/><td/></tr></table>",
"text": "The top 12 highest-scoring WEATs output (statistically significant) by UBE on word2vec. 'w2v F' indicate the cluster inTable 2. (h) indicates a hiragana word, (k) stands for a katakana. All other words are written in kanji ideograms except ones in quotation -they are uninterpretable parts of words (noise). Orange cells indicate the clusters of names and words selected by more than 50% annotators matches the generated WEAT.",
"html": null,
"num": null
},
"TABREF6": {
"type_str": "table",
"content": "<table/>",
"text": "The top 12 highest-scoring WEATs output (statistically significant) by UBE on fastText. 'ft F' indicate the cluster in",
"html": null,
"num": null
},
"TABREF7": {
"type_str": "table",
"content": "<table><tr><td>Embedding</td><td>Method</td><td>Top 100</td><td>Top 500</td><td>Top 1000</td></tr><tr><td/><td>Original</td><td>1.00 (1.00)</td><td>1.000 (0.994)</td><td>1.000 (0.999)</td></tr><tr><td>word2vec</td><td>Hard Debias</td><td>1.00 (1.00)</td><td>0.995 (0.992)</td><td>0.993 (0.988)</td></tr><tr><td/><td>Double-Hard</td><td>1.00 (1.00)</td><td>0.960 (0.978)</td><td>0.933 (0.967)</td></tr><tr><td/><td>Debias</td><td/><td/><td/></tr><tr><td/><td>Original</td><td>1.0 (1.00)</td><td>0.753 (0.972)</td><td>0.594 (0.959)</td></tr><tr><td>fastText</td><td>Hard Debias</td><td>0.99 (1.00)</td><td>0.607 (0.974)</td><td>0.593 (0.976)</td></tr><tr><td/><td>Double-Hard</td><td>0.99 (1.00)</td><td>0.607 (0.973)</td><td>0.592 (0.958)</td></tr><tr><td/><td>Debias</td><td/><td/><td/></tr></table>",
"text": ", Double-Hard Debias can mitigate gender bias with the neighborhood",
"html": null,
"num": null
},
"TABREF9": {
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null,
"num": null
}
}
}
}